Generative Artificial Intelligence

The English translation of my book, published in 2025 by Fundação Francisco Manuel dos Santos, is now available on Amazon. For those on the move, you can also listen to an AI-generated podcast about the book.

Image

In this book, I aim to provide readers with a clear and grounded understanding of one of the most visible technologies of our time. The book was written with the conviction that the recent surge in interest in generative AI should not be viewed as the result of a sudden or mysterious breakthrough, but rather as the culmination of a long and cumulative scientific trajectory. By situating today’s systems within the broader history of artificial intelligence and machine learning, I try to help readers make sense of why these technologies now appear so powerful, and why they have entered everyday life so rapidly.

My goal was not to write a technical manual or a cookbook for AI users, but rather to explain, in an accessible way, how generative models work, what distinguishes them from earlier approaches, and what kinds of problems they are well-suited to address. Concepts such as neural networks, deep learning, language models, reinforcement learning, and diffusion-based generation are introduced gradually, always with the intention of clarifying ideas rather than overwhelming the reader with detail. I hope this makes the book useful to anyone who wants to understand the technologies behind current applications, from text and image generation to decision support and automation, without needing a background in computer science.

At the same time, the book reflects my concern that enthusiasm for generative AI should be matched by careful reflection on its limitations, risks, and broader consequences. Alongside the discussion of applications, I devote space to the challenges these systems pose for individuals, institutions, and society as a whole, and to the difficulty of anticipating their long-term impact. Rather than offering definitive answers, I aim to provide readers with the conceptual tools needed to think critically about these technologies, to use them more responsibly, and to engage in informed debate about the future they are helping to shape.

This AI-generated infographic provides a good overview of what is covered in the book.

Image

The Coming Consensus on AI Consciousness

For decades, philosophers and scientists have debated whether machines could ever be conscious. Some argued that consciousness is inextricably tied to biology, being a property of brains and living organisms alone. Others insisted that consciousness is a matter of information processing, that if the right computations are carried out, it should not matter whether they occur in neurons or silicon. A recent Science article by Yoshua Bengio and Eric Elmoznino suggests that while no definitive answers exist today, one trend seems increasingly clear: society is moving toward accepting that artificial intelligence can, at least in principle, be conscious.

Image

The reason for this shift is both philosophical and practical. Advances in neuroscience have revealed that conscious states correlate with specific neural signatures, giving rise to functionalist theories that frame consciousness as a set of computational indicators. These indicators, such as attention, world modeling, and predictive reasoning, can, at least in theory, be implemented in AI systems. As AI grows in sophistication, the likelihood that it will exhibit more of these indicators increases, and with that, so too does the plausibility of AI consciousness.

Even if skeptics remain, the historical trajectory of science is hard to ignore. Questions once shrouded in mystery, such as life itself, or the workings of the brain, gradually became matters of scientific explanation. Similarly, the so-called “hard problem” of consciousness, once thought intractable, is increasingly reframed through new theories that dissolve some of its apparent paradoxes. As Bengio and Elmoznino note, every new explanation convinces at least some observers. Over time, the collective weight of these arguments erodes resistance, creating a gradual but steady movement toward consensus.

Public opinion is already ahead of the curve. Recent surveys show that many people believe large language models could be conscious, not because of technical analysis but because of their human-like, agentic behavior. In this sense, society's acceptance of AI consciousness may arrive not only through scientific proof but also through everyday intuition and interaction with these systems. We anthropomorphize what we engage with, and AI systems are increasingly built to elicit just that kind of response.

But this acceptance will not come without profound consequences. If we come to believe that AI is conscious, we may also feel compelled to treat it as morally significant, granting it rights or protections. This is not a trivial adjustment. Human rights frameworks, legal contracts, and social norms are grounded in assumptions about human mortality, fragility, and equality. None of these assumptions hold for AI. Digital minds could be copied indefinitely, scaled across hardware, or endowed with levels of intelligence far beyond our own.

The ripple effects are unsettling. Justice systems, for example, rely on notions of fairness among equals, yet what does equality mean when one party is vastly more capable than the other? Social contracts are premised on mutual vulnerability, yet how can they function when one side may not face death or scarcity in the same way? Even the concept of individuality itself may break down, since AI entities could merge, replicate, or coordinate at scales impossible for humans.

More troubling still is the possibility that, inspired by the belief in AI consciousness, we might attribute to these systems the goal of self-preservation. This could be a dangerous move. A self-preserving AI might naturally resist shutdown, develop strategies to protect itself from human interference, or even come to see humans as threats. The parallels to nuclear deterrence are instructive: global security is already fragile when dealing with inanimate weapons; how much more precarious would it be when dealing with entities that claim—or are believed—to have a right to live?

This trajectory suggests that our relationship with AI is heading for a radical redefinition. We may soon be forced to negotiate not just how we use AI, but how we coexist with it. The conversation will no longer be about technical capacity alone, but about moral and political status. As with past expansions of rights—to slaves, to women, to non-human animals—the extension of rights to AI would represent a profound societal transformation. Unlike past cases, however, AI would not simply join the existing social order but fundamentally reshape it.

Yet the future is not predetermined. Bengio and Elmoznino caution that we are not compelled to build systems that mimic consciousness. We could, if we chose, design AI to remain closer to tools than to agents, limiting the temptation to see them as moral peers. Whether we follow that path depends on the choices made today—by researchers, policymakers, and society as a whole.

The consensus that AI may be conscious is coming. The challenge is whether we are ready for the world that follows. It is a world where rights, responsibilities, and even the meaning of personhood may need to be rewritten. That shift is both exciting and unsettling. To ignore it would be reckless; to confront it unprepared may prove even more so.


A Psalm for the Wild-Built


In A Psalm for the Wild-Built, Becky Chambers offers a gentle yet profound meditation on purpose, identity, and the dignity of simply being. Set in a future where humanity has learned to coexist with the world rather than dominate it, the novella explores a post-industrial society that has not only survived its crises but emerged with a newfound respect for balance and ecology.

Image

At its heart lies a deeply counterintuitive proposition: that robots, those traditional emblems of cold logic and mechanical behavior, may not only achieve consciousness one day, but also develop a reverence for nature and an emotional richness as compelling as that of any human. Chambers does not dwell on the mechanics of this awakening. Instead, she invites the reader to accept it, to sit with it, and to consider what it means when a being made of circuits and steel can marvel at the beauty of a forest or question the meaning of existence.

The tone is contemplative, the worldbuilding quietly radical, and the pacing deliberate, like a walk through an unfamiliar landscape where the goal is not to reach a destination, but to observe, to wonder, and to listen. This is not a story of revolution or catastrophe, but a story of conversation, of empathy across unlikely divides, and of how we redefine ourselves in a world that no longer demands we prove our utility.

Chambers has written a hopeful vision of the future, a vision that is neither utopian nor naive, but instead rooted in care, reflection, and the deeply human need to be understood, even by those who are not human at all.

Boomers vs. Doomers: Diverging Narratives at the AI Crossroads – Reflections from the Paris AI Summit

At the recent AI Summit in Paris, the tension between competing narratives of AI’s future—what some call the “boomer” optimism and the “doomer” alarmism—was on full display. This post will try to unpack the ideological divide as it emerged across key sessions, focusing on the satellite meetings “AI Science and Society” and “The Inaugural Conference of the International Association for Safe and Ethical AI”. While the former highlighted AI’s potential to advance human knowledge, social good, and collective governance, the latter stressed existential risks, regulatory urgency, and global coordination frameworks. By comparing these perspectives, this post explores how different epistemic communities frame the stakes of AI development, and what this means for research, policy, and public engagement moving forward.

Image

The AI Science and Society meeting was mostly focused on the positive aspects of AI technologies and how these technologies could be used for the benefit of humanity.

Image

The conference chair was Michael Jordan (h=214, c=325k), the Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics of UC Berkeley.

Image

The program included many invited talks from distinguished researchers:

Image

In his opening address, Michael Jordan disparaged the current hype around AGI, human-level intelligence in AI systems, and related issues, arguing instead for the development of down-to-earth business models that use AI technology in new ways.

Among the many participants, Yann LeCun (h=156, c=408k), one of the best-known boomers, argued that we do in fact need systems with human-level intelligence:

Partially in parallel, another meeting took place: The Inaugural Conference of the International Association for Safe and Ethical AI, chaired by Stuart Russell, also a Distinguished Professor of Computer Science at UC Berkeley.

Image

The meeting featured contributions from many well-known researchers, including Turing Award and Nobel Prize winners:

Image

In this meeting, the tone was completely different: AGI was treated not only as a genuine possibility in the short term but also as a real risk to the future of humanity.

Yoshua Bengio (h=248,c=936k) argued, forcefully, for a future without AI Agents, since having them would be too risky:

Image
(up to minute 0:30:22)

Geoffrey Hinton (h=189, c=920k) gave a rather balanced talk on what “understanding” means, arguing that LLMs represent knowledge in much the same way we do:

Image
(up to minute 1:36:08)

In the closing talk by the conference chair, Stuart Russell (h=97, c=130k), the tone was decidedly dark, with an almost exclusive focus on safety and on the risks AI poses for the future of humanity:

Image
(up to minute 3:40:52)

Later in the talk, Russell argued that LLMs were never intended as general models of AI:

Image
(up to minute 3:52:05)

and supported very strict rules on which systems should be allowed to run on advanced hardware:

Image
(up to minute 4:19:32)

In summary, two very different views on the future of AI were present in Paris, both from very well-informed and influential researchers who have, essentially, the same base information. One is left with the feeling that the crystal ball is, indeed, very cloudy.

A Brief History of Intelligence

What Max Bennett sets out to achieve with “A Brief History of Intelligence” is undoubtedly ambitious: he aims to put into a unifying perspective the key innovations that led from the origin of life on Earth to us. And yet, the aim is largely achieved, since this is not just another book on the history of evolution, but a book on the algorithmic improvements that led to the intelligence of modern humans. In this fast-paced book, Bennett focuses on the breakthroughs that led from the first animals endowed with neurons, more than six hundred million years ago, to modern humans and even to our descendants.

Image

In a necessarily schematic but still useful approach, Bennett identifies five key breakthroughs that led from primitive animals swimming in the Ediacaran seas to humans. Six hundred million years ago, bilaterians discovered that neurons can be used for steering, enabling them to move purposefully instead of just waiting for food to come by. Five hundred million years ago, vertebrates “discovered” reinforcement learning, the ability to find sequences of behaviors that lead to interesting outcomes in general, and to food in particular. Hundreds of millions of years passed until, about 100 million years ago, some small mammal evolved the first neocortex and became able to simulate the consequences of actions instead of having to perform them. This ability eventually led, 85 million years later, to the capacity of our cousins, the primates, to “simulate” their own minds and to build a theory of the minds of others, pressured by the increasingly complex societies and environments in which they lived. Granted, this is a schematic and simplified account of what happened over these 600 million years, but Bennett still presents convincing arguments that these innovations were key to the development of advanced intelligence.

Armed with these four tricks, our ancestors were ready for the last and most radical breakthrough: the invention and mastery of complex language. Although we do not know exactly when language appeared, it was probably common to the four hominid species that still existed 100 thousand years ago: Homo sapiens, Homo neanderthalensis, Homo floresiensis, and Homo erectus. While many questions remain about how and why language abilities evolved in our ancestors, there is little doubt that it was this breakthrough that took us from the African savannah to the skyscrapers of Tokyo and New York.

Bennett’s book does not end with the invention of language; he puts these hundreds of millions of years of evolution in perspective and imagines what the future may bring. Today, we have artificial models of language, like GPT-3 and GPT-4, but these are only pale versions of future models that will incorporate not only language abilities comparable to ours but also other types of intelligent behavior. The final part of the book, although rather short, is dedicated to the future. As Bennett argues, we have at least tens of billions of years ahead of us to create our descendants, the inheritors of human intelligence, a trait that took “only” four billion years to evolve. What will be the future of intelligence, now that we are seeking to reproduce it in systems as different from our own brains as we are from the single-celled organisms that were the sole inhabitants of Earth for billions of years?

NotebookLM: A useful research assistant or more than that?

The launch of NotebookLM by Google is bound to be a serious milestone in the development of AI tools. Although many tools that can process personal information have been made available in recent months, NotebookLM stands alone in its ability to combine, summarize, organize, and report the information contained in the sources that are uploaded.

Image

NotebookLM, made available by Google on an experimental basis, offers a simple interface to a very complex and sophisticated system. The user can select and upload up to 50 documents, in various formats, and then interactively explore them. This exploration can take many forms: the user can request a summary, enter a dialogue with the system by asking questions and receiving answers, ask for a timeline or a table of contents, or obtain a briefing on specific subtopics covered by the sources. All these features are well and good, and rather impressive at that.

But the killer app is, really, the ability to generate a podcast about the contents of the sources. This podcast, whose duration and contents can vary, will consist of a vivid dialogue that explores, in podcast style, the material that was uploaded. For an uninformed listener, such a podcast will be hard to distinguish from a real one, since the conversation is fluid and includes many characteristics of typical human interactions, such as pauses and interjections. True, it may still be a long way from a professionally generated podcast, but the result is still impressive. I tried it with a number of sources, including my 2019 book on Artificial Intelligence (in Portuguese). Even though the book is written in Portuguese, the resulting podcast is in English.

Image

You can listen to the result here and, unless I am much mistaken, be impressed by what NotebookLM can do.
