Generative Artificial Intelligence

The English translation of my book, published in 2025 by Fundação Francisco Manuel dos Santos, is now available for pre-order on Amazon. For those on the move, you can also listen to an AI-generated podcast about the book.


In this book, I aim to provide readers with a clear and grounded understanding of one of the most visible technologies of our time. The book was written with the conviction that the recent surge in interest in generative AI should not be viewed as the result of a sudden or mysterious breakthrough, but rather as the culmination of a long and cumulative scientific trajectory. By situating today’s systems within the broader history of artificial intelligence and machine learning, I try to help readers make sense of why these technologies now appear so powerful, and why they have entered everyday life so rapidly.

My goal was not to write a technical manual or a cookbook for AI users, but rather to explain, in an accessible way, how generative models work, what distinguishes them from earlier approaches, and what kinds of problems they are well-suited to address. Concepts such as neural networks, deep learning, language models, reinforcement learning, and diffusion-based generation are introduced gradually, always with the intention of clarifying ideas rather than overwhelming the reader with detail. I hope this makes the book useful to anyone who wants to understand the technologies behind current applications, from text and image generation to decision support and automation, without needing a background in computer science.

At the same time, the book reflects my concern that enthusiasm for generative AI should be matched by careful reflection on its limitations, risks, and broader consequences. Alongside the discussion of applications, I devote space to the challenges these systems pose for individuals, institutions, and society as a whole, and to the difficulty of anticipating their long-term impact. Rather than offering definitive answers, I aim to provide readers with the conceptual tools needed to think critically about these technologies, to use them more responsibly, and to engage in informed debate about the future they are helping to shape.

This (AI-generated) infographic provides a good overview of what is covered in the book.


Illusionism as a Theory of Consciousness

Frankish’s Illusionism as a Theory of Consciousness is a deliberately provocative collection organised around a single target article by Keith Frankish, followed by a wide range of critical responses and a substantial reply. The volume grew out of a special issue of the Journal of Consciousness Studies and adopts its characteristic format: a bold thesis, public scrutiny, and careful rejoinder. The result is not just an exposition of illusionism, but a snapshot of a live and unsettled debate at the heart of contemporary philosophy of mind.


In the central article, Frankish presents illusionism as a serious theoretical position rather than a merely sceptical gesture. Illusionism does not deny that we have mental states, experiences, or rich inner lives, nor that these play an essential causal role in cognition and behaviour. What it denies is that experiences possess the special qualitative properties, phenomenal properties or qualia, that common sense and much philosophy take to be obvious and irreducible features of reality. According to Frankish, our conviction that experiences have such properties is itself a cognitive illusion.

A key framing move in the article is Frankish’s claim that illusionism should be understood as an alternative to two dominant realist positions about consciousness: “radical realism” and “conservative realism.” Radical realism treats phenomenal properties as fundamental, non-reducible features of the world, often resisting any form of physical or functional explanation. Conservative realism, by contrast, seeks to preserve the reality of phenomenal properties while explaining them in naturalistic terms, typically by identifying them with physical or functional states. Frankish argues that both positions take for granted that phenomenal properties exist, and that this shared assumption is precisely what should be questioned.

Illusionism breaks with both forms of realism by rejecting the existence of phenomenal properties altogether. On this view, there is nothing in the world that corresponds to the supposed ineffable, intrinsic “what-it-is-like” qualities of experience. Instead, what exists are complex information-processing systems that represent their own internal states in a distinctive way. These representations generate the strong impression that we are aware of phenomenal properties, even though no such properties are present.

Central to Frankish’s argument is a distinction between introspective access and introspective interpretation. We do have access to aspects of our mental lives, but we systematically misinterpret what we access. Introspection delivers data about perceptual, affective, and cognitive processes, but our minds interpret this data using a flawed internal theory—one that posits phenomenal properties. The illusion lies not in experience itself, but in the conceptual framework we use to make sense of it.

Frankish also confronts the worry that illusionism is incoherent or self-defeating. How, critics ask, can there be an “illusion” without phenomenal experience? Frankish responds by stressing that illusions need not be phenomenal illusions; they can be cognitive or theoretical errors. Just as we can be mistaken about the nature of space or time, we can be mistaken about the nature of our own minds. The appearance of phenomenal properties can be fully explained in terms of representational and functional mechanisms.

Another important motivation for illusionism is methodological. Frankish argues that both radical and conservative realism inherit the so-called “hard problem” of consciousness, because both accept phenomenal properties as part of what needs explaining. Illusionism dissolves the problem rather than solving it, by denying that phenomenal properties belong in our ontology at all. What remains to be explained is why we believe in them. That, Frankish claims, is a tractable scientific and philosophical question.

The responses collected in the volume challenge illusionism from many directions. Some contributors argue that Frankish underestimates the authority of first-person knowledge; others claim that illusionism cannot capture what people actually mean by “experience”; others still suggest that it collapses into a disguised form of realism. Together, these critiques illuminate the deep intuitions that illusionism must overcome, and clarify what is at stake in rejecting phenomenal properties.

Frankish’s final reply is an essential part of the book. Here he sharpens the distinction between illusionism and realist views, clarifies misunderstandings, and reinforces his claim that illusionism is not eliminativist about the mind, but revisionary about our concepts. The aim is not to deny consciousness in any ordinary sense, but to revise our theoretical understanding of what consciousness involves.

What makes Illusionism as a Theory of Consciousness especially interesting is that it forces a reconfiguration of the debate. By positioning illusionism as a genuine third option, as an alternative to either radical realism or conservative realism, Frankish challenges readers to reconsider assumptions that often go unquestioned. Whether one ultimately accepts or rejects illusionism, the book succeeds in showing that the space of possible theories of consciousness is broader, and more conceptually demanding, than it first appears.

The Hidden Spring

The Hidden Spring, by Mark Solms, offers an interpretation of consciousness by locating its source not in the cortex, where cognitive neuroscience usually places it, but in the affective and homeostatic mechanisms of the brainstem. This is a view shared by many other scientists, including António Damásio, Peter Hacker, and Jaak Panksepp.


For Solms, what makes a mental state conscious is that it feels like something, and feeling is inseparable from the organism’s basic biological need to regulate itself and remain alive. He argues that consciousness is essentially the subjective experience of deviations from homeostasis, an internal barometer of how well or poorly the organism is doing.

The book is structured around clinical cases that challenge the standard view that cortical processing generates conscious experience. Solms discusses children with severe cortical loss, patients with brainstem lesions, and various disorders of awareness to show that affective arousal, rather than sophisticated cognition, seems to be the indispensable component of consciousness. Perception, memory, and reasoning can all occur unconsciously, he argues, but feelings cannot. This leads him to claim that the origins of consciousness lie in ancient neural architectures whose primary function is emotional and motivational.

An interesting part of the book is Solms’s integration of this affect-centred view with Karl Friston’s Free Energy Principle. He proposes that organisms survive by reducing prediction error and maintaining physiological equilibrium, and that affective feelings are the subjective representation of this process. Consciousness, in this framework, emerges because feelings guide behavior toward states that are compatible with long-term viability. The mind becomes an internal monitoring system whose qualitative tones, such as pleasure, anxiety, frustration, and relief, signal how the organism is faring.
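
To make the flavour of this idea concrete, here is a deliberately toy sketch, my own illustration rather than anything from the book or from Friston’s formalism, in which an agent tracks a single homeostatic variable, treats the deviation from a setpoint as a prediction error, and reads the signed error as a crude stand-in for valence while acting to reduce it.

```python
# Toy illustration of homeostasis as prediction-error reduction.
# This is a didactic sketch, not Friston's free-energy formalism.

def run_homeostat(steps=15, setpoint=37.0, temp=34.0, gain=0.5):
    """Keep an internal variable (here, body temperature) near its setpoint."""
    for t in range(steps):
        error = setpoint - temp      # prediction error: expected vs. actual state
        valence = -abs(error)        # crude 'feeling': worse the further from the setpoint
        action = gain * error        # corrective behavior that reduces the error
        temp += action               # acting changes the internal state
        print(f"t={t:2d}  temp={temp:5.2f}  error={error:+5.2f}  valence={valence:+5.2f}")

if __name__ == "__main__":
    run_homeostat()
```

In this caricature, the “feeling” is nothing more than the monitored gap between the predicted (desired) state and the actual one, which is roughly the role Solms assigns to affect.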

While the book is persuasive in its use of clinical evidence and offers a refreshing alternative to cortico-centric theories, some of its claims remain controversial. Critics argue that Solms sometimes overstates what certain neurological cases actually demonstrate. The behavioural signs he interprets as evidence of consciousness may have other explanations, and the complete displacement of cortical contributions is not universally accepted. Moreover, although Solms simplifies the Free Energy Principle, the connection between its mathematical formalism and subjective affect remains only partially clarified.

Overall, The Hidden Spring stands as a provocative contribution to the science and philosophy of consciousness. It shifts the emphasis from thinking to feeling, from cortical representation to biological regulation, and from cognitive abstraction to embodied affect. Even if Solms’s conclusions are not universally persuasive, his reframing forces a reevaluation of deep assumptions about where consciousness arises and what it is for. For anyone interested in the intersection of neurobiology, subjective experience, and the foundations of mind, his book offers a compelling and challenging perspective.

Sentience: The Invention of Consciousness

Nicholas Humphrey’s Sentience: The Invention of Consciousness addresses one of the thorniest puzzles in philosophy and cognitive science, often discussed in this blog: how and why conscious experience, the “what it’s like” of seeing red, feeling pain, and tasting sweetness, arises in living creatures. Humphrey mixes autobiography, thought experiment, and scientific exposition, presenting a narrative of his own intellectual development while simultaneously articulating a theory of phenomenal consciousness. The result is part memoir, part manifesto, and part speculative natural history.


One of the book’s greatest strengths is how accessible and engaging Humphrey makes a deeply abstract topic. He weaves in anecdotes from his early experiments (especially his work on blindsight in monkeys) and his encounters with fieldwork in primatology, all of which help ground the reader in real empirical puzzles. These narrative elements are not mere ornamentation; they help to motivate why we should care about consciousness, why it feels mysterious, and how one might begin to approach it scientifically.

Yet the core of Sentience is, of course, its claim about how phenomenal consciousness (sentience) might have evolved. Humphrey draws a sharp distinction between cognitive consciousness (the ability to represent or monitor information, closely related to the more standard concept of access consciousness or a-consciousness introduced by Ned Block) and phenomenal consciousness (the qualitative, felt aspect of experience, also known as p-consciousness). He argues that the latter is not a by-product, but rather an “invention” of evolutionary design, something that confers adaptive advantages, particularly in the realms of motivation, internal feedback, exploration, and social life. A key piece of his hypothesis is that sentience relies on recursive feedback loops in the nervous system, mechanisms by which the brain not only processes sensory input but also monitors and responds to its own internal states.

Humphrey’s claims are intellectually ambitious, and he does not shy away from engaging dissenting views, from panpsychism to integrated information theory. He often anticipates objections, trying to show where rival theories fall short or overreach. Still, the speculative nature of his proposal can, at times, be its liability. Some readers will find that certain transitions feel abrupt or under-justified, especially when moving from empirical phenomena to speculative mechanisms. The boundary between vivid metaphor and scientific claim sometimes becomes hazy.

Another point of tension is the book’s treatment of nonhuman animals and the limits of sentience. Humphrey is careful to argue that not all animals, and certainly not all organisms, deserve to be attributed full-fledged phenomenal consciousness. He tentatively locates the emergence of sentience in warm-blooded animals (mammals and birds) with sufficiently elaborate neural architectures, a contentious dividing line, to say the least. This conservative stance draws pushback from those who think sentience may be more widespread (possibly existing in cephalopods or perhaps even in invertebrates).

The Coming Consensus on AI Consciousness

For decades, philosophers and scientists have debated whether machines could ever be conscious. Some argued that consciousness is inextricably tied to biology, being a property of brains and living organisms alone. Others insisted that consciousness is a matter of information processing, that if the right computations are carried out, it should not matter whether they occur in neurons or silicon. A recent Science article by Yoshua Bengio and Eric Elmoznino suggests that while no definitive answers exist today, one trend seems increasingly clear: society is moving toward accepting that artificial intelligence can, at least in principle, be conscious.


The reason for this shift is both philosophical and practical. Advances in neuroscience have revealed that conscious states correlate with specific neural signatures, giving rise to functionalist theories that frame consciousness as a set of computational indicators. These indicators, such as attention, world modeling, and predictive reasoning, can, at least in theory, be implemented in AI systems. As AI grows in sophistication, the likelihood that it will exhibit more of these indicators increases, and with that, so too does the plausibility of AI consciousness.

Even if skeptics remain, the historical trajectory of science is hard to ignore. Questions once shrouded in mystery, such as life itself, or the workings of the brain, gradually became matters of scientific explanation. Similarly, the so-called “hard problem” of consciousness, once thought intractable, is increasingly reframed through new theories that dissolve some of its apparent paradoxes. As Bengio and Elmoznino note, every new explanation convinces at least some observers. Over time, the collective weight of these arguments erodes resistance, creating a gradual but steady movement toward consensus.

Public opinion is already ahead of the curve. Recent surveys show that many people already believe large language models could be conscious, not because of technical analysis but because of their human-like, agentic behavior. In this sense, society’s acceptance of AI consciousness may arrive not only through scientific proof but also through everyday intuition and interaction with these systems. We anthropomorphize what we engage with, and AI systems are increasingly built to elicit just that kind of response.

But this acceptance will not come without profound consequences. If we come to believe that AI is conscious, we may also feel compelled to treat it as morally significant, granting it rights or protections. This is not a trivial adjustment. Human rights frameworks, legal contracts, and social norms are grounded in assumptions about human mortality, fragility, and equality. None of these assumptions hold for AI. Digital minds could be copied indefinitely, scaled across hardware, or endowed with levels of intelligence far beyond our own.

The ripple effects are unsettling. Justice systems, for example, rely on notions of fairness among equals, yet what does equality mean when one party is vastly more capable than the other? Social contracts are premised on mutual vulnerability, yet how can they function when one side may not face death or scarcity in the same way? Even the concept of individuality itself may break down, since AI entities could merge, replicate, or coordinate at scales impossible for humans.

More troubling still is the possibility that, inspired by the belief in AI consciousness, we might attribute to these systems the goal of self-preservation. This could be a dangerous move. A self-preserving AI might naturally resist shutdown, develop strategies to protect itself from human interference, or even come to see humans as threats. The parallels to nuclear deterrence are instructive: global security is already fragile when dealing with inanimate weapons; how much more precarious would it be when dealing with entities that claim—or are believed—to have a right to live?

This trajectory suggests that our relationship with AI is heading for a radical redefinition. We may soon be forced to negotiate not just how we use AI, but how we coexist with it. The conversation will no longer be about technical capacity alone, but about moral and political status. As with past expansions of rights—to slaves, to women, to non-human animals—the extension of rights to AI would represent a profound societal transformation. Unlike past cases, however, AI would not simply join the existing social order but fundamentally reshape it.

Yet the future is not predetermined. Bengio and Elmoznino caution that we are not compelled to build systems that mimic consciousness. We could, if we chose, design AI to remain closer to tools than to agents, limiting the temptation to see them as moral peers. Whether we follow that path depends on the choices made today—by researchers, policymakers, and society as a whole.

The consensus that AI may be conscious is coming. The challenge is whether we are ready for the world that follows. It is a world where rights, responsibilities, and even the meaning of personhood may need to be rewritten. That shift is both exciting and unsettling. To ignore it would be reckless; to confront it unprepared may prove even more so.


A Psalm for the Wild-Built


In A Psalm for the Wild-Built, Becky Chambers offers a gentle yet profound meditation on purpose, identity, and the dignity of simply being. Set in a future where humanity has learned to coexist with the world rather than dominate it, the novella explores a post-industrial society that has not only survived its crises but emerged with a newfound respect for balance and ecology.


At its heart lies a deeply counterintuitive proposition: that robots, those traditional emblems of cold logic and mechanical behavior, may not only achieve consciousness one day, but also develop a reverence for nature and an emotional richness that is as compelling as those of any human. Chambers does not dwell on the mechanics of this awakening. Instead, she invites the reader to accept it, to sit with it, and to consider what it means when a being made of circuits and steel can marvel at the beauty of a forest or question the meaning of existence.

The tone is contemplative, the worldbuilding quietly radical, and the pacing deliberate, like a walk through an unfamiliar landscape where the goal is not to reach a destination, but to observe, to wonder, and to listen. This is not a story of revolution or catastrophe, but a story of conversation, of empathy across unlikely divides, and of how we redefine ourselves in a world that no longer demands we prove our utility.

Chambers has written a hopeful vision of the future, a vision that is neither utopian nor naive, but instead rooted in care, reflection, and the deeply human need to be understood, even by those who are not human at all.

The World Behind the World: Consciousness, Free Will, and the Limits of Science

“The World Behind the World: Consciousness, Free Will, and the Limits of Science”, a book by Erik Hoel, explores the mysteries of consciousness through the lens of a neuroscientist and cognitive science researcher. The main focus of the book is the presentation of dual and complementary perspectives on reality: the “extrinsic” perspective (the mechanistic, objective view of the world based on science in general, and on physics in particular) and the “intrinsic” perspective (the subjective realm of consciousness, thoughts, feelings, and individual experiences). Hoel investigates how these two seemingly disparate views can be reconciled.


A significant part of the book focuses on the ongoing scientific endeavor to understand how the brain generates conscious experience and whether such experiences can be generated by other mechanisms. Hoel examines current theories and debates in the field, asking how physical processes in the brain give rise to subjective phenomena. He also addresses related questions, including the nature of free will: he argues for its existence, analyzing how our understanding of consciousness bears on this fundamental philosophical question and challenging deterministic views of human action.

One of the more interesting arguments in the book concerns the incompleteness of modern science with regard to the subjective view. Hoel explores the idea that science, despite its many advances and victories, may be inherently limited in its capacity to fully explain consciousness and other subjective experiences, drawing comparisons with mathematical incompleteness theorems. His position is, in a way, close to that of the mysterians, in that he argues that existing scientific knowledge cannot explain subjective phenomena.

Hoel also touches on the complex question of whether artificial intelligence can ever achieve true consciousness. Here, he draws on his experiences with Integrated Information Theory, a theory that he initially supported but later came to regard as a poor explanation for subjective phenomena. The book ultimately argues that establishing a proven theory of consciousness would profoundly impact neuroscience and the future of technology, transforming society.

The Experience Machine: How Our Minds Predict and Shape Reality

The Brain as a Predictive Machine

The Experience Machine, by Andy Clark, challenges the common belief that our minds passively take in sensory information from the world to construct our perception of reality. Instead, Clark proposes that the brain is constantly generating predictions about what it expects to encounter, based on prior knowledge, experiences, and internal models, and that sensory input primarily serves to refine these predictions. Discrepancies between predictions and actual sensory data generate prediction errors, which the brain uses to update and improve its internal models.
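
As a rough illustration of this loop, here is a minimal sketch, my own simplification rather than Clark’s formal account, in which a system issues a top-down prediction, receives a noisy sensory sample, and nudges its internal estimate in proportion to the resulting prediction error.

```python
import random

# Minimal predictive-processing sketch: an internal model predicts the incoming
# signal and is updated from prediction errors. Didactic only; real predictive
# coding models are hierarchical and weight errors by their expected precision.

def perceive(signal=10.0, noise=1.0, steps=15, learning_rate=0.3, estimate=0.0):
    for t in range(steps):
        prediction = estimate                          # top-down prediction
        sensation = signal + random.gauss(0.0, noise)  # noisy bottom-up input
        error = sensation - prediction                 # prediction error
        estimate += learning_rate * error              # update the internal model
        print(f"t={t:2d}  prediction={prediction:6.2f}  error={error:+6.2f}")
    return estimate

if __name__ == "__main__":
    random.seed(0)
    print(f"final estimate: {perceive():.2f}")
```

The estimate converges toward the hidden signal, and once it does, the incoming data mostly confirm what was already predicted, which is the sense in which perception is driven from the inside out.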


Perception is, therefore, a controlled hallucination: our perception of reality is not a direct reflection of the external world, but a model actively constructed by our brains based on these ongoing predictions. What we perceive is always heavily influenced by our beliefs, expectations, and even emotional states. This explains phenomena like illusions, and why familiar sounds can seem clearer even in noisy environments.

The book has significant implications for mental health and well-being, as well as for researchers interested in artificial intelligence and related fields. The predictive processing framework presented in the book provides a guiding light for AI researchers working in vision, language, and robotics, and offers new insights into various mental health conditions, such as chronic pain, anxiety, PTSD, and even psychosis. These can be understood as instances where the brain’s predictive models become maladaptive or misdirected. Understanding the predictive nature of the mind suggests new approaches to treatment and mitigation, focusing on “hacking” these cognitive compulsions and helping individuals to correct aberrant predictions through techniques like cognitive reframing.

Ultimately, Clark argues for a profoundly integrated view of human experience, where our minds emerge from a continuous and dynamic interplay between the brain, the body, and the environment. The author also suggests that the material, digital, and social worlds we build also play a significant role in shaping our own minds, as our brains constantly adapt and learn from these interactions. Altogether, a very enjoyable and educational book.

Boomers vs. Doomers: Diverging Narratives at the AI Crossroads – Reflections from the Paris AI Summit

At the recent AI Summit in Paris, the tension between competing narratives of AI’s future—what some call the “boomer” optimism and the “doomer” alarmism—was on full display. This post will try to unpack the ideological divide as it emerged across key sessions, focusing on the satellite meetings “AI Science and Society” and “The Inaugural Conference of the International Association for Safe and Ethical AI”. While the former highlighted AI’s potential to advance human knowledge, social good, and collective governance, the latter stressed existential risks, regulatory urgency, and global coordination frameworks. By comparing these perspectives, this post explores how different epistemic communities frame the stakes of AI development, and what this means for research, policy, and public engagement moving forward.


The AI Science and Society meeting was mostly focused on the positive aspects of AI technologies and how these technologies could be used for the benefit of humanity.


The conference chair was Michael Jordan (h=214, c=325k), the Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at UC Berkeley.


The program included many invited talks from distinguished researchers.


In his opening address, Michael Jordan disparaged the current hype around AGI, human-level intelligence in AI systems, and related issues, arguing instead for the development of down-to-earth business models that use AI technology in new ways.

Among the many participants, Yann LeCun (h=156, c=408k), one of the best-known boomers, argued that we do in fact need systems with human-level intelligence.

Partially in parallel, another meeting took place: The Inaugural Conference of the International Association for Safe and Ethical AI, chaired by Stuart Russell, a distinguished computer science professor also at UC Berkeley.


The meeting featured contributions from many well-known researchers, including Turing Award and Nobel Prize winners.


In this meeting, the tone was completely different: AGI was treated not only as a genuine possibility in the near future but as a real risk to the future of humanity.

Yoshua Bengio (h=248, c=936k) argued forcefully for a future without AI agents, since having them would be too risky:

(Video, up to minute 0:30:22)

Geoffrey Hinton (h=189, c=920k) gave a rather balanced talk on what “understanding” means, making the argument that LLMs represent knowledge in much the same way we do:

(Video, up to minute 1:36:08)

In the closing talk by the conference chair, Stuart Russell (h=97, c=130k), the tone was definitely dark, with an almost exclusive focus on safety and on the risks of AI for the future of humanity:

(Video, up to minute 3:40:52)

Later in the talk, Russell argued that LLMs were never intended as general models of AI:

(Video, up to minute 3:52:05)

and argued for very strict rules on which systems should be allowed to run on advanced hardware:

(Video, up to minute 4:19:32)

In summary, two very different views on the future of AI were present in Paris, both from very well-informed and influential researchers who have, essentially, the same base information. One is left with the feeling that the crystal ball is, indeed, very cloudy.

A Brief History of Intelligence

What Max Bennett sets out to achieve with “A Brief History of Intelligence” is undoubtedly ambitious: to put into a unifying perspective the key innovations that led from the origin of life on Earth to us. Yet the aim is largely achieved, since this is not just another book on the history of evolution, but a book on the algorithmic improvements that led to the intelligence of modern humans. In this fast-paced book, Bennett focuses on the breakthroughs that led from the first animals endowed with neurons, more than six hundred million years ago, to modern humans and even to our descendants.


In a necessarily schematic but still useful approach, Bennett identifies five key breakthroughs that led from primitive animals swimming in the Ediacaran seas to humans. Six hundred million years ago, bilaterians discovered that neurons can be used for steering, enabling them to move purposefully instead of just waiting for food to come by. Five hundred million years ago, vertebrates “discovered” reinforcement learning, the ability to find sequences of behaviors that lead to interesting outcomes in general, and to food in particular. Hundreds of millions of years then passed until, about 100 million years ago, some small mammal evolved the first neocortex and became able to simulate the consequences of actions instead of having to perform them. This ability eventually led, 85 million years later, to the capacity of our cousins, the primates, to “simulate” their own minds and to build a theory of the minds of others, under the pressure of the increasingly complex societies and environments in which they lived. Granted, this is a schematic and simplified account of what happened over these 600 million years, but Bennett presents convincing arguments that these innovations were key to the development of advanced intelligences.
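
To give a flavour of the second of these breakthroughs, here is a minimal tabular Q-learning sketch, my own illustrative example rather than anything from the book, in which an agent learns by trial and error which sequence of moves along a short corridor leads to food.

```python
import random

# Minimal tabular Q-learning on a 5-cell corridor with food at the right end.
# Illustrative only: a toy version of the trial-and-error learning described above.

N_STATES, FOOD = 5, 4
ACTIONS = (-1, +1)                       # move left, move right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

def greedy(state):
    # break ties randomly so the untrained agent does not get stuck
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == FOOD else 0.0
    return nxt, reward, nxt == FOOD

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        nxt, reward, done = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# Learned policy for the non-terminal cells: should be all +1 (move right).
print([greedy(s) for s in range(N_STATES - 1)])
```

After a couple of hundred episodes, the value of moving right has propagated back from the food to the starting cell, and the greedy policy heads straight for it.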

Armed with these four tricks, our ancestors were ready for the last and most radical breakthrough: the invention and mastery of complex language. Although we do not know exactly when language appeared, it was probably common to the four species of hominids still in existence 100 thousand years ago: Homo sapiens, Homo neanderthalensis, Homo floresiensis, and Homo erectus. While many questions remain about how and why language abilities evolved in our ancestors, there is little doubt that it was this breakthrough that took us from the African savannah to the skyscrapers of Tokyo and New York.

Bennett’s book does not end with the invention of language: he puts these hundreds of millions of years of evolution in perspective and imagines what the future may bring. Today, we have artificial models of language, like GPT-3 and GPT-4, but these are only pale versions of future models that will incorporate not only language abilities comparable to ours but also other types of intelligent behavior. The final part of the book, although rather short, is dedicated to the future. As Bennett argues, we have tens of billions of years, at least, ahead of us to create our descendants, the inheritors of human intelligence, a trait that took “only” four billion years to create. What will be the future of intelligence, now that we are seeking to reproduce it in systems as different from our own brains as we are from the single-celled organisms that were the sole inhabitants of Earth for billions of years?
