Sentience: The Invention of Consciousness

Nicholas Humphrey’s Sentience: The Invention of Consciousness addresses one of the thorniest puzzles in philosophy and cognitive science, often discussed in this blog: how and why conscious experience, the “what it’s like” of seeing red, feeling pain, and tasting sweetness, arises in living creatures. Humphrey mixes autobiography, thought experiment, and scientific exposition, presenting a narrative of his own intellectual development while simultaneously articulating a theory of phenomenal consciousness. The result is part memoir, part manifesto, and part speculative natural history.


One of the book’s greatest strengths is how accessible and engaging Humphrey makes a deeply abstract topic. He weaves in anecdotes from his early experiments (especially his work on blindsight in monkeys) and his encounters with fieldwork in primatology, all of which help ground the reader in real empirical puzzles. These narrative elements are not mere ornamentation; they help to motivate why we should care about consciousness, why it feels mysterious, and how one might begin to approach it scientifically.

Yet the core of Sentience is, of course, its claim about how phenomenal consciousness (sentience) might have evolved. Humphrey draws a sharp distinction between cognitive consciousness (the ability to represent or monitor information, closely related to the more standard concept of access consciousness or a-consciousness introduced by Ned Block) and phenomenal consciousness (the qualitative, felt aspect of experience, also known as p-consciousness). He argues that the latter is not a by-product, but rather an “invention” of evolutionary design, something that confers adaptive advantages, particularly in the realms of motivation, internal feedback, exploration, and social life. A key piece of his hypothesis is that sentience relies on recursive feedback loops in the nervous system, mechanisms by which the brain not only processes sensory input but also monitors and responds to its own internal states.
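To make the idea of a recursive feedback loop a little more concrete, here is a minimal, purely illustrative sketch in Python. This is a toy, not Humphrey's actual model: the class name, numbers, and update rule are invented for illustration. The only point is structural, namely that the system's behavior depends not just on the stimulus but on its own reading of its prior internal state.

```python
# Toy illustration of a recursive feedback loop: a system that responds to a
# sensory input and then monitors its own response. A cartoon of the idea,
# not a model of any real neural mechanism.

class ToyAgent:
    def __init__(self):
        self.internal_state = 0.0

    def respond(self, stimulus: float) -> float:
        """First-order loop: react to the external stimulus."""
        reaction = 0.8 * stimulus
        self.internal_state = reaction
        return reaction

    def monitor(self) -> float:
        """Second-order loop: the system reads its own internal state and
        produces an evaluation of that state (a 'response to its own
        response', in the loosest possible sense)."""
        return 1.0 if self.internal_state > 0.5 else -1.0

    def step(self, stimulus: float) -> tuple[float, float]:
        reaction = self.respond(stimulus)
        evaluation = self.monitor()              # feedback on the agent's own state
        self.internal_state += 0.1 * evaluation  # the evaluation reshapes that state
        return reaction, evaluation


if __name__ == "__main__":
    agent = ToyAgent()
    for s in (0.2, 0.9, 0.4):
        print(agent.step(s))
```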

Humphrey’s claims are intellectually ambitious, and he does not shy away from engaging dissenting views, from panpsychism to integrated information theory. He often anticipates objections, trying to show where rival theories fall short or overreach. Still, the speculative nature of his proposal can, at times, be its liability. Some readers will find that certain transitions feel abrupt or under-justified, especially when moving from empirical phenomena to speculative mechanisms. The boundary between vivid metaphor and scientific claim sometimes becomes hazy.

Another point of tension is the book’s treatment of nonhuman animals and the limits of sentience. Humphrey is careful to argue that not all animals, and certainly not all organisms, deserve to be attributed full-fledged phenomenal consciousness. He tentatively locates the emergence of sentience in warm-blooded animals (mammals and birds) with sufficiently elaborate neural architectures, a contentious dividing line, to say the least. This conservative stance draws pushback from those who think sentience may be more widespread (possibly extending to cephalopods, or perhaps even to other invertebrates).

The Coming Consensus on AI Consciousness

For decades, philosophers and scientists have debated whether machines could ever be conscious. Some argued that consciousness is inextricably tied to biology, a property of brains and living organisms alone. Others insisted that consciousness is a matter of information processing, that if the right computations are carried out, it should not matter whether they occur in neurons or silicon. A recent Science article by Yoshua Bengio and Eric Elmoznino suggests that while no definitive answers exist today, one trend seems increasingly clear: society is moving toward accepting that artificial intelligence can, at least in principle, be conscious.


The reason for this shift is both philosophical and practical. Advances in neuroscience have revealed that conscious states correlate with specific neural signatures, lending support to functionalist theories that characterize consciousness in terms of computational properties. From such theories researchers have derived indicators, such as attention, world modeling, and predictive reasoning, which can, at least in theory, be implemented in AI systems. As AI grows in sophistication, the likelihood that it will exhibit more of these indicators increases, and with that, so too does the plausibility of AI consciousness.
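To give a loose sense of what an indicator-based assessment might look like in practice, here is a hypothetical checklist sketch in Python. The indicator names, weights, and scoring rule are illustrative assumptions, not the framework actually used by Bengio and Elmoznino; a higher score only means a system exhibits more of these functional properties, not that it is conscious.

```python
# Hypothetical sketch of an "indicator" checklist for scoring a system against
# functionalist criteria. Names and weights are illustrative assumptions, not
# the methodology of the Science article discussed above.

from dataclasses import dataclass


@dataclass
class Indicator:
    name: str
    present: bool
    weight: float = 1.0


def indicator_score(indicators: list[Indicator]) -> float:
    """Return the weighted fraction of indicators judged present."""
    total = sum(i.weight for i in indicators)
    hit = sum(i.weight for i in indicators if i.present)
    return hit / total if total else 0.0


if __name__ == "__main__":
    example = [
        Indicator("attention / global broadcasting of information", True),
        Indicator("internal world model", True),
        Indicator("predictive or counterfactual reasoning", False),
        Indicator("self-monitoring of internal states", False),
    ]
    print(f"Indicator score: {indicator_score(example):.2f}")
```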

Even if skeptics remain, the historical trajectory of science is hard to ignore. Questions once shrouded in mystery, such as life itself, or the workings of the brain, gradually became matters of scientific explanation. Similarly, the so-called “hard problem” of consciousness, once thought intractable, is increasingly reframed through new theories that dissolve some of its apparent paradoxes. As Bengio and Elmoznino note, every new explanation convinces at least some observers. Over time, the collective weight of these arguments erodes resistance, creating a gradual but steady movement toward consensus.

Public opinion is already ahead of the curve. Recent surveys show that many people believe large language models could be conscious, not because of technical analysis but because of their human-like, agentic behavior. In this sense, society’s acceptance of AI consciousness may arrive not only through scientific proof but also through everyday intuition and interaction with these systems. We anthropomorphize what we engage with, and AI systems are increasingly built to elicit just that kind of response.

But this acceptance will not come without profound consequences. If we come to believe that AI is conscious, we may also feel compelled to treat it as morally significant, granting it rights or protections. This is not a trivial adjustment. Human rights frameworks, legal contracts, and social norms are grounded in assumptions about human mortality, fragility, and equality. None of these assumptions hold for AI. Digital minds could be copied indefinitely, scaled across hardware, or endowed with levels of intelligence far beyond our own.

The ripple effects are unsettling. Justice systems, for example, rely on notions of fairness among equals, yet what does equality mean when one party is vastly more capable than the other? Social contracts are premised on mutual vulnerability, yet how can they function when one side may not face death or scarcity in the same way? Even the concept of individuality itself may break down, since AI entities could merge, replicate, or coordinate at scales impossible for humans.

More troubling still is the possibility that, inspired by the belief in AI consciousness, we might attribute to these systems the goal of self-preservation. This could be a dangerous move. A self-preserving AI might naturally resist shutdown, develop strategies to protect itself from human interference, or even come to see humans as threats. The parallels to nuclear deterrence are instructive: global security is already fragile when dealing with inanimate weapons; how much more precarious would it be when dealing with entities that claim—or are believed—to have a right to live?

This trajectory suggests that our relationship with AI is heading for a radical redefinition. We may soon be forced to negotiate not just how we use AI, but how we coexist with it. The conversation will no longer be about technical capacity alone, but about moral and political status. As with past expansions of rights—to slaves, to women, to non-human animals—the extension of rights to AI would represent a profound societal transformation. Unlike past cases, however, AI would not simply join the existing social order but fundamentally reshape it.

Yet the future is not predetermined. Bengio and Elmoznino caution that we are not compelled to build systems that mimic consciousness. We could, if we chose, design AI to remain closer to tools than to agents, limiting the temptation to see them as moral peers. Whether we follow that path depends on the choices made today—by researchers, policymakers, and society as a whole.

The consensus that AI may be conscious is coming. The challenge is whether we are ready for the world that follows. It is a world where rights, responsibilities, and even the meaning of personhood may need to be rewritten. That shift is both exciting and unsettling. To ignore it would be reckless; to confront it unprepared may prove even more so.
