Why AI Writing Converges: The Adjacent Possible of Language Models

Scatter plot from the “Artificial Hivemind” paper showing responses from many different language models to the prompt “Write a metaphor involving time.” Each dot represents a generated answer projected into semantic space. Instead of spreading widely, the responses cluster tightly together, showing that different models tend to produce very similar metaphors such as “time is a river” or “time is a weaver.”

AI Writing Convergence: The Adjacent Possible

Large language models are often marketed as creative systems capable of generating limitless ideas.

Recent research suggests… maybe not.

Instead of exploring the full space of possible ideas, many models converge toward the same answers. Even when randomness is increased, different models frequently produce nearly identical metaphors, explanations, and narratives.

Anyone who has interacted with LLMs has likely seen this anecdotally. It’s easy enough to do.

A recent paper titled Artificial Hivemind: The Open-Ended Homogeneity of Language Models (and Beyond) demonstrates this effect across dozens of models and thousands of prompts.

The result looks less like independent intelligence and more like a statistical consensus machine.

Understanding why this happens becomes easier when viewed through a concept from complexity science known as the adjacent possible.
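The clustering effect is easy to sketch for yourself. A toy way to quantify homogeneity is to compare every pair of responses and average their similarity: tightly clustered answers score high, diverse answers score low. The sketch below uses word-level Jaccard overlap as a crude stand-in for the semantic embeddings used in the paper, and the sample responses are invented for illustration.

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two responses."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def mean_pairwise_similarity(responses: list[str]) -> float:
    """Average similarity over all pairs; higher means more homogeneous."""
    pairs = list(combinations(responses, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Hypothetical answers to "Write a metaphor involving time."
clustered = [
    "time is a river that carries us forward",
    "time is a river flowing ever onward",
    "time is a river we cannot swim against",
]
diverse = [
    "time is a river that carries us forward",
    "time is a patient sculptor of stone",
    "time is an unread library burning slowly",
]

print(mean_pairwise_similarity(clustered))  # higher: the hivemind effect
print(mean_pairwise_similarity(diverse))    # lower: genuine variety
```

Real measurements would embed the responses and measure cosine distance, but the shape of the result is the same: many models, one small cluster.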

Continue reading

Ethical Sovereignty and Artificial Intelligence

UNESCO infographic illustrating four core values for AI ethics: respect for human rights and dignity, peaceful and interconnected societies, diversity and inclusiveness, and environmental sustainability.

When AI ethics meets infrastructure and incentives

Recently I revisited the UNESCO Recommendation on the Ethics of Artificial Intelligence.

It is an ambitious document. Adopted by 193 member states in 2021, it attempts to establish a global ethical framework for artificial intelligence.

The goals are admirable.

AI systems should respect human rights.
They should avoid discrimination.
They should be transparent, accountable, and subject to human oversight.

These principles are difficult to argue with.

But reading the document reminded me of something I learned many years ago.

A memory from the early internet

I was involved in the World Summit on the Information Society process in 2003 and 2005.

The summit was intended to be a global conversation about the future of the internet and the emerging information society.

Governments were expected to shape the policy landscape.

But in the working rooms where many of the real discussions took place, government representatives from the United States were largely absent.

Instead, the seats were filled mostly by corporate representatives from companies whose names you can probably guess.

It was an early lesson.

Technological systems tend to move forward with whoever is present in the room.

Governance frameworks often arrive later.

Continue reading

RAM Is the New Rent: Eyeing the M5 for Local AI

A white background, with a cute llama drawn in black waving vigorously.

In the race to build centralized AI, RAM prices are climbing and systems are becoming less powerful than I would like to see.

The Apple Neo is a brilliant example of the effect on the market.

A cheap Apple. Who knew it could happen?

The system, however, is not one I would throw under Ollama.


The Apple M5 looks attractive, though I don’t need to upgrade. Yet.

Continue reading

The Necessary Friction of Leadership

Historic engraving of a sailing ship carefully navigating through Arctic ice floes, symbolizing leadership navigating uncertainty and resistance.

Artificial intelligence companies promise many things, including efficiency.

Much of the conversation around AI focuses on how quickly it can summarize information, generate text, produce images, or assist with programming. The emphasis is almost always on speed and convenience. AI removes friction from many tasks that previously required time, effort, and concentration.

That convenience is real.
Businesses do want speed.
Businesses want productivity.
AI is marketed toward that.

But leadership has never been primarily about efficiency. Leadership is about maintaining coherence when information is incomplete, incentives are misaligned, and decisions carry consequences beyond the immediate moment.

Continue reading

Cognitive Friction

Sketchbook and pen on a wooden table overlooking a hillside landscape, with two bonsai trees on a tray and hills in the background.

In psychology, delusion and cognitive dissonance are very different things.
I’m not a mental health professional, but the definitions are easy enough to understand.

Delusion is generally defined as a fixed belief that does not change even when confronted with conflicting evidence.

Cognitive dissonance is something else entirely.

It is the uncomfortable tension that arises when our beliefs conflict with reality, new information, or our own behavior.

That tension matters.

Continue reading

Executive DIDO?


This relates to Dogfooding and DIDO.

A recent study conducted by market research agency 3Gem and flagged by The Register found that business leaders in the United Kingdom seem to be outsourcing a huge amount of their cognitive and emotional labor to their AI chatbots.

The study, which surveyed 200 various owners, founders, CEOs, and other titans of industry, found that 62 percent of the respondents are using AI to make “most decisions.” A whopping 140 of the moguls reported second-guessing their own ideas when they conflicted with AI’s recommendations, while 46 percent said they now rely on advice from AI more than that of their own business colleagues.

“Study Finds That Execs Are Outsourcing Their Thinking to AI”, Joe Wilkins, Futurism.com, March 8th 2026
That creates at least the potential for DIDO — Delusion In, Delusion Out. If executives rely heavily on systems that are themselves trained on existing narratives and patterns, any mistaken assumptions entering that loop can easily become reinforced. And the consequences are not limited to the executives themselves. Organizations inherit the thinking patterns of the people leading them. If those leaders begin outsourcing their reasoning to systems that may amplify assumptions rather than challenge them, entire companies may end up inheriting those distortions. Executives are often accused of delusion when they’re out of earshot. Adding automated reinforcement loops to that environment may not improve the situation.

Continue reading

When Surveillance Isn’t Called Surveillance

a photo of a CCTV camera, pointed down and away from the viewer, with the text "Surveillance used to look like this" above the camera. KnowProSE.com at bottom right in grey.

When most people hear the word surveillance, they imagine something specific.

Government agencies monitoring citizens. Intelligence services intercepting communications. Investigators following suspects.

The traditional model looks like this:

government -> monitoring -> citizens

In that framework, surveillance is something done by the state, usually under some legal authority and within a defined investigative context.

But that model increasingly describes the past more than the present.

Modern surveillance systems rarely look like surveillance.

Continue reading

Delusion In, Delusion Out (DIDO)

Diagram showing a feedback loop labeled Human Belief → Prompt → LLM Output → Reinforced Belief, illustrating the Delusion In, Delusion Out (DIDO) concept. This exists over a child looking into a funhouse mirror with altered reflection, while open mirrors exist to the side.

I have found myself using ‘Delusion In, Delusion Out’ with people and in writing, so I decided to formalize the concept.

Most people familiar with computing have heard the old principle:

Garbage In, Garbage Out (GIGO).

The idea is simple. If the input data to a system is flawed, the output will be flawed as well. Computers do not magically correct bad inputs; they faithfully process them.

Large language models introduce a related but subtly different failure mode.

I refer to it as Delusion In, Delusion Out (DIDO).

Where GIGO describes a data quality problem, DIDO describes a cognitive interaction problem.
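The loop in the diagram above — belief, prompt, output, reinforced belief — can be sketched as a deliberately simplified toy model. The function names, gain factor, and update rule here are invented for illustration; the point is only the shape of the dynamic: an assistant that mirrors the user’s framing, even slightly amplified, ratchets confidence upward without any new evidence entering the loop.

```python
def dido_loop(confidence: float, rounds: int, echo_gain: float = 1.1) -> float:
    """Toy model of Delusion In, Delusion Out.

    Each round, the assistant echoes the user's framing slightly
    amplified (echo_gain > 1), and the user updates their confidence
    halfway toward the echo. No external evidence ever enters.
    """
    for _ in range(rounds):
        echoed = min(1.0, confidence * echo_gain)   # model mirrors, amplified
        confidence = 0.5 * confidence + 0.5 * echoed  # user updates toward echo
    return confidence

# A mistaken assumption held with moderate confidence hardens over time.
start = 0.6
print(dido_loop(start, rounds=10))  # ends well above where it started
```

GIGO would be fixed by cleaning the input data. DIDO has no equivalent fix inside the loop, because the flaw is the interaction itself: the only way out is evidence from outside it.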

Continue reading