The Uncomfortable Truth: Why Developer Training Is a Waste of Time

There’s an entire industry built around “improving” software developers. Conferences, workshops, bootcamps, online courses, books, certifications—billions of dollars spent annually on the promise that if we just train developers better, we’ll get better software. It’s time to say what many of us have privately suspected: it’s all just theater.

Here’s why investing in developer training is increasingly pointless, and why organisations would be better served directing those resources elsewhere:

  1. Nobody’s actually interested in improvement
  2. Developers don’t control what actually matters
  3. GenAI has fundamentally changed the equation

Let’s examine each of these uncomfortable truths.

1. Nobody’s Actually Interested in Improvement

Walk into any development team and ask who wants to improve their craft. Hands will shoot up enthusiastically. Now watch what happens over the next six months. The conference budget goes unused. The book club fizzles after two meetings. The internal tech talks attract the same three people every time. The expensive training portal shows a login rate of less than 15%. Personal note: I have seen this myself time and again in client organisations.

The uncomfortable reality is that most developers have found their comfort zone and have little to no genuine interest in moving beyond it. They’ve learned enough to be productive in their current role, and that’s sufficient. The annual performance review might require them to list “professional development goals”, but these are box-checking exercises, not genuine aspirations. When developers do seek training, it’s often credential-seeking behavior—resume-building for the next job search, a.k.a. mortgage-driven development—not actual skill development for their current role.

This isn’t unique to software development. In most professions, once practitioners reach competence, the motivation for continued improvement evaporates. The difference is that in software, we’ve created an elaborate fiction that continuous learning is happening when it definitely isn’t. The developers who genuinely seek improvement are self-motivated outliers who would pursue it regardless of organisational investment. They don’t need your training programs; they’re already reading papers, experimenting with new technologies, and pushing boundaries on their own time.

2. Developers Have No Control Over What Actually Matters

Even if a developer emerges from training enlightened about better practices, they return to an environment that makes applying those practices simply impossible. They’ve learned about continuous deployment, but the organisation requires a three-week approval process for production releases. They’ve studied domain-driven design, but the database schema was locked in five years ago by an architecture committee. They’ve embraced test-driven development, but deadlines leave no time for writing tests, and technical debt is an accepted way of life.

The factors that most impact software quality—architecture decisions, technology choices, team structure, deadline pressures, hiring practices, organisational culture, the social dynamic—are entirely outside individual developers’ control. These are set by management, architecture boards, or historical accident. Having developers trained in excellent practices but embedded in a dysfunctional system is like teaching someone Olympic swimming techniques and then asking them to compete while chained to a cinder block. (See also: Deming’s Red Bead experiment.)

Moreover, the incentive structures in organisations reward maximising bosses’ well-being, not, say, writing maintainable code. Developers quickly learn that the skills that matter for career advancement are political navigation, project visibility, stakeholder management and sucking up—not technical excellence. Training developers in better coding practices while maintaining perverse incentives is simply theater that lets organisations feel good about the charade of “investing in people” while changing absolutely nothing that matters.

3. GenAI Has Fundamentally Changed the Equation

The emergence of generative AI has rendered much of traditional developer training obsolete before it’s even delivered. When Claude or GPT can generate boilerplate code, explain complex algorithms, refactor legacy systems, and even architect solutions, what exactly are we training developers to do? (Maybe AI has a more productive role to play in helping developers maximise their bosses’ well-being.)

The skills we’ve traditionally taught—memorising syntax, understanding framework details, knowing design patterns, debugging techniques—are precisely the skills that AI handles increasingly well. We’re training developers for skills that are being automated even as we conduct the training. The half-life of technical knowledge has always been short in software, but AI has accelerated this to the point of absurdity. By the time a developer completes a course on a particular framework or methodology, AI tools have already internalized that knowledge and can apply it faster and more consistently than any human (usual AI caveats apply).

The argument that developers need to “understand the fundamentals” to effectively use AI is wishful thinking from an industry trying to justify its existence. Junior developers are already shipping production code by describing requirements to AI and validating outputs. The bottleneck isn’t their understanding—it’s organisational factors like the social dynamic, relationships, requirements clarity and system architecture. Training developers in minutiae that AI handles better is like training mathematicians to use slide rules in the calculator age.

The Hard Truth

The developer training industry persists not because it works, but because it serves organisational needs that have nothing to do with actual improvement. It provides HR with checkboxes for professional development requirements. It gives managers a feel-good initiative to tout in interviews and quarterly reviews. It offers developers a sanctioned way to take a break from the grind. Everyone benefits except the balance sheet.

If organisations genuinely wanted better software, they’d stop pouring money into training programs and start fixing the systems that prevent good work: rigid processes, unrealistic deadlines, toxic relationships, flawed shared assumptions and beliefs, and misaligned incentives. They’d hire fewer developers at higher salaries, giving them the time and autonomy to do quality work. They’d measure success by folks’ needs met rather than velocity and feature count. But that would require admitting that the problem isn’t the developers—it’s everything else. And that’s a far more uncomfortable conversation than simply booking another training workshop.

The Comfortable Lie: Why We Don’t Actually Learn From Our Mistakes

We love a good comeback story. The entrepreneur who failed three times before striking it rich. The developer who learnt from a catastrophic production incident and never made ‘that mistake’ again. We tell these stories because they’re comforting—they suggest that failure has a purpose, that our pain is an investment in wisdom.

But what if this narrative is mostly fiction? What if, in the contexts where we most desperately want to learn from our mistakes—complex, adaptive systems like software development—it’s not just difficult to learn from failure, but actually impossible in any meaningful way?

The Illusion of Causality

Consider a typical software development post-mortem. A service went down at 2 AM. After hours of investigation, the team identifies the culprit: an innocuous configuration change made three days earlier, combined with a gradual memory leak, triggered by an unusual traffic pattern, exacerbated by a caching strategy that seemed fine in testing. The conclusion? ‘We learnt that we need better monitoring for memory issues and more rigorous review of configuration changes.’

But did they really learn anything useful?

The problem is that this wasn’t a simple cause-and-effect situation. It was the intersection of dozens of factors, most of which were present for months or years without issue. The memory leak existed in production for six months. The caching strategy had been in place for two years. The configuration change was reviewed by three senior engineers. None of these factors alone caused the outage—it required their precise combination at that specific moment.

In complex adaptive systems, causality is not linear. There’s no single mistake to point to, no clear lesson to extract. The system is a web of interacting components where small changes can have outsized effects, where the same action can produce wildly different outcomes depending on context, and where the context itself is always shifting.
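The combinatorial nature of this kind of failure can be sketched in a few lines of entirely hypothetical Python. The factor names and thresholds below are illustrative inventions, not details from any real post-mortem; the point is only that each condition is harmless in isolation, and the outage appears solely in their combination:

```python
def service_fails(memory_leak_mb: int, config_changed: bool, unusual_traffic: bool) -> bool:
    """Toy model: the outage requires ALL three long-standing conditions to coincide."""
    return memory_leak_mb > 500 and config_changed and unusual_traffic

# Each factor alone, present for months: no outage.
assert not service_fails(600, False, False)  # the slow memory leak by itself
assert not service_fails(0, True, False)     # the reviewed configuration change by itself
assert not service_fails(0, False, True)     # the unusual traffic pattern by itself

# The precise combination at 2 AM on that particular night: outage.
assert service_fails(600, True, True)
```

No individual `if` branch here is “the root cause”, which is exactly why asking “which mistake caused this?” is the wrong question for such systems.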

The Context Problem

Here’s what makes this especially insidious: even if we could perfectly understand what went wrong, that understanding is locked to a specific moment in time. Software systems don’t stand still. By the time we’ve finished our post-mortem, the team composition has changed, two dependencies have been updated, traffic patterns have evolved, and three new features have been deployed. The system we’re analysing no longer exists.

This is why the most confident proclamations—’We’ll never let this happen again’—are often followed by remarkably similar failures. Not because teams are incompetent or negligent, but because they’re trying to apply lessons from System A to System B, when System B only superficially resembles its predecessor. The lesson learnt was ‘don’t deploy configuration changes on Fridays without additional review’, but the next incident happens on a Tuesday with a code change that went through extensive review. Was the lesson wrong? Or was it just irrelevant to the new context?

The Narrative Fallacy

Humans are storytelling machines. When something goes wrong, we instinctively construct a narrative that makes sense of the chaos. We identify villains (the junior developer who made the change), heroes (the senior engineer who diagnosed the issue), and a moral (the importance of code review). These narratives feel true because they’re coherent.

But coherence is not the same as accuracy. In the aftermath of failure, we suffer from hindsight bias—knowing the outcome, we see a clear path from cause to effect that was never actually clear at the time. We say ‘the warning signs were there’ when in reality those same ‘warning signs’ are present all the time without incident. We construct a story that couldn’t have been written before the fact.

This is why war stories in software development are simultaneously compelling and useless. The grizzled veteran who regales you with tales of production disasters is imparting wisdom that feels profound but often amounts to ‘this specific thing went wrong in this specific way in this specific system at this specific time’. And the specifics are rarely defined. The lesson learnt is over-fitted to a single data point.

Emergence and Irreducibility

Complex adaptive systems exhibit emergence—behaviour that arises from the interaction of components but cannot be predicted by analysing those components in isolation (cf. Buckminster Fuller’s Synergetics). Your microservices architecture might work perfectly in testing, under load simulation, and even in production for months. Then one day, a particular sequence of requests, combined with a specific distribution of data across shards, triggers a cascade failure that brings down the entire system.

You can’t ‘learn’ to prevent emergent failures because you can’t predict them. They arise from the system’s complexity itself. Adding more tests, more monitoring, more safeguards—these changes don’t eliminate emergence, they just add new components to the complex system, creating new possibilities for emergent behaviour.

The Adaptation Trap

Here’s the final twist: complex adaptive systems adapt. When you implement a lesson learnt, you’re not just fixing a problem—you’re changing the system. And when the system changes, the behaviours that emerge from it change too.

Add comprehensive monitoring after an outage? Now developers start relying on monitoring as a crutch, writing less defensive code because they know they’ll be alerted to issues. Implement mandatory code review after a bad deployment? Now developers become complacent, assuming that anything that passed review must be safe. The system adapts around your interventions, often in ways that undermine their original purpose.

This isn’t a failure of implementation—it’s a fundamental characteristic of complex adaptive systems. They don’t have stable equilibrium points. Every intervention shifts the system to a new state with its own unique vulnerabilities.

So What Do We Do?

If we can’t learn from our mistakes in any straightforward way, what’s the alternative? Are we doomed to repeat the same failures for ever?

Not quite. The solution is to stop pretending we can extract universal lessons from specific failures and instead focus on building systems that are resilient to the inevitable surprises we can’t predict.

This means designing for graceful degradation rather than preventing all failures. It means building systems that can absorb shocks and recover quickly rather than systems that need to be perfect. It means accepting that production is fundamentally different from any testing environment and that the only way to understand system behaviour is to observe it in production with real users and real data.

It also means being humble. Every post-mortem that ends with ‘we’ve identified the root cause and implemented fixes to prevent this from happening again’ is cosplaying certainty in a domain defined by uncertainty. A more honest conclusion might be: ‘This is what we think happened, given our limited ability to understand complex systems. We’re making some changes that might help, but we acknowledge that we’re also potentially introducing new failure modes we haven’t imagined yet.’

The Productivity of Failure

None of this means that failures are useless. Incidents do provide value—they reveal the system’s boundaries, expose hidden assumptions, and force us to confront our mental models. But the value isn’t in extracting a tidy lesson that we can apply next time. The value is in the ongoing process of engaging with complexity, building intuition through repeated exposure, and developing a mindset that expects surprise rather than seeking certainty.

The developer who has been through multiple production incidents isn’t valuable because they’ve learnt ‘lessons’ they can enumerate. They’re valuable because they’ve internalised a posture of humility, an expectation that systems will fail in ways they didn’t anticipate, and a comfort with operating in conditions of uncertainty.

That’s not the same as learning from mistakes. It’s something both more modest and more useful: developing wisdom about the limits of what we can learn.


The next time you hear someone confidently declare that they’ve learnt from a mistake, especially in a complex domain like software development, be sceptical. Not because they’re lying or incompetent, but because they’re human—and we all want to believe that our suffering has purchased something more substantial than just the experience of suffering. The truth is messier and less satisfying: in complex adaptive systems, the best we can hope for is not wisdom, but the wisdom to know how little wisdom we can extract from any single experience.


Further Reading

Allspaw, J. (2012). Fault injection in production: Making the case for resilience testing. Queue, 10(8), 30-35. https://doi.org/10.1145/2346916.2353017

Dekker, S. (2011). Drift into failure: From hunting broken components to understanding complex systems. Ashgate Publishing.

Dekker, S., & Pruchnicki, S. (2014). Drifting into failure: Theorising the dynamics of disaster incubation. Theoretical Issues in Ergonomics Science, 15(6), 534-544. https://doi.org/10.1080/1463922X.2013.856495

Fischhoff, B. (1975). Hindsight ≠ foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1(3), 288-299. https://doi.org/10.1037/0096-1523.1.3.288

Hollnagel, E., Woods, D. D., & Leveson, N. (Eds.). (2006). Resilience engineering: Concepts and precepts. Ashgate Publishing.

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. Cambridge University Press.

Leveson, N. G. (2012). Engineering a safer world: Systems thinking applied to safety. MIT Press.

Perrow, C. (1999). Normal accidents: Living with high-risk technologies (Updated ed.). Princeton University Press. (Original work published 1984)

Roese, N. J., & Vohs, K. D. (2012). Hindsight bias. Perspectives on Psychological Science, 7(5), 411-426. https://doi.org/10.1177/1745691612454303

Woods, D. D., & Allspaw, J. (2020). Revealing the critical role of human performance in software. Queue, 18(2), 48-71. https://doi.org/10.1145/3406065.3394867

The Great Listening Crisis of 2347

A Short Story

The thing about the end of human civilization is that nobody was actually paying attention when it happened.

I should know. I was there.

My name is Dr. Sarah Chen, and I’m—well, was—the Chief Communications Officer aboard the interstellar colony ship Probable Cause. Yes, that’s actually what they named it. The Naming Committee thought they were being hilarious. The Naming Committee, incidentally, did not listen to the hundreds of formal objections filed about the name. This could have been our first clue.

The crisis began at 0927 hours on a Tuesday, which is when Captain Morrison called the senior staff meeting to discuss what he termed “a minor navigational anomaly.”

“We’re heading directly into a black hole,” said Lieutenant Park, our navigator, before Morrison had even finished his opening remarks.

“Thank you, Sarah,” Morrison said to me, not making eye contact with Park. “Now, as I was saying, we need to discuss the crew morale initiative—”

“Captain,” Park interrupted, because she hadn’t yet learned that interrupting Morrison was like trying to stop a freight train with a strongly worded letter. “Black. Hole. We have maybe six hours before the gravitational effects become catastrophic.”

“I appreciate your enthusiasm,” Morrison said, still looking at his tablet, “but let’s stay on topic. The crew has been complaining about the quality of the synthetic coffee.”

I watched Park’s face do something that would have been fascinating under different circumstances. It was the precise moment when a competent professional realizes they’re living in a farce.

“Sir,” she tried again, her voice tight. “I’m telling you we’re going to die.”

“And I’m telling you,” Morrison said, finally looking up with that practiced expression of patient condescension that probably worked great at his TED talk, “that we need to maintain crew morale. Now, Raj, what’s the status on the coffee situation?”

Chief Engineer Patel looked up from his phone. “Sorry, what?”

“The coffee,” Morrison repeated.

“Oh, yeah, totally.” Patel nodded vigorously. He had clearly not been listening and was just agreeing so as to get back to his phone. “We’ll definitely get right on that.”

“Excellent,” Morrison said.

Park tried to interrupt three more times. Each time, someone spoke over her. By the fourth attempt, she just stood up and left. Morrison didn’t notice. He was too busy explaining his vision for a crew talent show.

In the hallway, I caught up with Park.

“I heard you,” I said.

She spun around. “Did you? Did you actually hear me, or are you just saying the words?”

It was a fair question. I thought about it. Had I really heard her? Or had I just detected the sounds coming from her mouth while I was busy thinking about what I was going to say next?

“Six hours?” I asked.

“Give or take.”

“What do we need to do?”

“I need access to the main navigation controls. But Morrison locked me out after I ‘exceeded my authority’ by trying to change course without his approval.”

“When did you try to change course?”

“Fifteen minutes ago. While he was ‘actively listening’ to Jenkins complain about the smell in the recycling bay.”

I pulled up my communicator. “Let me call—”

“I already sent seventeen urgent messages to the bridge crew,” Park said. “Martinez replied ‘lol.’ Thompson sent back a thumbs up. Wilson told me he was really focused on being present in the moment and couldn’t deal with my negative energy right now.”

“Wilson’s been doing that mindfulness course,” I said.

“I noticed.” Park’s voice was flat. “Very present. Very in the moment. Presently about to be in the moment when we cross the event horizon.”

We tried the direct approach next. Park and I went to the bridge together. We brought charts. We brought data. We brought a simulation that literally showed the ship being spaghettified.

Commander Oakes listened politely while checking his messages. When Park finished, he nodded thoughtfully.

“That’s really interesting,” he said. “But have you considered that maybe the black hole is a metaphor?”

“A metaphor for what?” Park’s voice had reached a pitch I didn’t know human vocal cords could achieve.

“For the darkness we all carry inside us,” Oakes said. “I’ve been reading this amazing book about—”

“It’s not a metaphor!” Park shouted. “It’s an actual goddamn black hole! With actual goddamn gravity! That is actually going to actually kill us!”

“I hear that you’re feeling frustrated,” Oakes said in the tone people use when they’ve just learned about active listening but understood none of it. “And I want you to know that your feelings are valid.”

“My feelings?” Park looked like she might actually explode.

“I’m sensing a lot of anger,” Oakes continued, making concerned eye contact while obviously thinking about something else. “Have you tried the meditation app I recommended?”

That’s when Park grabbed the fire extinguisher.

I managed to stop her before she brained him with it, but it was a close thing.

“Five hours,” she told me, breathing hard. “We have five hours left and nobody will listen for five consecutive seconds.”

I had an idea. It was a terrible idea, but all the good ideas required people to actually listen, so terrible was what we had left.

“The PA system,” I said. “Ship-wide announcement.”

“Morrison will just talk over it.”

“Not if we lock him in his quarters first.”

Park smiled for the first time that day. It was not a reassuring smile.

Fifteen minutes later, Morrison was “temporarily confined for his own safety” (he’d been practicing his juggling for the talent show and kept dropping the balls), and I had access to the PA system.

“Attention all crew,” I said, my voice echoing through every corridor of the ship. “This is Dr. Chen. Stop what you’re doing and listen. Not listening-to-reply. Not fake listening. Not listening while you think about what you’re going to have for lunch. Actually listening. Because in four hours and forty-two minutes, we’re going to die.”

I paused. On the security monitors, I could see people stopping, looking up at the speakers.

“Lieutenant Park has been trying to tell us this all morning. None of us listened. Not really. We heard sounds coming out of her mouth and we thought ‘that’s nice’ and went back to our own thoughts. We nodded and said ‘uh-huh’ and didn’t process a single word. We interrupted. We changed the subject. We got defensive. We did everything except actually listen.”

Another pause. More people were stopping now.

“Here’s what’s alive in Lieutenant Park right now,” I continued, borrowing from the backgrounder I’d read last year about NVC listening. “She’s desperate. She’s terrified. And she knows exactly how to save us if we’ll just give her the chance.”

I switched the PA over to Park.

Her voice was steady now. Clear. “We need to execute an emergency burn in four hours and thirty-eight minutes. Not before—we won’t have enough power. Not after—we’ll be past the point of no return. Chief Patel, I need you to redirect all non-essential power to the engines. Dr. Yamamoto, I need the medical bay on standby for potential G-force injuries. Martinez, I need you to stop texting and actually pilot the ship when I give the word.”

Silence.

Then: “Copy that, Lieutenant.” Patel’s voice, serious for once.

“Medical bay standing by.” Yamamoto.

“Phone’s off. Ready when you are.” Martinez.

One by one, the crew responded. Actually responded. Actually listened.

We executed the burn at exactly the right moment. The Probable Cause shuddered, groaned, and pulled away from the black hole’s event horizon with about twelve hundred kilometers to spare.

Later, after the excitement died down and we’d checked that everyone survived with all their limbs intact and in the right places, I found Park in the observation deck, staring at the stars.

“Thank you,” she said. “For listening.”

“I should have listened the first time,” I said. “We all should have.”

“Yeah, well.” She shrugged. “We’re human. We’re kind of terrible at it.”

“We could get better.”

“We could.” She turned to look at me. “Think we will?”

The next morning, Captain Morrison called a meeting to discuss implementing a new “active listening protocol.” He talked for forty-five minutes straight without letting anyone else speak. Half the senior staff was checking their phones. Oakes kept trying to bring up the subject of his meditation app.

I caught Park’s eye across the table. She raised an eyebrow.

“Black hole?” I mouthed silently.

She checked her console and shook her head. Then she paused, looked again, and her eyes went wide.

“Captain,” she said.

Morrison kept talking.

“Captain,” she said louder.

He held up one finger in a “wait a moment” gesture and continued his explanation of the importance of really hearing what people are saying.

“CAPTAIN!” Park shouted.

“Please don’t interrupt, Lieutenant. I’m trying to make an important point about listening.”

Park looked at me. I looked at the fire extinguisher still sitting in the corner from yesterday.

“You know what?” Park said, standing up. “Never mind. Forget I said anything.”

“Thank you,” Morrison said. “Now, as I was saying about the art of truly hearing another person—”

I give us six hours. Maybe seven.

But at least we’ll go down proving that humanity’s greatest skill has always been its absolutely remarkable ability to not pay attention to anything that matters.

 

THE END

Is Leadership the Answer?

Do you assume that leadership is a positive thing? What are the consequences of that assumption?

Resetting: An Invitation to Own What Comes Next

I haven’t published here in quite some time. Not from lack of ideas—if anything, the opposite. After over 50 years of studying and practising software development, management and organisational dynamics, I have more to say than ever about the human dimensions of our work. But I’ve realised I can approach this better.

I was writing from what I thought was important, what I wanted to explore, what I believed needed saying. And whilst there’s nothing inherently wrong with that, it misses something fundamental that I’ve spent decades learning: ownership matters, and invitation is how ownership happens.

The Gap Between Writing and Connection

Here’s what I’ve come to understand: when I write from my agenda alone, I’m imposing a curriculum. I might be right about what’s valuable or useful, but rightness isn’t the point. The point is whether what I’m offering connects with where you are, what you’re wrestling with, what you’re ready to explore.

This maps directly to what I’ve learnt about organisations. Whether you’re leading a development team, managing a department, or setting strategy at the executive level, you’re navigating social complexity. Organisations operate on collective assumptions and beliefs that are often invisible:

  • What constitutes good work
  • How decisions really get made
  • Who gets to challenge the status quo
  • What trade-offs are acceptable
  • What problems are worth solving
  • How people might better relate to each other across hierarchies
  • Etc.

These assumptions shape everything, but they’re rarely examined because no one thinks to invite that examination.

And here’s the ironic part: I’ve been doing the same thing with this blog. Operating on my assumptions about what you needed, never actually inviting you into the conversation about what this space could be.

Ironic, given that for 20 years I’ve been emphasising the Antimatter Principle—attending to people’s actual needs rather than our assumptions about them. Apparently I still have things to learn about practising what I preach. This is exactly the kind of blind spot that self-awareness is supposed to catch, and it took my recent hiatus for me to reconnect with the principle.

The Reset

So here’s what I’m proposing—really, what I’m inviting:

Tell me what you want to explore.

Not just topics, though those matter. Also formats and media. Do you want:

  • Short reflections or deep dives?
  • Case studies from real organisations or conceptual frameworks?
  • Dialogues and Q&A or essays?
  • Written posts, recorded conversations, podcasts, video shorts, or something else entirely?

I’m super interested in what you’re actually curious about. What challenges are you facing—whether that’s:

  • Team dynamics and collaboration in software development
  • Middle management’s squeeze between strategic directives and ground-level realities
  • Executive decisions about culture, structure, and organisational transformation
  • The gap between what leadership espouses and what actually happens
  • How self-awareness (individual and collective) shapes organisational outcomes
  • Navigating technical decisions with human implications
  • Making sense of resistance, politics, and power

And here’s the question that matters most: How can my experiences and insights from a long career help you?

What are you trying to understand? What are you trying to change? What patterns have you noticed but can’t quite name yet?

There are over 1,500 posts in the archives here (and my books and white papers, too). Feel free to mine them for ideas—topics you’d like to see revisited, expanded, updated, or challenged. What sparked something for you that needs more exploration? What made sense years ago but feels different now? What concepts need translating for today’s context?

Why This Matters

This isn’t about making the blog more ‘user-friendly’. It’s about something much deeper.

When you own the direction of your learning—when you’re invited to shape what we explore together rather than passively receiving what I decide to present—something shifts. You engage differently. You bring your own experiences into dialogue with what’s offered. You’re more likely to actually use what we discuss because it’s connected to your genuine needs and curiosity, not my assumptions about what you could be asking.

And just as importantly: I’ll learn from what you ask for. Your questions will reveal the collective assumptions and challenges in organisations right now. Your format preferences will show me how people are actually trying to integrate these ideas into their work. This becomes a genuine exchange, not a broadcast. Seems more in tune with the current zeitgeist?

The Invitation

So: What do you want to explore about the human dimensions of work—in software development, in management, in organisational life?

What problems are you facing that feel stubborn or invisible? What assumptions have you started to question? Where do you see the gap between what people say matters and what actually drives decisions? What have you noticed about how self-awareness—yours, your team’s, your organisation’s—changes what’s possible?

What format would actually be useful to you? And how can my half-century of experience serve what you’re trying to learn or accomplish?

You can reach me:

I’ve been thinking about what I want to say for a while now. I’m more interested in learning what you want to explore.

Let’s see what we can discover together!

Further Reading

Argyris, C. (1991). Teaching smart people how to learn. Harvard Business Review, 69(3), 99–109.

DeMarco, T., & Lister, T. (1999). Peopleware: Productive projects and teams (2nd ed.). Dorset House.

Marshall, R. W. (2019). Hearts over diamonds: Serving business and society through organisational psychotherapy. Leanpub.

Marshall, R. W. (2021a). Memeology: Surfacing and reflecting on the organisation’s collective assumptions and beliefs. Leanpub.

Marshall, R. W. (2021b). Quintessence: An acme for highly effective software development organisations. Leanpub.

Schein, E. H. (2010). Organizational culture and leadership (4th ed.). Jossey-Bass.

Senge, P. M. (2006). The fifth discipline: The art and practice of the learning organization (Rev. ed.). Currency/Doubleday.

Weinberg, G. M. (1992). Quality software management: Vol. 1. Systems thinking. Dorset House.

This Company Eliminated All Managers and Turned Every Product Team Into a Profitable Startup

Whilst most tech companies are debating the merits of OKRs versus alternative goal-setting frameworks, a Chinese appliance manufacturer has been quietly running one of the world’s most radical organisational experiments. Haier’s “Rendanheyi” model—which translates roughly to “the integration of employee value and user value”—has transformed a near-bankrupt refrigerator company into a global giant by eliminating traditional management and replacing it with market-driven entrepreneurship.

But could this model work for software development organisations? The answer might surprise you.

What Is Rendanheyi?

Rendanheyi flips traditional corporate structure on its head. Instead of hierarchical departments managed through cascading objectives, Haier operates as an ecosystem of thousands of small, autonomous business units it calls “microenterprises”. Each unit:

  • Operates as an independent business with full P&L responsibility
  • Serves real customers (either external customers or other internal units)
  • Buys and sells services from other units at market rates
  • Shares in financial outcomes through profit-sharing and ownership stakes
  • Has direct access to resources without middle management gatekeeping

The result? No middle managers, no annual planning cycles, no OKRs—just market forces driving alignment and accountability.

Why Traditional Goal-Setting Falls Short in Software Development

Before exploring how Rendanheyi might apply to software organisations, it’s worth acknowledging why conventional approaches often struggle:

The Innovation Problem: OKRs assume you can predict what success looks like. But breakthrough software products often emerge from experimentation that defies predetermined key results.

The Measurement Trap: Software development involves significant creative and problem-solving work that’s difficult to capture in quarterly metrics. Teams often end up optimising for what’s measurable rather than what’s valuable.

The Speed Penalty: Complex goal-setting processes add overhead and delay decision-making in an industry where speed to market is crucial.

The Scaling Challenge: As software organisations grow, maintaining alignment through cascading objectives becomes increasingly bureaucratic and disconnected from actual customer value.

Rendanheyi in Software: The Natural Fit

Software development organisations might actually be ideal candidates for Rendanheyi-style transformation, for several reasons:

Digital Products Enable Clean Unit Separation

Unlike manufacturing, where supply chains create complex interdependencies, software products can often be cleanly separated into distinct services, features, or platforms. A music streaming service, for example, could organise around units like:

  • Recommendation Engine Team (sells personalised playlists to other units)
  • Audio Infrastructure Team (provides streaming services to product teams)
  • Mobile Experience Team (builds customer-facing apps, buys services from backend teams)
  • Analytics Platform Team (sells data insights to all other units)

Natural Market Pricing Mechanisms

Software organisations already use internal concepts that mirror market pricing:

  • Infrastructure costs (AWS bills, computational resources)
  • Engineering time (sprint capacity, story points)
  • User engagement metrics (DAU, retention, conversion rates)

These existing metrics could form the basis of an internal market where teams buy and sell services from each other.

Rendanheyi Through the Lens of Quintessence

The “Quintessence” framework—which describes the ideal software development organisation—provides a compelling lens through which to evaluate Haier’s model. The alignment is remarkably strong, suggesting that Rendanheyi might represent one of the closest real-world implementations of quintessential organisational principles.

Strong Convergence Areas

Elimination of Traditional Management: Both Rendanheyi and Quintessence completely reject traditional hierarchical management. The quintessential organisation has “no managers” and emphasises that people doing the work should own “the way the work works.” Haier’s elimination of middle management and direct connection between autonomous units mirrors this perfectly.

Flow Over Silos: Quintessence emphasises horizontal value streams rather than vertical departmental structures. Haier’s approach of organising around customer-facing business units rather than functional departments aligns with this principle of organising around value flow rather than organisational convenience.

Trust and Autonomy: Both frameworks treat people as capable adults rather than resources to be controlled. Theory-Y assumptions about human nature—that people find joy in collaborative work and naturally take ownership—align with Haier’s trust in unit leaders to make entrepreneurial decisions.

Distributed Decision-Making: Quintessence advocates for the “Advice Process” and pushing decisions to where information originates. Haier’s model of autonomous decision-making within market constraints serves a similar function.

Key Tensions and Philosophical Differences

Market Mechanisms vs. Needs-Based Coordination: This represents the most interesting tension. Haier uses internal markets and P&L responsibility as coordination mechanisms, whilst Quintessence emphasises collaborative attention to “folks’ needs” through dialogue and consensus. However, these approaches might be more complementary than contradictory—internal markets serve as a mechanism for surfacing and meeting stakeholder needs.

Individual vs. Collective Accountability: Haier emphasises individual entrepreneurial ownership within small units, whilst Quintessence explicitly prefers group accountability and rejects “single wringable neck” approaches. That said, Haier’s focus on small teams (10–15 people) somewhat bridges this gap.

Financial Incentives: Haier relies heavily on profit-sharing and market-based rewards, whilst Quintessence views extrinsic motivation sceptically, preferring intrinsic motivation through purpose, mastery, and autonomy. This represents a fundamental difference in assumptions about what motivates human behaviour.

Assessment: 75-85% Alignment

Haier achieves remarkable alignment with Quintessence principles—perhaps 75-85%—which is extraordinary for any large-scale implementation. The core philosophy of trusting people, eliminating bureaucracy, focusing on customer value, and enabling self-organisation aligns strongly.

The main gaps centre around Quintessence’s emphasis on nonviolence, psychology, and broader stakeholder consideration beyond customers and profits. Haier’s internal market mechanisms, whilst effective, might create competitive pressures that Quintessence would view as potentially harmful to interpersonal relationships.

Learning from Toyota’s Chief Engineer Model

Toyota’s Chief Engineer (CE) system offers another organisational model worth considering alongside Rendanheyi. The CE has complete responsibility for a vehicle programme from conception through production, coordinating across functional departments without formal authority over the specialists involved.

Key aspects of Toyota’s model that complement Rendanheyi thinking:

Cross-Functional Integration: The CE integrates expertise from multiple disciplines—similar to how Haier’s business units must coordinate across traditional functional boundaries.

Responsibility Without Authority: The CE must influence and coordinate without commanding—developing skills in consensus-building and collaborative decision-making that would serve software organisations well.

Long-Term Product Ownership: Unlike project-based structures, the CE maintains responsibility throughout the product lifecycle, similar to how Haier units maintain ongoing customer relationships.

Market-Driven Decisions: The CE makes trade-offs based on customer needs and market constraints rather than internal politics or resource optimisation.

For software organisations, a hybrid approach might combine Haier’s entrepreneurial autonomy with Toyota’s integration model—autonomous product teams with designated integrators responsible for cross-team coordination and long-term product vision.

Built-in Customer Feedback Loops

Software development organisations already have rapid feedback mechanisms through user analytics, A/B testing, and deployment metrics. This real-time customer data could drive market dynamics more effectively than quarterly OKR reviews.

A Rendanheyi Software Organisation in Practice

Imagine a mid-sized SaaS company reorganising around Rendanheyi principles:

Team Structure

Instead of traditional engineering, product, and design departments, the company forms customer-focused units:

  • Onboarding Experience Squad (responsible for new user activation)
  • Core Platform Team (provides APIs and infrastructure services)
  • Enterprise Features Unit (builds B2B functionality)
  • Growth Engine Team (drives user acquisition and retention)

Internal Market Dynamics

Teams operate on market principles:

  • The Growth Engine Team “buys” A/B testing infrastructure from the Core Platform Team at rates based on computational costs and engineering time
  • The Enterprise Features Unit “pays” the Onboarding Squad based on how many enterprise customers successfully complete setup
  • Teams can choose to build internally or “buy” from other teams based on cost and quality
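To make the dynamics above concrete, here’s a minimal sketch of an internal service market as a ledger of transfers between units. Everything in it—the team names, rates, and cost figures—is invented for illustration, not a description of any real pricing scheme:

```python
from dataclasses import dataclass, field

# A minimal, hypothetical sketch of an internal service market.
# Team names, rates, and cost figures are invented for illustration.

@dataclass
class Ledger:
    balances: dict = field(default_factory=dict)

    def transfer(self, buyer: str, seller: str, amount: float) -> None:
        """Record an internal purchase: the buyer pays the seller."""
        self.balances[buyer] = self.balances.get(buyer, 0.0) - amount
        self.balances[seller] = self.balances.get(seller, 0.0) + amount

ledger = Ledger()

# Growth Engine "buys" A/B testing infrastructure from Core Platform,
# priced from compute cost plus engineering time.
compute_cost = 1_200.0          # monthly infrastructure spend
engineering_hours = 16
hourly_rate = 75.0
price = compute_cost + engineering_hours * hourly_rate   # 2400.0

ledger.transfer("growth-engine", "core-platform", price)
print(ledger.balances)
```

The interesting design question isn’t the bookkeeping, which is trivial, but where the prices come from: cost-plus (as here), negotiated rates, or genuine internal bidding.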

Profit and Loss Responsibility

Each unit tracks financial performance:

  • Revenue attribution: Customer subscription revenue is attributed to teams based on feature usage and customer feedback
  • Cost allocation: Infrastructure, support, and development costs are allocated based on actual resource consumption
  • Profit sharing: Teams share in the financial success of their contributions
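A hedged sketch of what such a unit-level P&L calculation might look like, assuming invented figures and a simple consumption-based cost allocation:

```python
# A hedged sketch of a unit-level P&L with consumption-based cost
# allocation and profit sharing. All figures are invented.

def unit_pnl(attributed_revenue: float,
             resource_usage: dict,
             unit_costs: dict) -> dict:
    """Revenue minus costs allocated by actual resource consumption."""
    costs = {r: resource_usage[r] * unit_costs[r] for r in resource_usage}
    total_cost = sum(costs.values())
    return {
        "revenue": attributed_revenue,
        "costs": costs,
        "total_cost": total_cost,
        "profit": attributed_revenue - total_cost,
    }

# A hypothetical month for the Enterprise Features Unit.
pnl = unit_pnl(
    attributed_revenue=48_000.0,   # subscription revenue attributed to its features
    resource_usage={"compute_hours": 900, "support_tickets": 120, "dev_hours": 320},
    unit_costs={"compute_hours": 2.5, "support_tickets": 15.0, "dev_hours": 75.0},
)

profit_share_rate = 0.10           # fraction of unit profit shared with the team
team_payout = pnl["profit"] * profit_share_rate
print(pnl["profit"], team_payout)
```

The hard part in practice is the revenue attribution input, not the arithmetic: deciding which subscriptions “belong” to which features is a judgement call that the code above simply takes as given.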

Autonomous Decision Making

Teams make independent choices about:

  • Technology stack and architecture decisions
  • Hiring and team composition
  • Feature priorities based on customer value
  • Whether to build, buy, or partner for new capabilities

The Benefits for Software Development

This approach could solve several persistent challenges in software organisations:

Faster Innovation: Teams can experiment and pivot without waiting for approval or updating company-wide roadmaps.

Better Customer Focus: When teams’ success is directly tied to customer value rather than internal metrics, product decisions improve.

Natural Scaling: As the organisation grows, new teams can form organically around customer needs rather than requiring top-down reorganisation.

Reduced Bureaucracy: No need for complex planning processes, alignment meetings, or quarterly business reviews.

Talent Retention: Engineers and designers get entrepreneurial ownership and direct impact on business outcomes.

The Challenges and Considerations

However, implementing Rendanheyi in software development isn’t without significant challenges:

Technical Architecture Requirements

Software systems would need to be architected for independence:

  • Microservices architecture to enable teams to deploy independently
  • Clear API boundaries to facilitate internal markets
  • Robust monitoring and billing to track resource usage across teams
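The “robust monitoring and billing” requirement might start as something as simple as metering cross-team calls. A sketch under stated assumptions—the team names, service name, and per-call rate are all hypothetical:

```python
from collections import defaultdict

# A hedged sketch of cross-team usage metering: each internal API call
# is tagged with the calling team so it can be billed later. Team names,
# service names, and the per-call rate are all hypothetical.

usage = defaultdict(int)   # (calling_team, service) -> call count

def metered(service_name: str):
    """Decorator that counts calls to an internal service, per calling team."""
    def wrap(fn):
        def inner(calling_team: str, *args, **kwargs):
            usage[(calling_team, service_name)] += 1
            return fn(*args, **kwargs)
        return inner
    return wrap

@metered("recommendations")
def get_recommendations(user_id: int) -> list:
    return [f"track-{user_id}-{i}" for i in range(3)]   # stand-in for real logic

# The Mobile Experience Team calls the Recommendation Engine twice.
get_recommendations("mobile-experience", user_id=1)
get_recommendations("mobile-experience", user_id=2)

rate_per_call = 0.002      # hypothetical internal price per call
bill = usage[("mobile-experience", "recommendations")] * rate_per_call
print(bill)
```

In a real system this would live in API-gateway middleware rather than a decorator, but the principle—attribute every call to a paying team—is the same.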

Cultural Transformation

The shift from employee to entrepreneur mindset is profound:

  • Risk tolerance: Team members must be comfortable with financial accountability
  • Collaboration vs. competition: Internal markets could create unhealthy competition between teams
  • Leadership development: Teams need entrepreneurial skills, not just technical expertise

Measurement and Pricing Complexity

Creating fair internal markets is challenging:

  • How do you price shared infrastructure like security, compliance, or platform services?
  • What happens to teams working on foundational technology that doesn’t directly drive revenue?
  • How do you handle cross-team dependencies that don’t fit clean market models?

Regulatory and Compliance Constraints

Software companies often face regulatory requirements that don’t align with fully autonomous units:

  • Data privacy regulations may require centralised oversight
  • Security standards might mandate consistent practices across teams
  • Financial reporting may require traditional departmental structures

Implementation Strategies for Software Organisations

For software companies interested in experimenting with Rendanheyi principles, here are some practical starting points:

Start with Product Teams

Begin by giving product development teams more autonomy and P&L visibility. Let them make technology choices, prioritise features based on customer data, and share in the financial outcomes of their work.

Create Internal Service Markets

Identify shared services (infrastructure, design systems, analytics) and experiment with market-based allocation. Let teams choose between building internally or “buying” from shared service teams.

Implement Transparent Cost Allocation

Make infrastructure costs, engineering time, and other resources visible to teams. Start charging teams for their actual resource consumption rather than treating these as free shared resources.

Develop Customer-Centric Metrics

Move beyond engineering metrics (story points, velocity) to customer value metrics (feature adoption, customer satisfaction, revenue attribution).

Experiment with Team Formation

Allow teams to form organically around customer problems or business opportunities rather than maintaining static organisational boundaries.

The Future of Software Organisation

Rendanheyi represents a fundamentally different approach to organisational design—one that treats internal operations like external markets and replaces management hierarchy with entrepreneurial ownership. For software development organisations facing scaling challenges, innovation bottlenecks, and talent retention issues, it offers a compelling alternative to traditional approaches.

Whilst full implementation requires significant commitment and cultural change, the principles behind Rendanheyi—customer focus, market accountability, entrepreneurial ownership, and autonomous decision-making—can be adopted incrementally by forward-thinking software organisations.

The question isn’t whether your next quarterly planning cycle should use OKRs or another goal-setting framework. The question is whether you’re ready to move beyond goal-setting entirely and trust market forces to drive alignment and performance.

For software organisations willing to make this leap, the rewards could be transformational: faster innovation, better customer outcomes, and a more engaged, entrepreneurial workforce that thinks like owners because they actually are owners.

The future of software development might not look like traditional corporate hierarchies at all. It might look more like a marketplace of entrepreneurial teams, competing and collaborating to create customer value. And that future might be closer than we think.

Further Reading

Hamel, G. (2011). The big idea: First, let’s fire all the managers. Harvard Business Review, December 2011.

Hamel, G., & Zanini, M. (2018). The end of bureaucracy. Harvard Business Review, November–December 2018.

Hamel, G., & Zanini, M. (2020). Humanocracy: Creating organizations as amazing as the people inside them. Harvard Business Review Press.

Kennedy, M. N. (2003). Product development for the lean enterprise: Why Toyota’s system is four times more productive and how you can implement it. Oaklea Press.

Marshall, R. W. (2021). Quintessence: An acme for highly effective software development organisations. Leanpub.

Pink, D. H. (2009). Drive: The surprising truth about what motivates us. Riverhead Books.

Reinertsen, D. G. (2009). The principles of product development flow: Second generation lean product development. Celeritas Publishing.

Sobek, D. K., Ward, A. C., & Liker, J. K. (1999). Toyota’s principles of set-based concurrent engineering. MIT Sloan Management Review, 40(2), 67–83.

Why Curiosity Beats Shame in Software Retrospectives

There’s a moment in therapy that therapists call ‘the shift’—when you stop drowning in your patterns and start watching them with fascination. You realise you’ve been having the same argument with your partner for three years, and instead of feeling like a broken record, you start laughing. ‘Oh, there I go again, catastrophising about the dishes.’ The pattern doesn’t vanish overnight, but something fundamental changes: you’re no longer at war with yourself.

What if software teams could experience this same shift?

The Drama We Know By Heart

Every team has their recurring drama. Maybe it’s the sprint planning meeting that always runs two hours over because nobody can agree on story points. Perhaps it’s the deployment Friday that inevitably becomes deployment Monday because ‘just one small thing’ broke. Or the code review discussions that spiral into philosophical debates about variable naming or coding standards more generally, whilst the actual logic bugs slip through unnoticed.

We know these patterns intimately. We’ve lived them dozens of times. Yet most teams approach retrospectives like a tribunal, armed with post-its and grim determination to ‘fix our dysfunction once and for all.’ We dissect our failures with the energy of surgeons operating on ourselves, convinced that enough shame and analysis will finally make us different people.

But what if we’re approaching this backwards?

The Mice Would Find Us Fascinating

Douglas Adams had it right when he suggested that mice might be the truly intelligent beings, observing human behaviour with scientific curiosity. Imagine if we could watch our team dynamics the way those hyperintelligent mice observe us—with detached fascination rather than existential dread.

‘Interesting,’ the mice might note. ‘When the humans feel time pressure, they consistently skip the testing phase, then spend three times longer fixing the resulting problems. They repeat this behaviour with remarkable consistency, despite claiming to have “learned their lesson” each time.’

The mice wouldn’t judge us. They’d simply observe the pattern, maybe take some notes, perhaps adjust their experiment parameters. They wouldn’t waste energy being disappointed in human nature.

The Science of Predictable Irrationality

Behavioural economists like Dan Ariely have spent decades documenting how humans make decisions in ways that are wildly irrational but remarkably consistent. We’re predictably bad at estimating time, systematically overconfident in our abilities, and reliably influenced by factors we don’t even notice. These aren’t bugs in human cognition—they’re features that served us well in evolutionary contexts but create interesting challenges in modern day work environments.

Software teams exhibit these same patterns at scale. We consistently underestimate complex tasks (planning fallacy), overvalue our current approach versus alternatives (status quo bias), and make decisions based on whoever spoke last in the meeting (recency effect). The beautiful thing is that once you name these patterns, they become less mysterious and more laughable.

Curiosity as a Debugging Tool

When we approach our team patterns with curiosity instead of judgement, something magical happens. The defensive walls come down. Instead of ‘Why do we always screw this up?’ we start asking ‘What conditions reliably create this outcome?’

This shift from shame to science transforms retrospectives from group therapy sessions into collaborative debugging. We’re not broken systems that need fixing—we’re complex systems exhibiting predictable behaviours under certain conditions. Complex systems can be better understood through observation, and sometimes influenced through small experiments, though the outcomes are often unpredictable.

Consider the team that always underestimates their stories. The shame-based approach produces familiar results: ‘We need to be more realistic about our estimates.’ (Spoiler alert: they won’t be.) The curiosity-based approach asks different questions: ‘What happens right before we make these optimistic estimates? What information are we missing? What incentives and other factors are shaping our behaviour?’

The Hilariously Predictable Humans

Once you start looking for patterns with curiosity, they become almost endearing. The senior developer who always says ‘this should be quick’ right before disappearing into a three-day rabbit hole. The product manager who swears this feature is ‘simple’ whilst gesturing vaguely at convoluted requirements that would make a vicar weep. The team that collectively suffers from meeting amnesia, forgetting everything discussed five seconds after the meeting ends.

These aren’t character flaws to be eliminated. They’re what Dan Ariely would call ‘predictably irrational’ behaviours—systematic quirks in how humans process information and make decisions. The senior developer genuinely believes it will be quick because they’re anchored on the happy path scenario (classic anchoring bias). The product manager sees simplicity because they’re viewing it through the lens of user experience, not implementation complexity (curse of knowledge in reverse). The team forgets meeting details because our brains are optimised for pattern recognition, not information retention across context switches.

We’re not broken. We’re just predictably, irrationally human.

Practical Curiosity: Retrospective Questions That Transform

Instead of ‘What went wrong this sprint?’ you might like to try:

  • ‘What hilariously predictable human things did we do again?’
  • ‘If we were studying ourselves from the outside, what would be fascinating about our behaviour?’
  • ‘What patterns are we executing so consistently that we could almost set our watches by them?’
  • ‘Under what conditions do we make our most questionable decisions?’
  • ‘What shared assumptions inevitably led to this sprint’s outcomes?’
  • ‘What would the mice find interesting about how we work?’

These questions invite observation rather than judgement. They make space for laughter, which is the enemy of shame. And shame, in turn, is the antithesis of learning.

The Liberation of Accepting Our Programming

Here’s the paradox: accepting our patterns makes them easier to change. When we stop fighting our humanity and start working with it, we find leverage points we never noticed before.

The team that always underestimates might not become perfect estimators, but they can build buffers into their process (cf. Theory of Constraints buffering). The developer who disappears into rabbit holes can set timers and check-in points (such as Pomodoros). The product manager can be paired with someone who thinks in implementation terms.
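The buffering idea can be made concrete: rather than hoping estimates improve, scale them by a percentile of the team’s own historical overrun ratios. A minimal sketch, with an entirely invented estimation history:

```python
# A sketch of buffering estimates from history: instead of hoping
# estimates improve, scale them by a percentile of past overrun ratios.
# The history below is invented for illustration.

def buffered_estimate(raw_estimate: float, history: list,
                      percentile: float = 0.8) -> float:
    """Scale a raw estimate by the overrun ratio at the given percentile."""
    ratios = sorted(actual / estimate for estimate, actual in history)
    index = min(int(percentile * len(ratios)), len(ratios) - 1)
    return raw_estimate * ratios[index]

# (estimate_days, actual_days) for past stories -- invented numbers.
history = [(2, 3), (5, 6), (1, 1), (3, 6), (8, 9),
           (2, 5), (4, 4), (5, 8), (3, 3), (1, 2)]

print(buffered_estimate(4.0, history))   # a 4-day raw estimate becomes 8 days here
```

This is reference-class forecasting in miniature: the team’s behaviour doesn’t change, but the process now accounts for it.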

We don’t have to become different people. We just have to become people who understand ourselves better.

AI as a Curiosity Amplifier

Here’s where artificial intelligence might genuinely help—not as a problem-solver, but as a curiosity amplifier. AI excels at exactly the kind of pattern recognition that’s hard for humans trapped inside their own systems.

Pattern Recognition Beyond Human Limits

AI could spot correlations across longer timeframes than teams naturally track. Perhaps story underestimation worsens after certain types of client calls, or when specific team members are on holiday. Maybe over-architecting solutions correlates with unclear requirements, or planning meetings grow longer when the previous sprint’s velocity dropped.

These are the kinds of subtle, multi-factor patterns that human memory and attention struggle with, but that could reveal fascinating insights about team behaviour.
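As an illustration of the kind of correlation an AI (or indeed a spreadsheet) could surface, here’s a toy Pearson-correlation sketch over invented sprint data; a real implementation would pull from sprint history, calendars, and so on:

```python
# A toy sketch of the pattern mining described above: correlate one
# sprint condition with estimation error. All data is invented.

def pearson(xs: list, ys: list) -> float:
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One row per sprint: did a client call interrupt planning (0/1),
# and by what factor did actuals exceed estimates?
client_call = [1, 0, 1, 1, 0, 0, 1, 0]
overrun     = [1.9, 1.1, 1.6, 2.2, 1.0, 1.2, 1.8, 0.9]

r = pearson(client_call, overrun)
print(round(r, 2))
```

Correlation isn’t causation, of course—the point is only that such a signal becomes a retrospective question (‘what is it about those calls?’) rather than an accusation.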

Systematic Curiosity Drilling

More intriguingly, AI could help teams ask better layered questions: ‘We always over-architect when requirements are vague → What specific types of vagueness trigger this? → What makes unclear requirements feel threatening? → What would need to change to make simple solutions feel safe when requirements are evolving?’

This is the kind of systematic curiosity that therapists use—moving from ‘this is problematic’ to ‘this is interesting, let’s understand the deep logic.’ AI could be brilliant at sustaining that investigation without getting distracted or defensive.

The Crucial Cautions

But here’s what AI absolutely cannot do: the therapeutic shift itself. The moment of laughing at your patterns instead of being tormented by them? That’s irreplaceably human. AI risks creating surveillance anxiety—the sense that someone (or something) is always watching and judging.

There’s also the fundamental risk of reinforcing the very ‘fix the humans’ mentality this approach seeks to avoid. AI pattern recognition could easily slide back into ‘here are your dysfunctions, now optimise them away.’

The sweet spot might be AI as a very patient, non-judgemental research assistant—helping teams investigate their own behaviour more thoroughly. The humans still have to do the laughing, the accepting, and the choosing. But AI could make the curiosity richer and more evidential.

Just remember: the mice observed the humans with detached fascination, not with algorithms for improvement.

The Recursive Gift

The most beautiful part of this approach is that it’s recursive. Once your team learns to observe its patterns with curiosity, you’ll start applying this same gentle scrutiny to your retrospectives themselves. You’ll notice when you slip back into judgement mode and laugh about it. You’ll develop patterns for catching patterns.

You’ll become a team that’s as interested in how you think as in what you build. And that might be the most valuable code you ever debug.

The Pattern That Doesn’t Disappear

Your recurring drama won’t vanish. The sprint planning will probably still run long sometimes. The ‘quick fix’ will occasionally become a weekend project. But your relationship to these patterns will transform. You’ll work on them without the crushing weight of believing you should be different than you are.

And in that space—between pattern and judgement, between observation and criticism—you’ll find something remarkable: the room to actually change.

The mice would be proud.


Further Reading

Adams, D. (1979). The hitchhiker’s guide to the galaxy. Harmony Books.

Ariely, D. (2008). Predictably irrational: The hidden forces that shape our decisions. Harper.

Netó, D., Oliveira, J., Lopes, P., & Machado, P. P. (2024). Therapist self-awareness and perception of actual performance: The effects of listening to one recorded session. Research in Psychotherapy: Psychopathology, Process and Outcome, 27(1), 722. https://doi.org/10.4081/ripppo.2024.722

Williams, E. N. (2008). A psychotherapy researcher’s perspective on therapist self-awareness and self-focused attention after a decade of research. Psychotherapy Research, 18(2), 139–146.

Just Who is this Guy (FlowChainSensei) Anyway? And Why is He Qualified to Comment on Agile Software Development?

Claude and I wrote me a new bio.


In the world of software development discourse, few voices are as provocative—or as polarising—as the one behind the handle FlowChainSensei. If you’ve spent any time in Agile circles online, you’ve likely encountered his scathing critiques of Agile software development practices. But who exactly is this mysterious figure who claims that ‘40 million brilliant minds’ are now ‘spending their days in fruitless stand-ups and retrospectives’ and that ‘Agile has zero chance of delivering on its promises’?

Meet Bob Marshall: The Organisational AI Therapist

FlowChainSensei is the online persona of Bob Marshall, who currently describes himself as an ‘Organisational AI therapist’. This isn’t just a catchy title—Bob brings serious credentials that explain why his voice carries weight in discussions about the future of both software development and organisational effectiveness.

Backgrounder

Bob’s career trajectory reveals the depth of his software development expertise. His first 20 years were spent in the trenches as a developer, analyst, designer, architect and code troubleshooter—roles that gave him intimate knowledge of how software actually gets built and where things go wrong. This hands-on experience was followed by some 15 years helping a multitude of clients improve their software development approaches, before evolving into his current therapeutic practice.

This progression from practitioner to consultant to therapist reflects an increasingly sophisticated understanding of where the real problems lie in software development—not in the technical details, but in the human and organisational systems that create the context for technical work.

Five Decades in the Vanguard

Bob’s most compelling qualification is his longevity and verifiable involvement in the field. With 53 years in software development—including creating, back in 1994, practices that later became known as Agile—he has demonstrable evidence of being in the thick of it, indeed in the vanguard, even before Agile had a name.

Between 1994 and 2000, Bob was instrumental in creating what he calls ‘European Agile’, and the Javelin software development method. This wasn’t someone learning about Agile from a certification course—Bob has documentation, project records, and verifiable traces of his involvement in developing the foundational practices years before the Agile Manifesto was even written.

From Agile Pioneer to Organisational Psychotherapist

What sets Bob apart from other Agile worthies is his evolution beyond traditional consulting approaches. He spent four years as founder and CEO of Familiar, the first 100% Agile software house in Europe, but that was decades ago. He hasn’t been a consultant for over 20 years.

Instead, Bob developed what he calls ‘Organisational Psychotherapy’—bringing psychotherapy techniques out of the therapy room and into the organisation as a whole. He’s documented this approach extensively in his book Hearts over Diamonds: Serving Business and Society through Organisational Psychotherapy (Marshall, 2019).

The Therapeutic Alliance: Why Relationships Trump Solutions

Bob’s approach inverts everything most people expect from organisational change work. Where consultants diagnose problems and provide solutions, Bob creates space for organisations to surface their own unconscious assumptions. The key insight: it’s the therapeutic relationship itself that enables change, not any specific techniques or frameworks.

This relationship-centred approach explains why his work feels ‘alien’ to most business frameworks. People can’t categorise it as consulting, coaching, training, or change management because it operates from completely different assumptions about how transformation happens. As Bob notes in his therapeutic practice: voluntary participation is fundamental—nobody can be forced into genuine therapeutic engagement.

Organisational Cognitive Dissonance: The Hidden Driver of Readiness

One of Bob’s notable contributions is his analysis of organisational cognitive dissonance—what happens when organisations simultaneously hold incompatible belief systems. In his seminal 2012 post on OrgCogDiss, he explains how this internal tension creates the conditions for genuine change.

Unlike external pressure, which organisations can often rationalise away, cognitive dissonance is internal and harder to dismiss. Bob observed that this dissonance typically resolves with a half-life of roughly nine months—but often not in the direction change agents hope for. Organisations either fully adopt new approaches or revert to old patterns, but they can’t sustain internal contradictions indefinitely.
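Bob’s ‘half-life’ framing can be read as simple exponential decay. A minimal sketch (my illustration, not anything from Bob’s writing; only the nine-month figure is his):

```python
# Hypothetical illustration: modelling the "nine-month half-life" of
# organisational cognitive dissonance as exponential decay. The function
# returns the fraction of the initial dissonance still unresolved after
# a given number of months.

HALF_LIFE_MONTHS = 9  # the half-life Bob observes for organisational dissonance

def fraction_remaining(months: float) -> float:
    """Fraction of the initial dissonance still unresolved after `months`."""
    return 0.5 ** (months / HALF_LIFE_MONTHS)

if __name__ == "__main__":
    for months in (9, 18, 27):
        print(f"after {months} months: {fraction_remaining(months):.3f} remains")
```

On this reading, half the tension is gone (one way or the other) within nine months, three quarters within eighteen—which is roughly the window in which a transformation effort either takes or unravels.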

This insight explains why so many transformation efforts create enormous organisational pain but ultimately fail. The dissonance gets resolved through exits and resistance rather than genuine adoption, leaving organisations depleted but not actually transformed.

The Memeplex Problem: Why Piecemeal Change Fails

Bob’s work on ‘memeplexes’—interlocking systems of organisational beliefs—reveals why most change initiatives fail. You can’t swap out individual beliefs when they’re part of an interlocking system. Trying to introduce ‘self-organisation’ into a command-and-control organisation without addressing the entire supporting structure of beliefs about authority, expertise, and planning just creates internal contradictions.

He explores this concept further in his book Memeology: Surfacing and Reflecting on the Organisation’s Collective Assumptions and Beliefs (Marshall, 2021). Most failed transformations are attempts to graft elements from one memeplex onto another incompatible one, creating the very cognitive dissonance that eventually leads to rejection of the new elements.

Beyond Agile: The Quintessence Alternative

Bob isn’t just a critic—he’s developed alternatives. His framework ‘Quintessence’, detailed in his book Quintessence: An Acme for Highly Effective Software Development Organisations (Marshall, 2021), represents what he calls ‘the radical departure from Agile norms, based as it is on people-oriented technologies such as sociology, group dynamics, psychiatry, psychology, psychotherapy, anthropology, cognitive science and modern neuroscience’.

But true to his therapeutic approach, Bob doesn’t push Quintessence as a solution to be implemented. Instead, he creates conditions where organisations might naturally evolve toward more effective ways of working based on their own insights and readiness.

The Evidence Question: Why Facts Don’t Change Minds

Bob makes a provocative observation about the role of evidence in organisational change: assertions often carry more weight than verifiable facts because ‘nobody’s opinion is swayed by evidence’. This isn’t cynicism—it’s recognition that evidence gets interpreted within existing paradigms until something else creates readiness for change.

Drawing on Thomas Kuhn’s work on paradigm shifts, Bob notes that evidence alone never creates fundamental change—it gets reinterpreted within existing frameworks until organisations become ready to see things differently. This readiness comes from organisational stress and cognitive dissonance, not from logical argument.

The Readiness Challenge: Why Most People Don’t Engage

Bob sees the general lack of engagement with his writing as corroboration for Gallup’s data on employee engagement—few are yet ready to own improvement efforts and their own motivation. Most people remain trapped in patterns where they expect solutions to be provided rather than taking responsibility for their own transformation.

This explains why his therapeutic approach focuses on creating conditions for readiness rather than trying to convince people with evidence or argument. Until someone genuinely wants to change, all the insights in the world won’t help them.

Why His Critique Resonates

Bob’s perspective resonates with many practitioners experiencing Agile fatigue because he articulates what they feel but struggle to express. His recent post ‘How We Broke 40 Million Developers’ struck a chord by describing how modern practices often feel performative rather than productive.

The Bottom Line: Qualified by Experience and Understanding

Is Bob qualified to comment on Agile (and the alternatives)? Absolutely. His five+ decades in software development, his demonstrable involvement in creating pre-Agile practices, his experience successfully founding and running the first 100% Agile software house in Europe, and his deep work in organisational psychology and change give him a unique vantage point.

His perspective is clearly informed by his therapeutic practice and his promotion of alternative approaches. And his core challenge remains valid: after more than 20 years of Agile domination, are we better at attending to people’s needs? Are users getting products and services that genuinely serve them better?

The Therapeutic Difference

What makes Bob’s voice distinctive isn’t just his comments on Agile—it’s his deep insights and understanding of how organisational change actually works. His therapeutic approach recognises that transformation happens through relationships and readiness, not through evidence and argument. Organisations change when they’re ready to see themselves differently, not when they’re presented with compelling data about their dysfunction.

This insight challenges the entire ‘evidence-based’ approach to organisational improvement. Bob suggests that meaningful change happens through shifts in readiness and perspective, with evidence becoming compelling only after those shifts occur, not before.

Whether you see Bob as a wise elder statesman or a contrarian voice, his decades of experience and unique therapeutic perspective offer valuable insights into why so many development efforts and ‘Agile transformation’ efforts fail—and what might work better.

Of course, you have to have the motivation to do better for any of Bob’s insights to be of any help to you.

Further Reading

Argyris, C. (1990). Overcoming organizational defenses: Facilitating organizational learning. Allyn & Bacon.

Bridges, W. (2004). Managing transitions: Making the most of change (2nd ed.). Da Capo Press.

Festinger, L. (1957). A theory of cognitive dissonance. Stanford University Press.

Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.

Marshall, R. W. (2019). Hearts over diamonds: Serving business and society through organisational psychotherapy. Leanpub.

Marshall, R. W. (2021a). Memeology: Surfacing and reflecting on the organisation’s collective assumptions and beliefs. Leanpub.

Marshall, R. W. (2021b). Quintessence: An acme for highly effective software development organisations. Leanpub.

Marshall, R. W. (2012, November 16). OrgCogDiss. Think Different. Retrieved from https://flowchainsensei.wordpress.com/2012/11/16/orgcogdiss/

Rogers, C. R. (1951). Client-centered therapy: Its current practice, implications, and theory. Houghton Mifflin.

Seligman, M. E. P. (2011). Flourish: A visionary new understanding of happiness and well-being. Free Press.


Bob Marshall blogs at Think Different and his books on Organisational Psychotherapy, Memeology, and Quintessence are available through Leanpub.

The Agile Paradox: Why Developers Can’t Find a Way Out

How management mandates create the very problem they’re meant to solve


I’ve had the same conversation dozens of times over the past few years. A developer, usually over coffee or in the quiet corner of a conference, leans in and says something like: ‘I’m so tired of Agile. The ceremonies feel meaningless, the estimates are fiction, and we’re not actually being agile at all. Isn’t there something better?’

The frustration is real, and it’s widespread. But here’s the kicker: when I ask what alternative they’d prefer, the conversation usually stalls. Not because developers lack ideas—they have plenty—but because they’ve internalised a harsh truth. In most organisations, no alternative to Agile is ‘viable’ precisely because management has mandated Agile as the solution.

This creates a perfect catch-22 that keeps teams trapped in approaches that often aren’t working for them.

The Agile Promise vs Reality

Let’s be clear: Agile methodologies weren’t born from management consultants or process enthusiasts. They emerged from developers who were frustrated with rigid, documentation-heavy approaches that seemed designed to prevent software from being built. The original Agile Manifesto was a rebellion against exactly the kind of top-down mandate we see today.

But somewhere along the way, Agile became the thing it was meant to replace. Instead of ‘individuals and interactions over processes and tools’, we got Scrum Masters obsessing over story point velocity. Instead of ‘responding to change over following a plan’, we got sprint commitments treated as unbreakable contracts.

The irony is thick.

The Viability Trap

Here’s where the organisational dynamics get truly perverse. When management declares Agile as the standard approach, they’re not just choosing a process—they’re making it the only process that can succeed within the organisational context.

Consider what ‘viable’ means in a corporate environment:

  • Resource allocation: Only Agile-compatible roles get budget approval
  • Tool selection: The company invests in Jira, Azure DevOps, or other Agile-focused platforms
  • Performance metrics: Success is measured in story points, sprint velocity, and burndown charts
  • Career advancement: Understanding Agile ceremonies becomes a prerequisite for leadership roles
  • Vendor relationships: Third-party teams and consultants are selected based on Agile experience

In this ecosystem, suggesting an alternative approach isn’t just a process change—it’s asking the organisation to abandon significant investments and restructure fundamental management assumptions about how work gets measured and managed.

The Innovation Stifling Effect

The mandate creates a particularly insidious problem: it prevents the organic experimentation that led to Agile in the first place. The most effective development approaches often emerge from teams trying to solve specific problems in specific contexts. But when there’s only one ‘approved’ way to work, this natural evolution stops.

I’ve seen teams that would benefit enormously from approaches like:

  • Kanban-style continuous flow for maintenance-heavy projects
  • Shape Up-style cycles for product development with unclear requirements
  • Lean startup approaches for experimental features
  • Traditional waterfall for compliance-heavy or well-understood domains
  • Quintessence for a total, yet incremental and self-paced overhaul of shared assumptions and beliefs about the very nature of work

But suggesting any of these becomes an uphill battle against established process, tooling, and metrics—not because they wouldn’t work better, but because the organisation has made them unviable by design (and mandate).

The Hidden Costs

This viability trap creates costs that rarely show up in sprint retrospectives:

Developer burnout increases when people feel trapped in ineffective processes they can’t change. The psychological impact of being forced to participate in ceremonies you believe are wasteful is significant.

Innovation slows when teams spend more energy navigating process requirements than solving actual problems. Every stand-up spent discussing why estimates were wrong is time not spent making the product better.

Talent retention suffers as experienced developers seek environments where they have more autonomy over how they work. The best developers often have options, and rigid process adherence isn’t typically what attracts them (understatement!).

Breaking the Cycle

So how do we break out of this trap? The solution starts with recognising that the viability of alternatives is as much about the organisation as it is about development practices.

Start with principles, not processes. Instead of mandating specific ceremonies or frameworks, organisations might choose to focus on outcomes: shipped software, customer satisfaction, team health. Let teams experiment with how they achieve these goals.

Measure what matters. Story point velocity tells you very little about whether you’re building the right thing or building it well. Focus on metrics that actually correlate with business success and the needs of (all the) Folks That Matter™.

Create safe spaces for experimentation. Allow teams to try different approaches for specific projects or timeframes. Make it clear that experiments are encouraged, not career-limiting moves.

Invest in tooling flexibility. Choose platforms and tools that can support multiple approaches rather than locking into Agile-specific solutions (bin Jira).

Most importantly, recognise that context matters. The approach that works for a startup building an MVP is different from what works for a team maintaining critical infrastructure. One size fits all is exactly the kind of thinking that led to Agile rebellion in the first place.

The Path Forward

The developers crying out for alternatives aren’t anti-process or anti-collaboration. They’re pro-effectiveness. They want to build great software and have positive working relationships with their colleagues. They’re frustrated because they can see that the current approach isn’t achieving those goals, but they’re powerless to change it.

The path forward invites courage from both developers and management.

Until then, we’ll continue to have conversations in coffee shop corners about how things could be better, whilst the next sprint planning meeting looms on the calendar.

The real tragedy isn’t that Agile doesn’t work—it’s that we’ve created organisational structures that prevent us from discovering what would work better. And that’s the most un-agile thing of all.


What alternatives have you wanted to try but couldn’t? How has your organisation balanced consistency in the way the work works with team autonomy? The conversation continues in the comments and, hopefully, in conference room discussions everywhere.

Further Reading

Abrahamsson, P., Salo, O., Ronkainen, J., & Warsta, J. (2002). Agile software development methods: Review and analysis (Technical Report No. 478). VTT Publications.

Anderson, D. J. (2010). Kanban: Successful evolutionary change for your technology business. Blue Hole Press.

Boehm, B., & Turner, R. (2003). Balancing agility and discipline: A guide for the perplexed. Addison-Wesley Professional.

British Computer Society. (2023, November 7). The uncomfortable truth about Agile. https://www.bcs.org/articles-opinion-and-research/the-uncomfortable-truth-about-agile/

Cockburn, A. (2006). Agile software development: The cooperative game (2nd ed.). Addison-Wesley Professional.

Dalmijn, M. (2020, May 6). Basecamp’s Shape Up: How different is it really from Scrum? Serious Scrum. https://medium.com/serious-scrum/basecamps-shape-up-how-different-is-it-really-from-scrum-c0298f124333

DeMarco, T., & Lister, T. (2013). Peopleware: Productive projects and teams (3rd ed.). Addison-Wesley Professional.

Hunt, A. (2015). The failure of Agile. Toolshed (blog). https://blog.toolshed.com/2015/05/the-failure-of-agile.html

Noble, J., & Biddle, R. (2018). Back to the future: Origins and directions of the “Agile Manifesto”—views of the originators. Journal of Software Engineering Research and Development, 6(1), Article 15. https://doi.org/10.1186/s40411-018-0059-z

Poppendieck, M., & Poppendieck, T. (2003). Lean software development: An Agile toolkit. Addison-Wesley Professional.

Serrador, P., & Pinto, J. K. (2015). Does Agile work? A quantitative analysis of Agile project success. International Journal of Project Management, 33(5), 1040-1051.

Singer, R. (2019). Shape Up: Stop running in circles and ship work that matters. Basecamp. https://basecamp.com/shapeup

Thomas, D. (2019). Agile is dead (long live Agility). In Proceedings of the 2019 ACM SIGPLAN Symposium on Scala (pp. 1-1).

The Agile Manifesto: Rearranging Deck Chairs While Five Dragons Burn Everything Down

Why the ‘Sound’ Principles Miss the Dragons That Actually Kill Software Projects


The Agile Manifesto isn’t wrong, per se—it’s addressing the wrong problems entirely. And that makes it tragically inadequate.

For over two decades, ‘progressive’ software teams have been meticulously implementing sprints, standups, and retrospectives whilst the real dragons have been systematically destroying their organisations from within. The manifesto’s principles aren’t incorrect; they’re just rearranging deck chairs on the Titanic whilst it sinks around them.

The four values and twelve principles address surface symptoms of dysfunction whilst completely ignoring the deep systemic diseases that kill software projects. It’s treating a patient’s cough whilst missing the lung cancer—technically sound advice that’s spectacularly missing the point.

The Real Dragons: What Actually Destroys Software Teams

Whilst we’ve been optimising sprint ceremonies and customer feedback loops, five ancient dragons have been spectacularly burning down software development and tech business effectiveness:

Dragon 1: Human Motivation Death Spiral
Dragon 2: Dysfunctional Relationships That Poison Everything
Dragon 3: Shared Delusions and Toxic Assumptions
Dragon 4: The Management Conundrum—Questioning the Entire Edifice
Dragon 5: Opinioneering—The Ethics of Belief Violated

These aren’t process problems or communication hiccups. They’re existential threats that turn the most well-intentioned agile practices into elaborate theatre whilst real work grinds to a halt. And the manifesto? It tiptoes around these dragons like they don’t exist.

Dragon 1: The Motivation Apocalypse

‘Individuals and interactions over processes and tools’ sounds inspiring until you realise that your individuals are fundamentally unmotivated to do good work. The manifesto assumes that people care—but what happens when they don’t?

The real productivity killer isn’t bad processes; it’s developers who have mentally checked out because:

  • They’re working on problems they find meaningless
  • Their contributions are invisible or undervalued
  • They have no autonomy over how they solve problems
  • The work provides no sense of mastery or purpose
  • They’re trapped in roles that don’t match their strengths

You can have the most collaborative, customer-focused, change-responsive team in the world, but if your developers are quietly doing the minimum to avoid getting fired, your velocity will crater regardless of your methodology.

The manifesto talks about valuing individuals but offers zero framework for understanding what actually motivates people to do their best work. It’s having a sports philosophy that emphasises teamwork whilst ignoring whether the players actually want to win the game. How do you optimise ‘individuals and interactions’ when your people have checked out?

Dragon 2: Relationship Toxicity That Spreads Like Cancer

‘Customer collaboration over contract negotiation’ assumes that collaboration is even possible—but what happens when your team relationships are fundamentally dysfunctional?

The real collaboration killers that the manifesto ignores entirely:

  • Trust deficits: When team members assume bad faith in every interaction
  • Ego warfare: When technical discussions become personal attacks on competence
  • Passive aggression: When surface civility masks deep resentment and sabotage
  • Fear: When people are afraid to admit mistakes or ask questions
  • Status games: When helping others succeed feels like personal failure

You can hold all the retrospectives you want, but if your team dynamics are toxic, every agile practice becomes a new battlefield. Sprint planning turns into blame assignment. Code reviews become character assassination. Customer feedback becomes ammunition for internal warfare.

The manifesto’s collaboration principles are useless when the fundamental relationships are broken. It’s having marriage counselling techniques for couples who actively hate each other—technically correct advice that misses the deeper poison. How do you collaborate when trust has been destroyed? What good are retrospectives when people are actively sabotaging each other?

Dragon 3: Shared Delusions That Doom Everything

‘Working software over comprehensive documentation’ sounds pragmatic until you realise your team is operating under completely different assumptions about what ‘working’ means, what the software does, and how success is measured. But what happens when your team shares fundamental delusions about reality?

The productivity apocalypse happens when teams share fundamental delusions:

  • Reality distortion: Believing their product is simpler/better/faster than it actually is
  • Capability myths: Assuming they can deliver impossible timelines with current resources
  • Quality blindness: Thinking ‘works on my machine’ equals production-ready
  • User fiction: Building for imaginary users with imaginary needs
  • Technical debt denial: Pretending that cutting corners won’t compound into disaster

These aren’t communication problems that better customer collaboration can solve—they’re shared cognitive failures that make all collaboration worse. When your entire team believes something that’s factually wrong, more interaction just spreads the delusion faster.

The manifesto assumes that teams accurately assess their situation and respond appropriately. But when their shared mental models are fundamentally broken? All the adaptive planning in the world won’t help if you’re adapting based on fiction.

Dragon 4: The Management Conundrum—Why the Entire Edifice Is Suspect

‘Responding to change over following a plan’ sounds flexible, but let’s ask the deeper question: Why do we have management at all?

The manifesto takes management as a given and tries to optimise around it. But what if the entire concept of management—people whose job is to direct other people’s work without doing the work themselves—is a fundamental problem?

Consider what management actually does in most software organisations:

  • Creates artificial hierarchies that slow down decision-making
  • Adds communication layers that distort information as it flows up and down
  • Optimises for command and control rather than effectiveness
  • Makes decisions based on PowerPoint and opinion rather than evidence
  • Treats humans like interchangeable resources to be allocated and reallocated

The devastating realisation is that management in software development is pure overhead that actively impedes the work. Managers who:

  • Haven’t written code in years (or ever) making technical decisions
  • Set timelines based on business commitments rather than reality
  • Reorganise teams mid-project because a consultant recommended ‘matrix management’ or some such
  • Measure productivity by story points rather than needs attended to (or met)
  • Translate clear customer needs into incomprehensible requirements documents

What value does this actually add? Why do we have people who don’t understand the work making decisions about the work? What if every management layer is just expensive interference?

The right number of managers for software teams is zero. The entire edifice of management—the org charts, the performance reviews, the resource allocation meetings—is elaborate theatre that gets in the way of people solving problems.

Productive software teams operate more like research labs or craftsman guilds: self-organising groups of experts who coordinate directly with each other and with the people who use their work. No sprint masters, no product owners, no engineering managers—just competent people working together to solve problems.

The manifesto’s principles assume management exists and try to make it less harmful. But they never question whether it has any value at all.

Dragon 5: Opinioneering—The Ethics of Belief Violated

Here’s the dragon that the manifesto not only ignores but actually enables: the epidemic of strong opinions held without sufficient evidence.

William Kingdon Clifford wrote in 1877 that ‘it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence’ (Clifford, 1877).

In software development, we’ve created an entire culture that violates this ethical principle daily through systematic opinioneering:

Technical Opinioneering: Teams adopting microservices because they’re trendy, not because they solve actual problems. Choosing React over Vue because it ‘feels’ better. Implementing event sourcing because it sounds sophisticated. Strong architectural opinions based on blog posts rather than deep experience with the trade-offs.

Process Opinioneering: Cargo cult agile practices copied from other companies without understanding why they worked there. Daily standups that serve no purpose except ‘that’s what agile teams do.’ Retrospectives that generate the same insights every sprint because the team has strong opinions about process improvement but no evidence about what actually works.

Business Opinioneering: Product decisions based on what the CEO likes rather than what users require. Feature priorities set by whoever argues most passionately rather than data about user behaviour. Strategic technology choices based on industry buzz rather than careful analysis of alternatives.

Cultural Opinioneering: Beliefs about remote work, hiring practices, team structure, and development methodologies based on what sounds right rather than careful observation of results.

The manifesto makes this worse by promoting ‘individuals and interactions over processes and tools’ without any framework for distinguishing between evidence-based insights and opinion-based groupthink. It encourages teams to trust their collective judgement without asking whether that judgement is grounded in sufficient evidence. But what happens when the collective judgement is confidently wrong? How do you distinguish expertise from persuasive ignorance?

When opinioneering dominates, you get teams that are very confident about practices that don’t work, technologies that aren’t suitable, and processes that waste enormous amounts of time. Everyone feels like they’re making thoughtful decisions, but they’re sharing unfounded beliefs dressed up as expertise.

The Deeper Problem: Dysfunctional Shared Assumptions and Beliefs

The five dragons aren’t just symptoms—they’re manifestations of something deeper. Software development organisations operate under shared assumptions and beliefs that make effectiveness impossible, and the Agile Manifesto doesn’t even acknowledge this fundamental layer exists.

My work in Quintessence provides the missing framework for understanding why agile practices fail so consistently. The core insight is that organisational effectiveness is fundamentally a function of collective mindset:

Organisational effectiveness = f(collective mindset)

I demonstrate that every organisation operates within a “memeplex”—a set of interlocking assumptions and beliefs about work, people, and how organisations function. These beliefs reinforce each other so strongly that changing one belief causes the others to tighten their grip to preserve the whole memeplex.

This explains why agile transformations consistently fail. Teams implement new ceremonies whilst maintaining the underlying assumptions that created their problems in the first place. They adopt standups and retrospectives whilst still believing people are motivated, relationships are authentic, management adds value, and software is always the solution.

Consider the dysfunctional assumptions that pervade conventional software development:

About People: Most organisations and their management operate under “Theory X” assumptions—people are naturally lazy, require external motivation, need oversight to be productive, and will shirk responsibility without means to enforce accountability. These beliefs create the very motivation problems they claim to address.

About Relationships: Conventional thinking treats relationships as transactional. Competition drives performance. Hierarchy creates order. Control prevents chaos. Personal connections are “unprofessional.” These assumptions poison the collaboration that agile practices supposedly enable.

About Work: Software is the solution to every problem. Activity indicates value. Utilisation (of eg workers) drives productivity. Efficiency trumps effectiveness. Busyness proves contribution. These beliefs create the delusions that make teams confidently ineffective.

About Management: Complex work requires coordination. Coordination requires hierarchy. Hierarchy requires managers. Managers add value through oversight and direction. These assumptions create the parasitic layers that impede the very work they claim to optimise.

About Knowledge: Strong opinions indicate expertise. Confidence signals competence. Popular practices are best practices. Best practices are desirable. Industry trends predict future success. These beliefs create the opinioneering that replaces evidence with folklore.

Quintessence (Marshall, 2021) shows how “quintessential organisations” operate under completely different assumptions:

  • People find joy in meaningful work and naturally collaborate when conditions support it
  • Relationships based on mutual care and shared purpose are the foundation of effectiveness
  • Work is play when aligned with purpose and human flourishing
  • Management is unnecessary parasitism—people doing the work make the decisions about the work
  • Beliefs must be proportioned to evidence and grounded in serving real human needs

The Agile Manifesto can’t solve problems created by fundamental belief systems because it doesn’t even acknowledge these belief systems exist. It treats symptoms whilst leaving the disease untouched. Teams optimise ceremonies whilst operating under assumptions that guarantee continued dysfunction.

This is why the Quintessence approach differs so radically from ‘Agile’ approaches. Instead of implementing new practices, quintessential organisations examine their collective assumptions and beliefs. Instead of optimising processes, they transform their collective mindset. Instead of rearranging deck chairs, they address the fundamental reasons the ship is sinking.

The Manifesto’s Tragic Blindness

Here’s what makes the Agile Manifesto so inadequate: it assumes the Five Dragons don’t exist. It offers principles for teams that are motivated, functional, reality-based, self-managing, and evidence-driven—but most software teams are none of these things.

The manifesto treats symptoms whilst ignoring diseases:

  • It optimises collaboration without addressing what makes collaboration impossible
  • It values individuals without confronting what demotivates them
  • It promotes adaptation without recognising what prevents teams from seeing their shared assumptions and beliefs clearly
  • It assumes management adds value rather than questioning whether management has any value at all
  • It encourages collective decision-making without any framework for leveraging evidence-based beliefs

This isn’t a failure of execution—it’s a failure of diagnosis. The manifesto identified the wrong problems and thus prescribed the wrong solutions.

Tom Gilb’s Devastating Assessment: The Manifesto Is Fundamentally Fuzzy

Software engineering pioneer Tom Gilb delivers the most damning critique of the Agile Manifesto: its principles are ‘so fuzzy that I am sure no two people, and no two manifesto signers, understand any one of them identically’ (Gilb, 2005).

This fuzziness isn’t accidental—it’s structural. The manifesto was created by ‘far too many “coders at heart” who negotiated the Manifesto’ without ‘understanding of the notion of delivering measurable and useful stakeholder value’ (Gilb, 2005).

The result is a manifesto that sounds profound but provides no actionable guidance for success in product development.

Gilb’s critique exposes the manifesto’s fundamental flaw: it optimises for developer comfort rather than stakeholder value. The principles read like a programmer’s wish list—less documentation, more flexibility, fewer constraints—rather than a framework for delivering measurable results to people who actually need the software.

This explains why teams can religiously follow agile practices whilst consistently failing to deliver against folks’ needs. The manifesto’s principles are so vague that any team can claim to be following them whilst doing whatever they want. ‘Working software over comprehensive documentation’ means anything you want it to mean. ‘Responding to change over following a plan’ provides zero guidance on how to respond or what changes matter. (Cf. Quantification)

How do you measure success when the principles themselves are unmeasurable? What happens when everyone can be ‘agile’ whilst accomplishing nothing? How do you argue against a methodology that can’t be proven wrong?

The manifesto’s fuzziness enables the very dragons it claims to solve. Opinioneering thrives when principles are too vague to be proven wrong. Management parasitism flourishes when success metrics are unquantified. Shared delusions multiply when ‘working software’ has no operational definition.

Gilb’s assessment reveals why the manifesto has persisted despite its irrelevance: it’s comfortable nonsense that threatens no one and demands nothing specific. Teams can feel enlightened whilst accomplishing nothing meaningful for stakeholders.

Stakeholder Value vs. All the Needs of All the Folks That Matter™

Gilb’s critique centres on ‘delivering measurable and useful stakeholder value’—but this phrase itself illuminates a deeper problem with how we think about software development success. ‘Stakeholder value’ sounds corporate and abstract, like something you’d find in a business school textbook or an MBA course (MBA – maybe best avoided – Mintzberg).

What we’re really talking about is simpler, less corporate and more human: serving all the needs of all the Folks That Matter™.

The Folks That Matter aren’t abstract ‘stakeholders’—they’re real people trying to get real things done:

  • The nurse trying to access patient records during a medical emergency
  • The small business owner trying to process payroll before Friday
  • The student trying to submit an assignment before the deadline
  • The elderly person trying to video call their grandchildren
  • The developer trying to understand why the build is broken again

When software fails these people, it doesn’t matter how perfectly agile your process was. When the nurse can’t access records, your retrospectives are irrelevant. When the payroll system crashes, your customer collaboration techniques are meaningless. When the build-and-smoke-test cycle takes 30+ minutes, your adaptive planning is useless.

The Agile Manifesto’s developer-centric worldview treats these people as distant abstractions—‘users’ and ‘customers’ and ‘stakeholders’. But they’re not abstractions. They’re the Folks That Matter™, and their needs are the only reason software development exists.

The manifesto’s principles consistently prioritise developer preferences over the requirements of the Folks That Matter™. ‘Working software over comprehensive documentation’ sounds reasonable until the Folks That Matter™ require understanding of how to use the software. ‘Individuals and interactions over processes and tools’ sounds collaborative until the Folks That Matter™ require consistent, reliable results from those interactions.

This isn’t about being anti-developer—it’s about recognising that serving the Folks That Matter™ is the entire point. The manifesto has it backwards: instead of asking ‘How do we make development more comfortable for developers?’ we might ask ‘How do we reliably serve all the requirements of all the Folks That Matter™?’ That question changes everything. It makes motivation obvious—you’re solving real problems for real people. It makes relationship health essential—toxic teams can’t serve others effectively. It makes reality contact mandatory—delusions about quality hurt real people. It makes evidence-based decisions critical—opinions don’t serve the Folks That Matter™; results do.

Most importantly, it makes management’s value proposition clear: Do you help us serve the Folks That Matter™ better, or do you get in the way? If the answer is ‘get in the way’, then management is obviously a dysfunction.

What Actually Addresses the Dragons

If we want to improve software development effectiveness, we address the real dragons:

Address Motivation: Create work that people actually care about. Give developers autonomy, mastery, and purpose. Match people to problems they find meaningful. Make contributions visible and valued.

Heal Toxic Relationships: Build psychological safety where people can be vulnerable about mistakes. Address ego and status games directly. Create systems where helping others succeed feels like personal victory.

Resolve Shared Delusions: Implement feedback loops that invite contact with reality. Measure what actually matters. Create cultures where surfacing uncomfortable truths is rewarded rather than punished.

Transform Management Entirely: Experiment with self-organising teams. Distribute decision-making authority to where expertise actually lives. Eliminate layers between problems and problem-solvers. Measure needs met, not management theatre.

Counter Evidence-Free Beliefs: Institute a culture where strong opinions require strong evidence. Enable and encourage teams to articulate the assumptions behind their practices. Reward changing your mind based on new data. Excise confident ignorance.

These aren’t process improvements or methodology tweaks—they’re organisational transformation efforts that require fundamentally different approaches than the manifesto suggests.

Beyond Agile: Addressing the Real Problems

The future of software development effectiveness isn’t in better sprint planning or more customer feedback. It’s in organisational structures that:

  • Align individual motivation with real needs
  • Create relationships based on trust
  • Enable contact with reality at every level
  • Eliminate management as a dysfunction
  • Ground all beliefs in sufficient evidence

These are the 10x improvements hiding in plain sight—not in our next retrospective, but in our next conversation about why people don’t care about their work. Not in our customer collaboration techniques, but in questioning whether we have managers at all. Not in our planning processes, but in demanding evidence for every strong opinion.

Conclusion: The Problems We Were Addressing All Along

The Agile Manifesto succeeded in solving the surface developer bugbears of 2001: heavyweight processes and excessive documentation. But it completely missed the deeper organisational and human issues that determine whether software development succeeds or fails.

The manifesto’s principles aren’t wrong—they’re just irrelevant to the real challenges. Whilst we’ve been perfecting our agile practices, the dragons of motivation, relationships, shared delusions, management being dysfunctional, and opinioneering have been systematically destroying software development from within.

Is it time to stop optimising team ceremonies and start addressing the real problems? That means creating organisations where people are motivated to do great work, relationships enable rather than sabotage collaboration, shared assumptions are grounded in reality, traditional management no longer exists, and beliefs are proportioned to evidence.

But ask yourself: Does your organisation address any of these fundamental issues? Are you optimising ceremonies whilst your dragons run wild? What would happen if you stopped rearranging deck chairs and started questioning why people don’t care about their work?

Because no amount of process optimisation will save a team where people don’t care, can’t trust each other, believe comfortable lies, are managed by people who add negative value, and make decisions based on opinions rather than evidence.

The dragons are real, and they’re winning. Are we finally ready to address them?

Further Reading

Beck, K., Beedle, M., van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., … & Thomas, D. (2001). Manifesto for Agile Software Development. Retrieved from https://agilemanifesto.org/

Clifford, W. K. (1877). The ethics of belief. Contemporary Review, 29, 289-309.

Gilb, T. (2005). Competitive Engineering: A Handbook for Systems Engineering, Requirements Engineering, and Software Engineering Using Planguage. Butterworth-Heinemann.

Gilb, T. (2017). How well does the Agile Manifesto align with principles that lead to success in product development? Retrieved from https://www.gilb.com/blog/how-well-does-the-agile-manifesto-align-with-principles-that-lead-to-success-in-product-development

Marshall, R.W. (2021). Quintessence: An Acme for Software Development Organisations. Falling Blossoms (LeanPub). Available at: https://leanpub.com/quintessence/ [Accessed 15 Jun 2022].

Praising the CRC Card

For the developers who never got to hold one

If you started your career after 2010, you probably never encountered a CRC card. If you’re a seasoned developer who came up through Rails tutorials, React bootcamps, or cloud-native microservices, you likely went straight from user stories to code without stopping at index cards. This isn’t your fault. By the time you arrived, the industry had already moved on.

But something was lost in that transition, and it might be valuable for you to experience it.

What You Missed

A CRC card is exactly what it sounds like: a Class-Responsibility-Collaborator design written on a physical index card. One class per card. The class name at the top, its responsibilities listed on the left, and the other classes it works with noted on the right. Simple. Physical. Throwaway.

The technique was developed by Ward Cunningham and Kent Beck in the late 1980s, originally emerging from Cunningham’s work with HyperCard documentation systems. They introduced CRC cards as a teaching tool, but the approach was embraced by practitioners following ideas like Peter Coad’s object-oriented analysis, design, and programming (OOA/D/P) framework. Coad (with Ed Yourdon) wrote about a unified approach to building software that matched how humans naturally think about problems. CRC cards are a tool for translating business domain concepts directly into software design, without getting lost in technical abstractions.

The magic wasn’t in the format—it was in what the format forced you to do.

The Experience

Picture this: You and your teammates sitting around a conference table covered in index cards. Someone suggests a new class. They grab a blank card and write ‘ShoppingCart’ at the top. ‘What should it do?’ someone asks. ‘Add items, remove items, calculate totals, apply promotions,’ comes the reply. Those go in the responsibilities column. ‘What does it need to work with?’ Another pause. ‘It needs Product objects to know what’s being added, a Customer for personalised pricing, maybe a Promotion when discounts apply.’ Those become collaborators.

But here’s where it gets interesting. The card is small. Really small. If you’re writing tiny text to fit more responsibilities, someone notices. If you have fifteen collaborators, the card looks messy. The physical constraint was a design constraint. It whispered: ‘Keep it simple.’
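If it helps to picture the format, here is a minimal sketch of a CRC card as code, with the card’s physical limits modelled as hard caps. The class, the method names, and the limits are all illustrative (there is no standard CRC library); the point is only that the constraint pushes back:

```python
from dataclasses import dataclass, field

# Roughly what fits on one index card -- hypothetical limits.
MAX_RESPONSIBILITIES = 6
MAX_COLLABORATORS = 6

@dataclass
class CRCCard:
    name: str
    responsibilities: list[str] = field(default_factory=list)
    collaborators: list[str] = field(default_factory=list)

    def add_responsibility(self, text: str) -> None:
        # The card is full: the design, not the card, needs to change.
        if len(self.responsibilities) >= MAX_RESPONSIBILITIES:
            raise ValueError(f"{self.name} is doing too much -- split the class")
        self.responsibilities.append(text)

    def add_collaborator(self, other: str) -> None:
        if len(self.collaborators) >= MAX_COLLABORATORS:
            raise ValueError(f"{self.name} knows too many classes -- rethink the design")
        self.collaborators.append(other)

# The ShoppingCart card from the session described above.
cart = CRCCard("ShoppingCart")
for r in ["Add items", "Remove items", "Calculate totals", "Apply promotions"]:
    cart.add_responsibility(r)
for c in ["Product", "Customer", "Promotion"]:
    cart.add_collaborator(c)
```

A digital tool would happily accept a seventh, eighth, or fiftieth responsibility; the card refuses.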

Aside: In Javelin, we also advise keeping all methods to no more than ‘Five Lines of Code’, and Statements of Purpose to 25 words or fewer.

The Seduction

Somewhere in the 2000s, we got seduced. UML tools (yak) promised prettier diagrams. Digital whiteboards now offer infinite canvas space. Collaborative software lets us design asynchronously across time zones. We can version control our designs! Track changes! Generate code from diagrams!

We told ourselves this was progress. We retrofitted justifications: ‘Modern systems are too complex for index cards.’ ‘Remote teams need digital tools.’ ‘Physical methods don’t scale.’

But these were lame excuses, not good reasons.

The truth is simpler and more embarrassing: we abandoned CRC cards because they felt primitive. Index cards seemed amateur next to sophisticated UML tools and enterprise architecture platforms. We confused the sophistication of our tools with the sophistication of our thinking.

What We Actually Lost

The constraint was the feature. An index card can’t hold a God class. It can’t accommodate a class with dozens of responsibilities or collaborators. But more importantly, it forced you to think in domain terms, not implementation terms. When you’re limited to an index card, you can’t hide behind technical abstractions like ‘DataProcessor’ or ‘ValidationManager.’ You have to name things that represent actual concepts in the problem domain – things a business person would recognise. The physical limitation forced good design decisions and domain-focused thinking before you had time to rationalise technical complexity.

Throwaway thinking was powerful. When your design lived on index cards, you could literally throw it away and start over. No one was attached to the beautiful diagram they’d spent hours or days perfecting. The design was disposable, which made experimentation safe.

Tactile collaboration was different. There’s something unique about physically moving cards around a table, stacking them, pointing at them, sliding one toward a teammate. Digital tools simulate this poorly. Clicking and dragging isn’t the same as picking up a card and handing it to someone.

Forced focus was valuable. You couldn’t switch to Slack during a CRC card session. You couldn’t zoom in on implementation details. The cards kept you at the right level of abstraction—not so high that you were hand-waving, not so low that you were bikeshedding variable names.

The Ratchet Effect

Here’s what makes this particularly tragic: once the industry moved to digital tools, it became genuinely harder to go back. Try suggesting index cards in a design meeting today. You’ll get polite smiles and concerned looks. Not because the method doesn’t work, but because the ecosystem has moved backwards. The new developers have never seen it done. The tooling assumes digital. The ‘best practices’ articles all recommend software solutions.

We created a ratchet effect where good practices became impossible to maintain not because they were inadequate, but because they felt outdated.

For Those Who Never Got the Chance

If you’re reading this as a developer who never used CRC cards, I want you to know: you were cheated, but not by your own choices. You came into an industry that had already forgotten one of its own most useful practices. You learned the tools that were available when you arrived.

But you also inherited the complexity that came from abandoning constraints. You’ve probably spent hours in architecture meetings where the design sprawled across infinite digital canvases, where classes accumulated responsibilities because the tools could accommodate any amount of complexity, where the ease of adding ‘just one more connection’ led to systems that no one fully understood.

You’ve felt the pain of what we lost when we abandoned the constraint.

A Small Experiment

Next time you’re designing something new, try this: grab some actual index cards. Write one class per card. See how it feels when the physical constraint pushes back against your design. Notice what happens when throwing away a card costs nothing but keeping a complex design visible costs table space.

You might discover something we lost when we got sophisticated.

Do it because CRC cards were actually superior to modern digital tools for early design thinking. We didn’t outgrow them – we abandoned something better for something shinier.

Sometimes the simpler tool was better precisely because it was simpler.

The industry moves fast, and not everything we leave behind should have been abandoned. Some tools die not because they’re inadequate, but because they’re unfashionable. The CRC card was a casualty of progress that wasn’t progressive.

Further Reading

Beck, K., & Cunningham, W. (1989). A laboratory for teaching object-oriented thinking. SIGPLAN Notices, 24(10), 1-6.

Coad, P., & Yourdon, E. (1990). Object-oriented analysis. Yourdon Press.

Coad, P., & Yourdon, E. (1991). Object-oriented design. Yourdon Press.

Coad, P., North, D., & Mayfield, M. (1995). Object-oriented programming. Prentice Hall.

Coad, P., North, D., & Mayfield, M. (1996). Object models: Strategies, patterns, and applications (2nd ed.). Prentice Hall.

Wirfs-Brock, R., & McKean, A. (2003). Object design: Roles, responsibilities, and collaborations. Addison-Wesley.

Your Software Requirements Are Worthless

Every day, software teams burn millions of pounds building the wrong thing because they mistake fuzzy feelings and opinioneering for engineering specifications

Software teams continue writing requirements like ‘user-friendly’, ‘scalable’, and ‘high-performance’ as if these phrases mean anything concrete.

They don’t.

What they represent is ignorance (of quantification) disguised as intellectual laziness disguised as collaboration. When a product manager says an interface should be ‘intuitive’ and a developer nods in agreement, no communication has actually occurred. Both parties have simply agreed to postpone the hard work of thinking and talking until later—usually until users complain or products break.

The solution isn’t better communication workshops or more stakeholder alignment meetings. It’s operational definitions—the rigorous practice of quantifying every requirement so precisely that a computer could verify compliance.

What Are Operational Definitions?

An operational definition specifies exactly how to measure, observe, or identify something in terms that are meaningful to the Folks That Matter™. Instead of abstract concepts or assumptions, operational definitions state the precise criteria, procedures, or observable behaviours that determine whether something meets a standard—and why that standard creates value for those Folks That Matter™.

The term originates from scientific research, where researchers must ensure their experiments are replicable. Instead of saying a drug ‘improves patient outcomes’, researchers operationally define improvement as ‘a 15% reduction in Hamilton Depression Rating Scale scores measured by trained clinicians using the 17-item version at 6-week intervals, compared to baseline scores taken within 72 hours of treatment initiation, with measurements conducted between 9-11 AM in controlled clinical environments at 21°C ±2°C, amongst patients aged 18-65 with major depressive disorder diagnosed per DSM-5 criteria, excluding those with concurrent substance abuse or psychotic features’.

This example only scratches the surface—a complete operational definition would specify dozens more variables including exact clinician training protocols, inter-rater reliability requirements, patient positioning, statistical procedures, and missing data handling. This precision is what makes scientific breakthroughs reproducible and medical treatments safe.

The Software Development Challenge

Software teams constantly wrestle with ambiguous terms that everyone assumes they understand:

  • ‘This feature should be fast’
  • ‘The user interface needs to be intuitive’
  • ‘We need better code quality’
  • ‘This bug is critical’

These statements appear clear in conversation, but they’re loaded with subjective interpretations. What’s ‘fast’ to a backend engineer may be unacceptably slow to a mobile developer. ‘Intuitive’ means different things to designers, product managers, and end users.

Worse: these fuzzy requirements hide the real question—what specifically do the Folks That Matter™ actually need?

How Operational Definitions Transform Software Teams

1. Connect Features to the Needs of the Folks That Matter™

Consider replacing ‘the API should be fast’ with an operational definition: ‘API responses return within 200ms for 95% of requests under normal load conditions, as measured by our monitoring system, enabling customer support agents to resolve inquiries 40% faster and increasing customer satisfaction scores by 15 points as measured on <date>.’

This eliminates guesswork, creates shared understanding across disciplines, and directly links technical decisions to the needs of the Folks That Matter™.
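As a sketch of how checkable such a definition becomes, the percentile test can be written in a few lines. The function name and the sample latencies below are hypothetical; in practice the numbers would come from your monitoring system:

```python
def meets_latency_target(latencies_ms, threshold_ms=200, percentile=0.95):
    """True if at least `percentile` of requests finished within threshold_ms."""
    within = sum(1 for t in latencies_ms if t <= threshold_ms)
    return within / len(latencies_ms) >= percentile

# Illustrative sample: 19 of these 20 measurements are at or under 200ms,
# i.e. exactly 95%, so the operational definition is met.
sample = [120, 90, 180, 210, 150, 95, 130, 170, 160, 140,
          110, 100, 190, 185, 175, 165, 155, 145, 135, 125]
result = meets_latency_target(sample)
```

Once the requirement is written this way, ‘is the API fast enough?’ stops being a debate and becomes a query.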

2. Turn Subjective Debates Into Objective Decisions

Operational definitions end pointless arguments about code quality. Stop debating whether code is ‘maintainable’. Define maintainability operationally:

  • Code coverage above 80% to reduce debugging time by 50%
  • Cyclomatic complexity below 10 per function to enable new team members to contribute within 2 weeks
  • No functions exceeding 50 lines to support 90% of feature requests completed within single sprint
  • All public APIs documented with examples to achieve zero external developer support tickets for basic integration

Each criterion ties directly to measurable benefits for the Folks That Matter™.
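A few of these criteria can be turned into an automated gate. The threshold table and metric names below are illustrative; real values would come from coverage reports and static-analysis tooling:

```python
# Hypothetical maintainability gate based on the criteria listed above.
# Each entry is (kind, limit): "min" means the metric must be at least
# the limit, "max" means it must not exceed it.
THRESHOLDS = {
    "coverage_pct": ("min", 80),           # code coverage above 80%
    "max_complexity": ("max", 10),         # cyclomatic complexity per function
    "longest_function_lines": ("max", 50), # no function exceeding 50 lines
}

def check_maintainability(metrics: dict) -> list[str]:
    """Return the list of criteria that failed (empty list means pass)."""
    failures = []
    for key, (kind, limit) in THRESHOLDS.items():
        value = metrics[key]
        if kind == "min" and value < limit:
            failures.append(f"{key}={value} below minimum {limit}")
        if kind == "max" and value > limit:
            failures.append(f"{key}={value} above maximum {limit}")
    return failures

# Illustrative inputs, as a real CI job might collect them.
report = check_maintainability(
    {"coverage_pct": 85, "max_complexity": 12, "longest_function_lines": 40}
)
```

Wired into CI, a non-empty report blocks the merge; ‘maintainable’ is no longer anyone’s opinion.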

3. Accelerate Decision Making

With operationally defined acceptance criteria, teams spend less time in meetings clarifying requirements and more time attending to folks’ needs. Developers know exactly what ‘done’ looks like, and the Folks That Matter™ verify completion through measurable outcomes.

4. Bridge Cross-Functional Disciplines

Different roles think in different terms. Operational definitions create a common vocabulary focused on the needs of the Folks That Matter™:

  • Product: Transform ‘User-friendly’ into ‘Users complete the checkout flow within 3 steps, with less than 2% abandonment at each step, increasing conversion rates by 12% and generating £2M additional annual revenue’
  • Design: Transform ‘Accessible’ into ‘Meets WCAG 2.1 AA standards as verified by automated testing and manual review, enabling compliance with federal accessibility requirements and expanding addressable market by 15%’
  • Engineering: Transform ‘Scalable’ into ‘Handles 10x current load with response times under 500ms, supporting planned user growth without additional infrastructure investment for 18 months’

5. Evolutionary Improvement

Operational definitions evolve as the needs of the Folks That Matter™ become clearer. Start with basic measurements, then refine scales of measure as you learn what truly drives value. A ‘fast’ system might initially mean ‘under 1 second response time’ but evolve into sophisticated performance profiles that optimise for different user contexts and business scenarios.

Real-World Implementation: Javelin’s QQO Framework

Some teams have already embraced this precision. Falling Blossoms’ Javelin process demonstrates operational definitions in practice through Quantified Quality Objectives (QQOs)—a systematic approach to transforming vague non-functional requirements into quasi or actual operational definitions.

Instead of accepting requirements like ‘the system should be reliable’ or ‘performance must be acceptable’, Javelin teams create detailed QQO matrices where every quality attribute gets operationally defined with:

  • Metric: Exact measurement method and scale
  • Current: Baseline performance (if known)
  • Best: Ideal target level
  • Worst: Minimum acceptable threshold
  • Planned: Realistic target for this release
  • Actual: Measured results for actively monitored QQOs
  • Milestone sequence: Numeric targets at specific dates/times throughout development

A Javelin team might operationally define ‘reliable’ as: ‘System availability measured monthly via automated uptime monitoring: 99.5% by March 1st (MVP launch), 99.7% by June 1st (full feature release), 99.9% by December 1st (enterprise rollout), with worst acceptable level never below 99.0% during any measurement period.’

This transforms the entire conversation. Instead of debating what ‘reliable enough’ means, teams focus on achievable targets, measurement infrastructure, and clear success criteria. QQO matrices grow organically as development progresses, following just-in-time elaboration of folks’ needs. Teams don’t over-specify requirements months in advance; they operationally define quality attributes exactly as needed for immediately upcoming development cycles.

This just-in-time approach prevents requirements from going stale whilst maintaining precision where it matters. A team might start with less than a dozen operationally defined QQOs for an MVP, then expand to hundreds as they approach production deployment and beyond—each new QQO addressing specific quality concerns as they become relevant to actual development work.

Toyota’s Product Development System (TPDS) demonstrates similar precision in manufacturing contexts through Set Based Concurrent Engineering (SBCE). Rather than committing to single design solutions early, Toyota teams define operational criteria for acceptable solutions—precise constraints for cost, performance, manufacturability, and quality. They then systematically eliminate design alternatives, at scheduled decision points, that fail to meet these quantified thresholds, converging on optimal solutions through measured criteria rather than subjective judgement.

Both Javelin’s QQOs and Toyota’s SBCE prove that operational definitions work at scale across industries—turning fuzzy requirements into systematic, measurable decision-making frameworks that deliver value to the Folks That Matter™.

Practical Examples in Software Development

User Story Acceptance Criteria

Before: ‘As a user, I want the search to be fast so I can find results quickly.’

After: ‘As a user, when I enter a search query, I should see results within 1 second for 95% of searches, with a loading indicator appearing within 100ms of pressing enter.’

Bug Priority Classification

Before: ‘This is a critical bug.’

After: ‘Priority 1 (Critical): Bug prevents core user workflow completion OR affects >50% of active users OR causes data loss OR creates security vulnerability.’
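Because the ‘After’ version is operational, it can be expressed as an executable predicate. The record fields below are hypothetical stand-ins for whatever your bug tracker actually exposes:

```python
from dataclasses import dataclass

@dataclass
class Bug:
    blocks_core_workflow: bool = False
    affected_user_fraction: float = 0.0   # fraction of active users affected
    causes_data_loss: bool = False
    security_vulnerability: bool = False

def is_critical(bug: Bug) -> bool:
    """Priority 1 if any of the operational criteria above hold."""
    return (bug.blocks_core_workflow
            or bug.affected_user_fraction > 0.50
            or bug.causes_data_loss
            or bug.security_vulnerability)
```

Two people triaging the same bug now reach the same priority, because the definition leaves nothing to interpret.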

Code Review Standards

Before: ‘Code should be clean and well-documented.’

After: Operationally defined code quality standards with measurable criteria:

Documentation Requirements:

  • 100% of public APIs include docstrings with purpose, parameters, return values, exceptions, and working usage examples
  • Complex business logic (cyclomatic complexity >5) requires inline comments explaining the ‘why’, not the ‘what’
  • All configuration parameters documented with valid ranges, default values, and business impact of changes
  • Value to the Folks That Matter™: Reduces onboarding time for new developers from 4 weeks to 1.5 weeks, cuts external API integration support tickets by 80%

Code Structure Metrics:

  • Functions limited to 25 lines maximum (excluding docstrings and whitespace)
  • Cyclomatic complexity below 8 per function as measured by static analysis tools
  • Maximum nesting depth of 3 levels in any code block
  • No duplicate code blocks exceeding 6 lines (DRY principle enforced via automated detection)
  • Value to the Folks That Matter™: Reduces bug fix time by 60%, enables 95% of feature requests completed within single sprint

Naming and Clarity:

  • Variable names must be pronounceable and searchable (no abbreviations except industry-standard: id, url, http)
  • Boolean variables/functions use positive phrasing (isValid not isNotInvalid)
  • Class/function names describe behaviour, not implementation (PaymentProcessor not StripeHandler)
  • Value to the Folks That Matter™: Reduces code review time by 40%, decreases bug report resolution from 3 days to 8 hours average

Security and Reliability:

  • Zero hardcoded secrets, credentials, or environment-specific values in source code
  • All user inputs validated with explicit type checking and range validation
  • Error handling covers all failure modes with logging at appropriate levels
  • All database queries use parameterised statements (zero string concatenation)
  • Value to the Folks That Matter™: Eliminates 90% of security vulnerabilities, reduces production incidents by 75%

Testing Integration:

  • Every new function includes unit tests with >90% branch coverage
  • Integration points include contract tests verifying interface expectations
  • Performance-critical paths include benchmark tests with acceptable thresholds defined
  • Value to the Folks That Matter™: Reduces regression bugs by 85%, enables confident daily deployments

Review Process Metrics:

  • Code reviews completed within 4 business hours of submission
  • Maximum 2 review cycles before merge (initial review + addressing feedback)
  • Review comments focus on maintainability, security, and business logic—not style preferences
  • Value to the Folks That Matter™: Maintains development velocity whilst ensuring quality, reduces feature delivery time by 25%

Performance Requirements

Before: ‘The dashboard should load quickly.’

After: ‘Dashboard displays initial data within 2 seconds on 3G connection, with progressive loading of additional widgets completing within 5 seconds total.’

The Competitive Advantage

Teams that master operational definitions gain significant competitive advantages:

  • Faster delivery cycles from reduced requirement clarification—deploy features 30-50% faster than competitors
  • Higher quality output through measurable standards—reduce post-release defects by 60-80%
  • Improved confidence from the Folks That Matter™ from predictable, verifiable results—increase project approval rates and budget allocations
  • Reduced technical debt through well-defined standards—cut maintenance costs whilst enabling rapid feature development
  • Better team morale from decreased frustration and conflict—retain top talent and attract better candidates

Most importantly: organisations that operationally define their quality criteria can systematically out-deliver competitors who rely on subjective judgement.

Start Today

Choose one ambiguous term your team uses frequently and spend 30 minutes defining it operationally. Ask yourselves:

  1. What value does this QQO deliver to the Folks That Matter™?
  2. What specific, observable criteria determine if this value is achieved?
  3. What scale of measure will we use—percentage, time, count, ratio?
  4. How will we measure this, and how often?
  5. What does ‘good enough’ look like vs. ‘exceptional’ for the Folks That Matter™?

Aim for precision that drives satisfaction of folks’ needs, not perfection. Even rough operational definitions linked to the needs of the Folks That Matter™ provide more clarity than polished ambiguity.

Implementation Strategy

Start Small and Build Consensus

Begin by operationally defining one or two concepts that cause the most confusion in your team. Start with:

  • Definition of ‘done’ for user stories linked to specific value for the Folks That Matter™
  • Bug severity levels tied to business impact measures
  • Performance benchmarks connected to user experience goals
  • Code standards that enable measurable delivery improvements

Define Scales of Measure

Write operational definitions that specify not just the criteria, but the scale of measure—the unit and method of measurement. Include:

  • Measurement method: How you will measure (automated monitoring, user testing, code analysis)
  • Scale definition: Units of measure (response time in milliseconds, satisfaction score 1-10, defect rate per thousand lines)
  • Measurement infrastructure: Tools, systems, and processes needed
  • Frequency: How often measurements occur and when they’re reviewed
  • Connection to the Folks That Matter™: What business need each measurement serves
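As a concrete sketch, such a definition can be captured as a small data structure that makes each of the elements above explicit and checkable. This is an illustrative example only—the class, the ‘responsive checkout’ quality, and the thresholds are all hypothetical, not part of any particular framework:

```python
# Illustrative sketch: capturing an operational definition as data so it
# can be assessed automatically. All names and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class OperationalDefinition:
    quality: str        # the ambiguous term being defined
    scale: str          # unit of measure
    meter: str          # how the measurement is taken
    good_enough: float  # minimum acceptable level
    exceptional: float  # stretch target
    frequency: str      # how often it is measured and reviewed

    def assess(self, measured: float) -> str:
        """Classify a measurement against the agreed levels."""
        if measured >= self.exceptional:
            return "exceptional"
        if measured >= self.good_enough:
            return "good enough"
        return "below target"

# Hypothetical example: 'responsive' defined operationally for a checkout page.
responsive = OperationalDefinition(
    quality="responsive checkout",
    scale="% of page loads completing within 2 seconds",
    meter="automated real-user monitoring, sampled continuously",
    good_enough=95.0,
    exceptional=99.0,
    frequency="reviewed weekly",
)

print(responsive.assess(96.3))  # -> good enough
```

The point is not the code itself but that every field a stakeholder might argue about—scale, meter, levels, frequency—is written down and testable rather than implied.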

Evolve Based on Learning

Operational definitions evolve as you learn what truly drives meeting the needs of the Folks That Matter™. Start with basic measurements, then refine scales as you discover which metrics actually predict success. Regular retrospectives can examine not just whether definitions were met, but whether they satisfied the intended needs of the Folks That Matter™.

Document and Automate

Store operational definitions in accessible locations—team wikis, README files, or project documentation. Automate verification through CI/CD pipelines, monitoring dashboards, and testing frameworks wherever possible. The goal is measurement infrastructure that runs automatically and surfaces insights relevant to the needs of the Folks That Matter™.
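A minimal automated check of one such definition might look like the following sketch. The percentile calculation, sample data, and 200 ms limit are hypothetical stand-ins for whatever scale of measure your team has actually agreed:

```python
# Illustrative sketch: turning an operational definition into an automated
# CI check. In practice the samples would come from your monitoring system;
# here we use invented stand-in data.

def p95_response_time_ms(samples: list[float]) -> float:
    """95th-percentile response time, in milliseconds."""
    ordered = sorted(samples)
    index = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[index]

def check_performance(samples: list[float], limit_ms: float = 200.0) -> bool:
    """Fail the pipeline when the agreed scale of measure is breached."""
    return p95_response_time_ms(samples) <= limit_ms

recent = [120.0, 140.0, 95.0, 180.0, 190.0, 130.0, 150.0, 110.0, 160.0, 125.0]
print("pass" if check_performance(recent) else "fail")  # -> pass
```

Wired into a CI/CD pipeline, a check like this makes the operational definition self-enforcing: the build surfaces a breach the moment the agreed level is missed, with no meeting required.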

Conclusion

Operational definitions represent a paradigm shift from ‘we all know what we mean’ to ‘we are crystal clear about what value we’re delivering to the Folks That Matter™’. In software development, where precision enables competitive advantage and the satisfaction of the needs of the Folks That Matter™ determines success, this shift separates organisations that struggle with scope creep and miscommunication from those that systematically out-deliver their competition.

Creating operational definitions pays dividends in reduced rework, faster delivery, happier teams, and measurable value for the Folks That Matter™. Most importantly, it transforms software development from a guessing game into a needs-meeting discipline—exactly what markets demand as digital transformation accelerates and user expectations rise.

Operational definitions aren’t just about better requirements. They’re about systematic competitive advantage through measurable satisfaction of the needs of the Folks That Matter™.

Take action: Pick one fuzzy requirement from your current sprint. Define it operationally in terms of specific needs of the Folks That Matter™. Watch how this precision changes every conversation your team has about priorities, trade-offs, and success.

Further Reading

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). American Psychiatric Publishing.

Beck, K. (2000). Extreme programming explained: Embrace change. Addison-Wesley.

Cockburn, A. (2004). Crystal clear: A human-powered methodology for small teams. Addison-Wesley.

DeMarco, T. (1982). Controlling software projects: Management, measurement, and estimation. Yourdon Press.

DeMarco, T., & Lister, T. (2013). Peopleware: Productive projects and teams (3rd ed.). Addison-Wesley.

Falling Blossoms. (2006). Our Javelin™ process (Version 2.0a). Falling Blossoms.

Gilb, T. (1988). Principles of software engineering management. Addison-Wesley.

Gilb, T. (2005). Competitive engineering: A handbook for systems engineering management using Planguage. Butterworth-Heinemann.

Gilb, T., & Graham, D. (1993). Software inspection. Addison-Wesley.

Hamilton, M. (1960). A rating scale for depression. Journal of Neurology, Neurosurgery, and Psychiatry, 23(1), 56-62.

Kennedy, M. N., & Harmon, K. (2008). Ready, set, dominate: Implement Toyota’s set-based learning for developing products and nobody can catch you. Oaklea Press.

Morgan, J. M., & Liker, J. K. (2006). The Toyota product development system: Integrating people, process, and technology. Productivity Press.

Sobel, A. E., & Clarkson, M. R. (2002). Formal methods application: An empirical tale of software system development. IEEE Transactions on Software Engineering, 28(3), 308-320.

W3C Web Accessibility Initiative. (2018). Web content accessibility guidelines (WCAG) 2.1. World Wide Web Consortium.

Ward, A. C. (2007). Lean product and process development. Lean Enterprise Institute.

Weinberg, G. M. (1985). The secrets of consulting: A guide to giving and getting advice successfully. Dorset House.

Yourdon, E. (1997). Death march: The complete software developer’s guide to surviving ‘mission impossible’ projects. Prentice Hall.

The Human Factor: Why Psychology is Tech’s Most Undervalued Discipline

From cognitive biases to team dynamics, the psychological insights that could revolutionise how we build products, manage teams, run businesses and drive innovation

Silicon Valley has conquered machine learning, perfected continuous deployment, and built systems that serve billions. Yet for all its technical mastery, the tech industry repeatedly fails at something far more fundamental: understanding people.

The evidence is overwhelming. Digital transformations fail at rates between 70% and 95%, with an average failure rate of 87.5% (Bonnet, 2022). Software projects consistently run over budget and behind schedule, wasting millions of pounds. Developer burnout has reached epidemic proportions. User adoption of new features remains stubbornly low despite sophisticated A/B testing.

The common thread? These aren’t technical failures—they’re human failures. Failures of communication, motivation, decision-making, relationships, and understanding what actually drives behaviour.

The Industry’s Psychological Blind Spot

Walk through any tech office and you’ll witness a fascinating paradox. Engineers who can optimise algorithms to microsecond precision struggle to understand why their perfectly logical user interface confuses customers. Engineering gurus who architect fault-tolerant distributed systems can’t figure out why their teams are demotivated. Product managers who obsess over conversion metrics completely miss the emotional journey that determines whether users actually adopt their features.

This isn’t incompetence—it’s a systematic blind spot. Technical education trains us to think in features, algorithms, and deterministic outcomes. We learn to eliminate variables, optimise for efficiency, and build predictable solutions. But humans are gloriously, frustratingly unpredictable.

The blind spot runs deeper than individual ignorance. There’s a cultural disdain for anything psychology-related (interesting in itself from a psychology perspective). Mention “team dynamics” in a planning meeting and watch the eye-rolls. Suggest that cognitive biases might be affecting architectural decisions and you’ll be dismissed as pushing tree-hugging, woke “soft skills” nonsense. The tech industry has convinced itself that psychology is touchy-feely therapy speak, irrelevant to the serious business of building software and running businesses.

This dismissal comes at a massive cost. When we ignore psychology, we build products that solve the wrong problems, create team environments that burn out our best people, and make flawed decisions based on biases we don’t even recognise.

The Data-Driven Case for Psychology

Ironically, one of history’s most influential systems thinkers understood psychology’s business value perfectly. W. Edwards Deming—the statistician whose principles revolutionised manufacturing quality and helped rebuild Japan’s post-war economy—made psychology one of the four pillars of his “System of Profound Knowledge”. And, from his perspective, the most important of the four.

Deming didn’t treat psychology as a nice-to-have add-on. He argued that managers must understand human nature, motivation, and behaviour to build effective ways of working. His famous insight that 94% of quality problems stem from systems and management—not worker incompetence—was fundamentally psychological. Yet tech management, which claims to worship data-driven decision making, has ignored these insights from one of the most successful data-driven thinkers in history.

Modern research backs up Deming’s intuition. Studies consistently show that psychological factors are among the strongest predictors of software project success:

Research on agile development teams found that human-related factors—quality of relationships, team capability, customer involvement, and team dynamics—are the critical success factors, far outweighing technical considerations (Barros et al., 2024).

Studies of developer performance demonstrate that emotional states directly impact problem-solving abilities, with “happy developers” significantly outperforming their stressed counterparts on analytical tasks (Graziotin et al., 2014).

Analysis of team effectiveness reveals that personality traits and interpersonal dynamics have measurable impacts on code quality, delivery timelines, and innovation rates (Acuña et al., 2017).

The data is clear: psychology isn’t optional. It’s a core competency that determines whether technical brilliance translates into business success.

The Psychology Toolkit for Tech

Psychology isn’t a monolithic field—it’s a rich ecosystem of frameworks and insights that can transform how we approach technical challenges. Let’s explore just a few of the most powerful tools.

Cognitive Biases: The Bugs in Human Reasoning

Just as we debug code, we need to debug our thinking. Cognitive biases are systematic errors in reasoning that affect every decision we make, including the technical ones:

Confirmation Bias leads engineers to seek information that supports their preferred solution whilst ignoring alternatives. That’s why teams often stick with familiar technologies even when better options exist.

Sunk Cost Fallacy keeps teams investing in failing projects because of previous effort. We’ve all seen projects that should have been killed months ago but continued because “we’ve already invested so much.”

Planning Fallacy explains why developers consistently underestimate task complexity. It’s not laziness—it’s a predictable cognitive bias that affects every developer (and managers, too).

Availability Heuristic makes recent incidents seem more likely than they actually are, leading to over-engineering for problems that rarely occur—a.k.a. gold plating.

Understanding these biases doesn’t eliminate them, but it enables us to build processes that account for them. Code reviews help catch confirmation bias. Time-boxed experiments limit sunk cost fallacy. Historical data counteracts planning fallacy.
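That last point about historical data can be made concrete. A minimal sketch of debiasing an estimate with a team’s own track record—an approach sometimes called reference-class forecasting—might look like this; all the figures are invented for illustration:

```python
# Illustrative sketch of countering the planning fallacy with historical
# data: scale a new estimate by the team's past ratio of actual to
# estimated effort. All figures are invented for illustration.

def debias(estimate_days: float, history: list[tuple[float, float]]) -> float:
    """Adjust an estimate using (estimated, actual) pairs from past work."""
    ratios = [actual / estimated for estimated, actual in history]
    typical_overrun = sorted(ratios)[len(ratios) // 2]  # median overrun ratio
    return estimate_days * typical_overrun

past = [(5, 8), (3, 4), (10, 19), (2, 3), (8, 12)]  # estimated vs actual days
print(debias(6, past))  # -> 9.0
```

The value isn’t precision—it’s that the correction comes from evidence about this team rather than from anyone’s optimism.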

User Psychology: Beyond A/B Testing

Most product teams approach users like they approach code—looking for deterministic patterns and optimal solutions. But users don’t behave logically; they behave psychologically.

Loss Aversion: People feel losses more acutely than equivalent gains. This affects everything from pricing strategies to feature adoption. Users will stick with inferior solutions rather than risk losing what they already have.

Mental Models: Users approach new interfaces with existing expectations. Fighting these mental models creates friction; aligning with them creates intuitive experiences.

Choice Overload: Contrary to Silicon Valley dogma, more options don’t always create better outcomes. Too many choices can paralyse users and reduce satisfaction even when they do choose.

Social Proof: People follow what others do, especially in uncertain situations. This is why testimonials, usage statistics, and “trending” indicators can dramatically impact adoption.

Motivation Theory: What Actually Drives Performance

The tech industry’s approach to motivation is remarkably naive: pay people well, give them interesting problems, and assume they’ll perform. But decades of research reveal motivation is far more complex.

Self-Determination Theory identifies three psychological needs that drive intrinsic motivation:

Autonomy: People need control over their work. Micromanagement destroys motivation even when well-intentioned. The most productive developers choose their own tools, approaches, and priorities within clear constraints.

Competence: People need to feel effective and capable. This means providing appropriate challenges, learning opportunities, and recognition for growth. Boredom and overwhelm both kill motivation.

Relatedness: Humans need connection and shared purpose. Remote work and competitive environments can undermine this need, leading to disengagement even when technical work is satisfying.

Companies that design roles around these three needs see higher productivity, lower turnover, and more innovation. Companies that ignore them burn through talent despite offering competitive salaries.

Eric Berne’s Transactional Analysis: A Framework for Management

Among psychology’s many insights, one framework stands out for its practical application to management challenges: Eric Berne’s Transactional Analysis (TA).

Developed in the 1950s, TA provides a simple but powerful model for understanding interpersonal dynamics. Berne identified three “ego states” that everyone operates from:

Parent: The inherited voices of authority figures. When we’re in Parent mode, we’re either nurturing (“Let me help you”) or criticising (“You’re doing it wrong”).

Adult: Rational, present-moment thinking. This is where we process information objectively and respond appropriately to current situations.

Child: Our emotional, spontaneous, creative self. This includes both our playful, innovative side and our adapted, compliant side.

Every conversation involves transactions between these ego states. Understanding these patterns can transform management, team and group effectiveness, particularly in the fraught dynamics between management and workers.

TA in Action: Management vs Workers

The Micromanaging Manager

Situation: Sarah, an engineering manager, constantly checks on her senior developers, questions their technical decisions, and demands detailed status reports. Team productivity plummets and two experienced engineers start looking elsewhere.

Traditional Analysis: “Sarah needs to trust her team more. The developers are being defensive.”

TA Analysis: Sarah operates from Criticising Parent (“I need to oversee everything”), which triggers her developers’ Rebellious Child (“Stop treating us like incompetent children”). The developers’ Adult expertise gets bypassed entirely.

Solution: Sarah shifts to Adult state: “What obstacles are blocking your progress? How can I help remove them?” This invites Adult-to-Adult collaboration rather than Parent-to-Child control and confrontation.

The Blame-First Post-Mortem

Situation: After a production incident, CTO Mark runs a post-mortem focused on “who made the mistake.” Junior developer Jenny, who deployed the problematic code, sits silently while Mark questions her testing procedures. The team leaves feeling demoralised rather than enlightened.

TA Analysis: Mark operates from Criticising Parent (“Someone needs to be held accountable”), triggering Jenny’s Adapted Child (shame and withdrawal). Other team members also shift to Child state, afraid they’ll be next.

Solution: Mark engages Adult state: “Let’s understand what systemic issues allowed this to reach production. How do we improve our processes?” This frames the incident as a learning opportunity rather than a blame assignment.

The Innovation Killer

Situation: Technical architect David consistently rejects new ideas from his team with responses like “That’s not how we do things” or “That technology is too risky.” The team stops proposing improvements and settles into maintenance mode.

TA Analysis: David operates from Criticising Parent, prioritising control over innovation. His team’s Natural Child (creativity and enthusiasm) gets suppressed, and they shift to Adapted Child—compliant but disengaged.

Solution: David engages Adult state when evaluating proposals: “Walk me through your thinking. What problems does this solve and what risks do we need to mitigate?” This validates creative thinking while maintaining appropriate oversight.

The Abdication Executive

Situation: VP of Engineering Lisa assigns a complex microservices migration with minimal guidance: “You’re smart people, figure it out.” Three months later, teams are building incompatible services and the project is behind schedule and over budget.

TA Analysis: Lisa operates from Free Child—enthusiastic but irresponsible, delegating without providing necessary structure. Her team is forced into Adapted Child, trying to guess her expectations while being set up for failure.

Solution: Lisa engages Adult state to provide context and constraints: “Here’s why we’re migrating, here are our business and technical constraints, and here’s how we’ll measure success. What approach do you recommend?” This treats her team as professional partners rather than subordinates.

Beyond TA: The Broader Psychology Toolkit

Transactional Analysis is just one tool in a comprehensive psychology toolkit. Other frameworks provide equally valuable insights:

Group Dynamics: Bruce Tuckman’s model of team development—forming, storming, norming, performing—explains why new teams struggle initially and how to accelerate their progression to high performance.

Change Psychology: Understanding why people resist change (loss of control, uncertainty, increased complexity) enables more effective technology adoption and organisational transformation.

Decision Science: Research on how people actually make decisions (versus how we think they should) can improve everything from user interface design to enterprise sales processes.

Behavioural Economics: Insights like anchoring effects, framing bias, and loss aversion can dramatically improve product design, pricing strategies, and user engagement.

The Business Case for Psychological Literacy

Understanding psychology isn’t about being nice—it’s about being effective, especially in the domain of people. Companies that develop psychological literacy see measurable improvements:

Better Product-Market Fit: When you understand user psychology—their biases, emotions, and decision-making patterns—you can design experiences that truly resonate rather than just optimising random metrics.

Higher Team Performance: Research consistently shows that team dynamics, motivation, and emotional states directly impact code quality, innovation rates, and delivery speed.

More Effective Fellowship: Fellows who understand frameworks like TA, motivation theory, and cognitive biases make better decisions, communicate more effectively, and build higher-performing teams.

Improved Change Management: Understanding the psychology of change—why people resist it, how they adopt new behaviours, what motivates transformation—enables more successful technology adoptions and organisational changes.

Stronger Customer Relationships: Sales, support, and customer success teams become far more effective when they can recognise psychological patterns and respond appropriately.

Building Psychological Literacy

Developing psychological competence means building skills in several areas:

Pattern Recognition: Learning to identify psychological patterns in yourself and others—ego states in interactions, cognitive biases in decision-making, team dynamics that help or hinder performance.

Framework Fluency: Understanding proven models like TA, motivation theory, cognitive bias research, and team psychology. These aren’t abstract theories—they’re practical tools for solving real problems.

Emotional Intelligence: Developing the ability to recognise and work with emotions rather than pretending they don’t exist or dismissing them as irrelevant to technical work.

Systems Thinking: Recognising that human systems are as complex and important as technical systems. Team dynamics, user behaviour, and organisational culture follow patterns that can be understood and optimised.

Research Literacy: Understanding how to evaluate psychological research and apply evidence-based insights rather than relying on intuition or management fads.

This doesn’t require everyone to become psychologists. It means recognising that psychology offers evidence-based tools for solving the human problems that consistently derail technical projects. And one or two people on a team with psychology skills are distinct assets.

The Future Competitive Advantage

Your current tech stack will become obsolete. Your architecture will be rewritten. Your products and features will be replaced. But organisations that master the human elements of the technology business will build lasting competitive advantages.

The companies that thrive in the next decade won’t just have better engineers—they’ll have better people smarts. They’ll understand what motivates their teams, what drives their customers, and what biases affect their decisions. They’ll build products that work for real humans rather than idealised users. They’ll create environments where people do their best work rather than burning out.

Psychology isn’t a “soft skill” addition to technical competence—it’s a force multiplier that makes everything else more effective. When you understand how people actually think, feel, and behave, you can design better experiences, create more effective teams, make better decisions, and build more successful organisations.

The tech industry’s next breakthrough won’t come from a new programming language or cloud service. It’ll come from finally bridging the gap between technical excellence and psychological mastery.

Because at the end of the day, all technology is about people. The sooner we start working with psychology in mind, the sooner we’ll build things that actually work for the beautifully complex humans who use them.

Further Reading

Acuña, S. T., Gómez, M., & Juristo, N. (2017). An examination of personality traits and how they impact on software development teams. Information and Software Technology, 86, 101-122.

Barros, L. B., Varajão, J., & Helfert, M. (2024). Agile software development projects–Unveiling the human-related critical success factors. International Journal of Information Management, 75, 102737.

Berne, E. (1961). Transactional analysis in psychotherapy: A systematic individual and social psychiatry. Grove Press.

Berne, E. (1964). Games people play: The psychology of human relationships. Grove Press.

Bonnet, D. (2022, September 20). 3 stages of a successful digital transformation. Harvard Business Review.

Deci, E. L., & Ryan, R. M. (2000). The “what” and “why” of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227-268.

Deming, W. E. (1982). Out of the crisis. MIT Press.

Graziotin, D., Wang, X., & Abrahamsson, P. (2014). Happy software developers solve problems better: psychological measurements in empirical software engineering. PeerJ, 2, e289.

Heath, C., & Heath, D. (2013). Decisive: How to make better choices in life and work. Crown Business.

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Norman, D. A. (2013). The design of everyday things: Revised and expanded edition. Basic Books.

Pink, D. H. (2009). Drive: The surprising truth about what motivates us. Riverhead Books.

Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.

Tuckman, B. W. (1965). Developmental sequence in small groups. Psychological Bulletin, 63(6), 384-399.

You’re the Mark

A critical examination of how Agile software development transformed from a liberation movement into a wealth extraction mechanism

Twenty-three years ago, seventeen software developers gathered at a ski resort in Utah and thrashed out the Agile Manifesto—a mere 68 words that would spawn a multi-billion dollar global rent-seeking industry. What began as a febrile attempt to improve the lot of software developers has metastasised into the most successful rent-seeking operation in corporate history. Today’s Agile ecosystem extracts enormous wealth from organisations worldwide while delivering pretty much zero value, creating a perfect case study in how good intentions can be weaponised for profit.

A Note to Developers

Most developers reading this already suspect something’s amiss. You’ve likely developed a nuanced, perhaps conflicted relationship with Agile practices—simultaneously recognising their theatrical aspects whilst navigating hiring expectations that demand fluency in the ceremonies. You may have already internalised that story pointing is largely kabuki theatre, that many retrospectives produce no meaningful change, and that velocity metrics often obscure rather than illuminate actual progress.

The challenge isn’t ignorance—it’s entrapment. Even developers who see through the performance still find themselves trapped by industry demand. Job descriptions demand Agile experience. Performance reviews measure engagement with Agile processes. Career advancement often requires demonstrating enthusiasm for practices you privately reject. This creates a sophisticated form of professional Stockholm syndrome where intelligent people participate in systems they recognise as dysfunctional at best because non-participation means no job, no income.

Some of you have found ways to work effectively within or around Agile constraints—delivering value despite the overhead rather than because of it. Others have embraced pragmatic subsets whilst ignoring the more theatrical elements. Still others have built careers on Agile coaching or Scrum mastery and face the uncomfortable reality that your expertise has become part of the rent-seeking apparatus.

The analysis that follows isn’t an attack on your intelligence or choices—it’s an attempt to name the economic forces that have shaped an industry where intelligent people spend increasing amounts of time on activities they know add next to no value, at best.

The Anatomy of Agile Rent Seeking

Rent seeking, in economic terms, occurs when individuals or organisations manipulate the environment to increase their share of existing wealth without creating new value. The modern Agile industry exhibits every hallmark of classic rent-seeking behaviour—extracting wealth from existing economic activity without creating new value.

The Certification Racket

The most obvious manifestation is the explosion of Agile certifications. Scrum Alliance alone has issued over 1.3 million certifications, each demanding fees, training courses, and periodic renewal. These credentials have no regulatory backing, no standardised curriculum, and no measurable correlation with improved outcomes. Yet they’ve become de facto requirements for countless positions.

Consider the absurdity: you can become a ‘Certified Scrum Master’ with a two-day course and $1,400, despite having never managed a software project. The certification teaches you to facilitate meetings and maintain a backlog—activities that competent professionals have done for decades without special training. But the certificate creates artificial scarcity and justifies premium salaries for what amounts to administrative work—classic rent-seeking through credentialism.

The Consultant Multiplication Complex

Agile has created an entire consulting ecosystem that extracts wealth by feeding on organisational anxiety. Anxiety about achieving appreciable ROI from Agile investments they’ve already made or are about to make. Companies spend millions on Agile coaches, transformation consultants, and implementation specialists who lack deep technical expertise but excel at selling process theatre.

These consultants arrive with identical playbooks: conduct ‘maturity assessments,’ implement story point estimation, establish retrospective ceremonies, and create elaborate metrics dashboards. They transform simple development work into elaborate rituals that require their ongoing presence to maintain. The process becomes the product, and the consultants extract rent as indispensable guardians of the process.

Tool Vendor Capture

The Agile ecosystem has spawned specialised software tools that extract rent through expensive, long-term contracts whilst locking organisations into vendor dependency. Jira, Azure DevOps, and dozens of competitors have convinced companies they need sophisticated ‘Agile project management platforms’ to track work that developers previously managed with simple task lists.

These tools don’t improve development velocity—they hinder it with excessive overhead and forced workflows. But they generate subscription revenue whilst creating switching costs that trap organisations. The tools become shelfware that teams work around rather than with, yet the contracts auto-renew annually.

The Value Creation Mirage

Proponents argue that Agile creates value through faster delivery, better collaboration, and improved quality. But where’s the evidence?

The Productivity Paradox

Despite decades of Agile adoption, software productivity remains stagnant or has declined by many measures. The average enterprise software project still runs over budget and behind schedule. Technical debt continues to accumulate. Developer satisfaction surveys consistently rank process overhead as a top frustration.

Meanwhile, the most productive software teams practise development methods that bear no resemblance to ceremonial Agile. They focus on technical excellence, autonomous teams, and minimal process overhead. Their success comes from owning and paying attention to the way the work works, and removing obstacles, not from following ceremonies—yet the Agile industry extracts zero rent from these approaches, which explains why they’re rarely promoted.

The Innovation Slowdown

Agile’s emphasis on incremental delivery and user story decomposition actively discourages breakthrough innovation. The methodology breaks everything into small, measurable chunks that can be completed in two-week sprints. This works for maintenance programming but stifles the sustained, exploratory work that produces real advances.

The pressure for continuous delivery means teams avoid ambitious architectural changes or experimental features that disrupt their velocity metrics. Innovation requires periods of unproductive exploration that Agile frameworks penalise.

The Parasitic Nature of Modern Agile

The Agile ecosystem exhibits classic parasitic behaviour—perhaps even vampiric in its sophistication. Like successful parasites, it has evolved to maximise extraction whilst keeping the host organisation just functional enough to provide ongoing sustenance.

The infection spreads through professional networks, with each ‘transformation’ creating new vectors for transmission. Agile consultants don’t merely extract value; they’re blood-sucking entities that create psychological dependency whilst draining organisational vitality. The host organisation experiences symptoms—reduced productivity, innovation suppression, increased overhead—but the parasite has evolved elegant mechanisms to convince the host these symptoms indicate ‘transformation in progress.’

This parasitic industry has perfected the art of seduction over brute force. Rather than simply imposing systems, they seduce organisations with promises of ‘digital transformation’ and ‘competitive advantage.’ Like vampires creating willing thralls, they convert leadership into advocates who spread the infection throughout the organisation, believing themselves enlightened rather than sired.

The parasitic relationship explains why failed Agile implementations invariably lead to more Agile investment. The parasite ensures its survival by convincing the weakened host that salvation requires deeper commitment, more sophisticated tools, and extended coaching. The blood-sucking continues until it becomes normalised as the cost of ‘modern business practices.’

Most tellingly, the parasite suppresses the host’s immune system—the natural organisational instinct to question whether elaborate processes actually improve outcomes. Any attempt to reject the parasite gets reframed as ‘resistance to change’ or ‘lack of understanding,’ ensuring the parasitic relationship continues untrammelled.

The Self-Perpetuating Machine

The accidental genius of the Agile rent-seeking apparatus lies in its self-reinforcing nature and sophisticated psychological protection mechanisms. When Agile implementations fail—which they frequently do—the prescribed solution is always more Agile: additional training, better coaches, more advanced tools, or newer frameworks like SAFe (the Scaled Agile Framework—total bullshit, btw).

The industry operates as a mass delusion with profit margins. Any criticism gets deflected with ‘you just don’t understand Agile properly’ or ‘you need better coaching’ or ‘you’re not truly embracing the mindset.’ It’s an Emperor’s New Clothes defence that makes critics the problem, not the approach. The industry and its parasites have successfully convinced organisations that questioning Agile means you’re ‘not getting it’, rather than seeing through an elaborate wealth extraction scheme.

This voluntary rent-seeking represents a key innovation in wealth extraction. Traditional rent-seeking involves regulatory capture or monopolistic practices, but the Agile complex gets organisations to voluntarily pay for their own wealth extraction by convincing them it’s necessary for ‘digital transformation’ and ‘staying competitive.’

The system creates perfect conditions where questioning the value means you’re culturally backwards, failure is always the customer’s fault (insufficient buy-in, wrong coaches, inadequate training), success stories remain anecdotal whilst failures require more investment, and the solution to Agile problems is always more Agile.

SAFe represents the apotheosis of Agile rent-seeking. It takes the bureaucracy that Agile originally opposed and rebrands it as ‘scaled Agile practices.’ Organisations that adopted Agile to escape process overhead find themselves implementing elaborate hierarchies of Product Owners, Release Train Engineers, and Solution Architects—all requiring specialised training and certification. And money, money, money. Ka-ching!

Most frameworks’ complexity ensures that organisations need permanent Agile transformation teams and ongoing consulting support. Success is measured not by software quality or business outcomes, but by ‘Agile maturity metrics’ that conveniently require more investment to improve.

The Largest Wealth Destruction Scam in Corporate History

The sheer scale of Agile rent-seeking dwarfs any previous rent-seeking operation in corporate history. Conservative estimates place the total economic impact at approximately $1.8 trillion annually in 2025—potentially the largest wealth destruction scheme ever devised.

To put this in perspective, this matches the GDP of countries like Russia ($1.8 trillion) and approaches major economies like Canada ($2.1 trillion). Whilst only a fraction represents direct wealth extraction, the total economic impact from systematically choosing elaborate theatre over effective approaches destroys value equivalent to six times the entire global consulting market ($300 billion annually).

The breakdown reveals the sophistication of the operation:

External Agile Services: $35-50 billion annually from enterprise agile transformation consulting, coaching, and implementation services. The enterprise agile transformation services market reached $35.7 billion in 2023 and continues growing at 17.6% annually, with over 20,000 enterprises using SAFe worldwide.

Corporate Internal Spending: $25-40 billion annually on internal Agile transformation teams, process overhead, and organisational restructuring. 70% of Fortune 100 companies have SAFe implementations, requiring substantial ongoing internal investment beyond external consulting.

Enterprise Software Ecosystem: $10-15 billion in Agile-specific tool licensing and platform fees. Atlassian generates $4.3 billion annually with over 127,528 companies using Jira globally, representing just one vendor in a vast ecosystem of process-centric platforms that add questionable value.

The Certification Mill: $3-5 billion in credentialing, training, and continuing education fees. Despite Scrum Alliance generating only $74 million annually, the global certification ecosystem encompasses hundreds of bodies extracting fees from over 2 million SAFe practitioners and 1.5 million Scrum Alliance certifications.

Direct Rent-Seeking Total: $73-110 billion annually in measurable wealth extraction.

Opportunity Costs: $1.71 trillion annually—the true cost of systematically rejecting approaches that actually work in favour of elaborate theatre. With global software development spending at $570 billion annually, this represents three times the entire industry’s expenditure wasted by choosing process-heavy rent-seeking over people-centric methods that managers systematically reject because they can’t be monetised. The greatest waste isn’t Agile’s inefficiency, but the productivity gains foregone by refusing to trust developers, eliminate process overhead, and focus on outcomes rather than ceremonies.

No management consulting scam in history approaches this scale. McKinsey’s global revenue is roughly $15 billion annually—the Agile complex destroys 120 times that amount whilst delivering demonstrably worse outcomes. It represents the perfect storm of rent-seeking: voluntary adoption, self-reinforcing mechanisms, psychological capture, and a product (meetings and processes) with essentially zero marginal cost to produce.

Agile has been weaponised against the very people it was meant to help. The developers who created the manifesto to escape bureaucratic oppression are now the primary victims—being sold a corrupted version of their own liberation movement. They’re not just marks; they’re marks being conned with their own revolutionary manifesto.

The 68-word manifesto created to help software developers has spawned a nearly $2 trillion industry that primarily exists to extract wealth from the organisations it claims to help. This isn’t just rent-seeking—it’s wealth destruction on an unprecedented scale.

Breaking Free from the Industrial Complex

The original Agile Manifesto emphasised ‘individuals and interactions over processes and tools.’ I’ll say that again: the Agile Manifesto emphasised ‘individuals and interactions over processes and tools.’ Today’s Agile industry has inverted these priorities, creating elaborate, fee-generating processes that constrain individuals and expensive tools that complicate interactions.

Successful software development doesn’t require Agile certification programmes, specialised consultants, or enterprise platforms. It requires empowered and motivated people with clear goals, adequate resources, and minimal interference. Some companies successfully building software figured this out long ago.

The Agile industrial complex persists because it sells comfort and blame-avoidance to anxious managers who prefer following established processes to making difficult decisions about technology, people and work. But that comfort comes at an enormous price—one that’s extracted from productive work and diverted to rent seekers who’ve weaponised professional anxiety into a profit centre.

It’s way past time to recognise Agile for what it’s become: not a development approach, but a sophisticated rent-seeking mechanism that enriches consultants whilst impoverishing the craft of software development and the businesses that depend on it. It’s too sophisticated and voluntary to be called racketeering (cf. RICO)—it’s just exceptionally effective rent-seeking that operates through willing participation rather than criminal coercion. (Personally, I’d call it criminal, but that’s me.)

If you’re paying for Agile certifications, consultants, and tools, you’re being played.

The emperor’s new clothes were always just clothes, and good software was being built long before anyone needed a certificate to prove they could facilitate a standup meeting.


Further Reading

Beck, K., Beedle, M., van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., … & Thomas, D. (2001). Manifesto for agile software development. Retrieved from http://agilemanifesto.org/

Buchanan, J. M., Tollison, R. D., & Tullock, G. (Eds.). (1980). Toward a theory of the rent-seeking society. Texas A&M University Press.

DeMarco, T., & Lister, T. (2013). Peopleware: Productive projects and teams (3rd ed.). Addison-Wesley Professional.

Krueger, A. O. (1974). The political economy of the rent-seeking society. American Economic Review, 64(3), 291-303.

Little, T. (2005). Context-adaptive agility: Managing complexity and uncertainty. IEEE Software, 22(3), 28-35.

McConnell, S. (2006). Software estimation: Demystifying the black art. Microsoft Press.

Menzies, T., Butcher, A., Cok, D., Marcus, A., Layman, L., Shull, F., … & Zimmermann, T. (2013). Local versus global lessons for defect prediction and effort estimation. IEEE Transactions on Software Engineering, 39(6), 822-834.

Sommerville, I. (2015). Software engineering (10th ed.). Pearson.

Tullock, G. (1967). The welfare costs of tariffs, monopolies, and theft. Western Economic Journal, 5(3), 224-232.


The author has worked in software development for over five decades and has witnessed the transformation of Agile from grassroots liberation movement to corporate industrial oppression complex. 

Why Europeans Reject Their Own Tech Innovations But Worship Americans’

A British Pioneer’s 30-Year Journey Through the European Inferiority Complex

The current wave of anti-American sentiment—really anti-Trump sentiment—feels familiar to me. After 53 years in software development, I’ve watched this pattern repeat itself for decades. It’s old hat by now. What makes it ironic is the contrast with Europeans’ continued deference to American influence in technology, including in my own experience with what would later be called Agile software development.

Seven Years Before Snowbird

Back in 1994, seven years before the infamous gathering at Snowbird ski resort resulted in the Agile Manifesto, I was developing my own approach. It was a British version of what would later be called Scrum. We called it Jerid (now Javelin), developed independently of any American work—or even of the original Japanese ideas from Takeuchi and Nonaka’s 1986 ‘The New New Product Development Game’.

The foundational concepts of ‘Snowbird Agile’ weren’t American at all—they came from Japanese manufacturing insights about rugby-style team approaches. Yet here I was, a Brit, independently developing similar collaborative methods. Americans would later brand and package what had Japanese origins and European development.

Whilst managers on both sides of the Atlantic were still forcing waterfall methods and heavy processes on their development teams, we were pioneering collaborative approaches that emphasised attending to real human needs.

The Support That Never Came

Did I get support from my fellow Europeans for this early work? Not on your nelly. Although, to be fair, I was operating under the radar until around 2000. I preferred to be doing the work, at the coalface, rather than talk and write about it.

When I did start sharing around 2000—still a year before Snowbird—the response was scepticism, indifference, and institutional resistance. European software companies were comfortable with their (lame) established processes. The idea that we needed to rethink how we approached software development was met with the same enthusiasm typically reserved for a bath in dog sick.

The very principles I had been advocating were being dismissed as ‘too informal’, ‘lacking rigour’, or simply ‘not how we do things here’.

The Psychology of European Deference to American Tech

Here’s an extraordinary case study in how European thinking works when it comes to American influence in computing.

1986: Japanese scholars Hirotaka Takeuchi and Ikujiro Nonaka publish ‘The New New Product Development Game’. This introduces novel concepts about rugby-style team collaboration, later to influence Scrum (Jeff Sutherland and Ken Schwaber).

European response: Ignored.

1994: I independently develop Jerid (now Javelin), using these same collaborative principles.

European response: Rejected. ‘Not how we do things here.’

2001: Americans gather at Snowbird and package these globally-sourced concepts into the ‘Agile Manifesto’.

European response: Immediate, enthusiastic developer adoption. ‘We must implement American Agile practices!’

The same Europeans who had spent fifteen years ignoring Japanese innovation and rejecting the new British approach suddenly discovered these ideas were brilliant—the moment they carried American branding.

This isn’t just ‘not invented here’ syndrome. This is specifically ‘not invented by Americans’ syndrome. Europeans showed they would rather ignore breakthrough thinking from Japan (much the same as with Lean) and reject British innovation than risk appearing presumptuous about trailblazing in technology.

The message was clear: Only Americans have the authority to say what computing methods are good. Even when the ideas originated in Japan. Even when Europeans developed them independently. Even when the evidence was right in front of them for years.

Why Europeans Need American Permission

Thomas Kuhn’s work explains what happened. European institutions couldn’t recognise a paradigm shift when it emerged from within their own context. They needed outside approval from what they saw as the ultimate authority—American software development culture.

Europeans harbour a deep-seated inferiority complex towards Americans, especially when it comes to computing.

Beyond Even the Original Innovation

I wouldn’t even use Javelin today. I learned much with its help, but I’ve moved beyond it. I’ve developed elements of a more people-oriented approach: the Antimatter Principle, FlowChain, Prodgnosis / FlowGnosis, and Quintessence. These build on Javelin’s fundamental principles whilst addressing the people orientation that Agile’s industrialisation completely abandoned.

European organisations are still implementing corrupted versions of 30-year-old thinking that they initially rejected. Actual innovation has moved decades beyond the point they’re trying to reach. They’re not just behind; they’re still catching up to where I was in 1994.

This inferiority complex cost Europeans the opportunity to participate in the entire evolution of human-centred development approaches. Whilst they were waiting for American approval of ideas they’d already rejected, the real work continued elsewhere.

The American Brand, European Complicity

Anti-Trump sentiment won’t solve this European deference to America. Political feelings about America are separate from Europeans’ need to follow Americans’ lead in software development. European organisations implemented American Agile processes just as enthusiastically as anyone else, not because of American political influence, but because of their ingrained belief that Americans know better when it comes to technology.

Today’s anti-Trump sentiment makes this even more ironic. Europeans can maintain strong political criticisms of American leadership whilst simultaneously following American leadership for software development methods. And given the anti-human direction of Trump’s America, this becomes yet more ironic, and disturbing. Europeans continue seeking validation from an American tech culture increasingly moving away from human-centred values.

The real enemy isn’t American political influence. It’s Europeans’ collective willingness to mistake American tech branding for being inherently superior.

What This Means

As someone who lived through the birth of Agile methods from a European perspective—whilst working independently of both Japanese origins and American development—I know that the value of an idea isn’t determined by its passport. Neither is it determined by its popularity or market success, despite what rent-seeking consulting companies might opine.

The Agile principles we developed in 1994 were sound because they connected technical work with human purpose. Agile became corrupted not because Americans touched it, but because we all allowed market forces to transform human-centred practices into consultant-centred industries. This happened regardless of whether those practices had Japanese, European, or American origins.

The current anti-Trump sentiment reveals how Europeans can dislike American politics whilst still thinking Americans know best about technology. They still wait for American leadership before embracing new ideas.

The implications are worth considering. When institutions consistently dismiss local innovation whilst embracing identical ideas with foreign branding, what does that say about their ability to recognise value? When political independence coexists with technological subservience, what opportunities are being missed? When developers wanted to make a difference through software but got redirected into process optimisation, what problems remain unsolved?

Further Reading

Beck, K., Beedle, M., van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., Grenning, J., Highsmith, J., Hunt, A., Jeffries, R., Kern, J., Marick, B., Martin, R. C., Mellor, S., Schwaber, K., Sutherland, J., & Thomas, D. (2001). Manifesto for Agile Software Development. Agile Alliance. https://agilemanifesto.org/

Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.

Takeuchi, H., & Nonaka, I. (1986). The new new product development game. Harvard Business Review, 64(1), 137-146.


The author has 53 years of software development experience and in 1994 created Jerid (now Javelin), a version of what 7 years later became known as the Agile approach to software development. He continues to write about the disconnection between technical work and human purpose, to little avail so far, but he persists.

Sharing the Good Stuff

We discover brilliant content all the time. We’re uniquely positioned to curate value for our networks of friends, colleagues, and peers.

We know our circles better than anyone—the contexts our colleagues work in, the challenges our industry faces, the conversations that spark energy amongst our connections. This puts us in the perfect position to recognise what will resonate beyond our own experience.

We’re Already Doing the Work

We’re constantly reading, learning, and bookmarking insights that catch our attention. We’ve done the hard part: finding and evaluating content worth our time. Sharing it extends that value with minimal additional effort.

When we share something that made us think, we’re exercising our judgement publicly and building our reputation as someone who finds worthwhile ideas.

We Control Every Aspect

Our voice matters: We add context about why something caught our attention, or we let the content speak for itself. Our commentary shapes how others receive it.

We pick the platform: Professional networks, social media, direct messages, water cooler, casual conversations. We know which fits our style and goals.

We choose our audience: Broad sharing, targeted sends, or specific tags. We understand the relationships and determine what feels appropriate.

We set the tone: Our sharing starts discussions, provides resources, or offers interesting reading. The framing is ours to control.

What We Get Back

Sharing creates unexpected connections and strengthens existing relationships. We become known as valuable connectors who surface useful insights. We contribute to our professional community’s collective knowledge and establish our voice in important discussions.

Most importantly, we discover what resonates with our networks and refine our ability to spot content that travels well.

The Choice Is Always Ours

That article in your saved items folder represents potential energy. You found it valuable enough to bookmark. Now you decide whether to amplify its impact or keep it for yourself.

You’re the best judge of what deserves wider circulation. The option is always there when you’re ready to use it.

What’s the last piece of content you shared that sparked genuine conversation and connection? Those moments show the power of good curation.

Secrets of Techhood

A collection of hard-won wisdom from the trenches of technology work

After decades building software, leading teams, and watching organisations succeed and fail, certain patterns emerge. The same mistakes get repeated. The same insights get rediscovered. The same hard-learned lessons get forgotten and relearnt by the next generation.

This collection captures those recurring truths—the kind of wisdom that comes from doing the work, making the mistakes, and living with the consequences. These aren’t theoretical principles from academic papers or management books. They’re the practical insights that emerge when theory meets reality, when teams face real deadlines, and when software encounters actual users.

The insights come from diverse sources: legendary systems thinkers like W. Edwards Deming and Russell Ackoff, software pioneers, quality experts, organisational psychologists, and practising technologists who’ve shared their hard-earned wisdom. What unites them is practical relevance—each aphorism addresses real challenges that technology professionals face daily.

Use this collection as a reference, not a rulebook. Read through it occasionally. Return to specific aphorisms when facing related challenges. Share relevant insights with colleagues wrestling with similar problems. Most importantly, remember that wisdom without application is just interesting trivia.

The technology changes constantly, but the fundamental challenges of building systems, working with people, and delivering value remain remarkably consistent. These truths transcend programming languages, frameworks, and methodologies. They’re about the deeper patterns of how good technology work gets done.

Invitation: I’d love for readers to suggest their own aphorisms for inclusion in this collection. Please use the comments below.

The Aphorisms

It’s called software for a reason.

The ‘soft’ in software reflects its fundamental nature as something malleable, changeable, and adaptive. Unlike hardware, which is fixed once manufactured, software exists to be modified, updated, and evolved. This flexibility is both its greatest strength and its greatest challenge. The ability to change software easily leads to constant tweaking, feature creep, and the temptation to fix everything immediately. Yet this same flexibility allows software to grow with changing needs, adapt to new requirements, and evolve beyond its original purpose.

Learning hasn’t happened until behaviour has changed.

Consuming tutorials, reading documentation, and attending conferences is information absorption. True learning in tech occurs when concepts become internalised so deeply that they alter how problems are approached. Data analysis learning is complete when questioning data quality and looking for outliers becomes instinctive. Project management mastery emerges when breaking large problems into smaller, manageable pieces happens automatically.

Change hasn’t happened unless we feel uncomfortable.

Real change, whether learning a new technology, adopting different processes, or transforming how teams work, requires stepping outside comfort zones. If a supposed change feels easy and natural, you’re just doing familiar things with new labels. Genuine transformation creates tension between old habits and new ways of working.

The work you create today is a letter to your future self—create with compassion.

Six months later, returning to a project with fresh eyes and foggy memory is jarring. The folder structure that seems obvious today becomes a confusing maze tomorrow. The clever workflow that feels brilliant now frustrates that future self. Creating work as if explaining thought processes to a colleague makes sense—because that’s what’s happening across time.

Documentation is love made visible.

Good documentation serves as an act of kindness towards everyone who will interact with the work, including one’s future self. It bridges current understanding and future confusion. When processes are documented, decisions explained, or clear instructions written, there’s an implicit message: ‘I care about your experience with this work.’ Documentation transforms personal knowledge into shared resources.

Perfect is the enemy of shipped, and also the enemy of good enough.

The pursuit of perfection creates endless cycles of refinement that prevent delivery of value. Hours spent polishing presentations that already communicate effectively could address new problems or serve unmet needs. Yet shipping imperfection carries risks too—reputation damage, user frustration, or technical debt. Sometimes ‘done’ creates more value than ‘perfect’, especially when perfect never arrives.

Every problem is a feature request from reality.

Issues reveal themselves as more than annoying interruptions—they’re signals about unconsidered edge cases, incorrect assumptions, or untested scenarios. Each problem illuminates gaps between mental models of how things work and how they actually work in practice. When users struggle with an interface, they’ve submitted an unspoken feature request for better design.

The best problem-solving tool is a good night’s sleep.

The brain processes and consolidates information during sleep, revealing solutions that remained hidden during conscious effort. Challenges that consume hours of focused attention resolve themselves in minutes after proper rest. Sleep deprivation clouds judgement, reduces pattern recognition, and obscures obvious solutions.

Premature optimisation is the root of all evil, but so is premature pessimisation.

Whilst rushing to optimise before understanding the real bottlenecks is wasteful, it’s equally dangerous to create obviously inefficient processes under the banner of ‘we’ll fix it later.’ Don’t spend days perfecting workflows that run once, but also don’t use manual processes when simple automation would work just as well.

Your first solution is rarely your best solution, but it’s always better than no solution.

The pressure to find the perfect approach immediately creates analysis paralysis. First attempts prove naïve, inefficient, or overly complex, yet they provide crucial starting points for understanding problem spaces. Working solutions enable iteration, refinement, and improvement.

A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work.

John Gall’s Law captures a fundamental truth about how robust systems come into being. They aren’t architected in their final form—they grow organically from working foundations. The most successful large systems started as simple, functional prototypes that were gradually extended.

The hardest parts of tech work are naming things, managing dependencies, and timing coordination.

These three fundamental challenges plague every technology professional daily. Naming things well requires understanding not just what something does, but how it fits into the larger system and how others will think about it. Managing dependencies is difficult because it requires reasoning about relationships, priorities, and changes across multiple systems or teams. Timing coordination is hard because it means synchronising work across people, systems, and schedules that no single person controls.

Feedback is not personal criticism—it’s collaborative improvement.

When colleagues suggest changes to work, they’re investing their time and attention in making the outcome better. They’re sharing their knowledge, preventing future issues, and helping with professional growth. Good feedback is an act of collaboration, not criticism.

People will forgive not meeting their needs immediately, but not ignoring them.

Users, stakeholders, and colleagues understand that resources are limited and solutions take time. They accept that their need might not be the highest priority or that the perfect solution requires careful consideration. What damages relationships is complete neglect—not making any effort, not showing any care, not demonstrating that their concern matters. People can wait for solutions when they see genuine attention being paid to their situation. The difference between delayed action and wilful neglect determines whether trust grows or erodes. Attending to needs doesn’t require immediate solutions, but it does require genuine care and effort.

How you pay attention matters more than what you pay attention to.

The quality of attention transforms both the observer and the observed. Distracted attention whilst multitasking sends a clear message about priorities and respect. Focused, present attention—even for brief moments—creates connection and understanding. When reviewing code, listening with genuine curiosity rather than hunting for faults leads to better discussions and learning. When meeting with stakeholders, being fully present rather than mentally composing responses changes the entire dynamic. The manner of attention—rushed or patient, judgmental or curious, distracted or focused—shapes outcomes more than the subject receiving that attention.

Caring attention helps things grow.

Systems, teams, and individuals flourish under thoughtful observation and nurturing focus. When attention comes with genuine care—wanting to understand, support, and improve rather than judge or control—it creates conditions for development. Code improves faster when reviewed with constructive intent rather than fault-finding. Team members develop more rapidly when mistakes are examined with curiosity rather than blame. Projects evolve more successfully when monitored with supportive interest rather than suspicious oversight. The difference between surveillance and stewardship lies in the intent behind the attention.

The best work is work you don’t have to do.

Every process created needs to be maintained, updated, and explained. Before building something from scratch, considering whether an existing tool, service, or approach already solves the problem pays off. The work not done can’t break, doesn’t need updates, and never becomes technical debt.

Every expert was once a beginner who refused to give up.

Experience and expertise aren’t innate talents—they’re the result of persistence through challenges, failures, and frustrations. The senior professionals admired today weren’t born knowing best practices or troubleshooting techniques. They got there by continuing to learn, experiment, and problem-solve even when things felt impossibly difficult.

Your ego is not your work.

When others critique work, they engage with output rather than character. Suggestions for improvement, identified issues, or questioned decisions focus on the work itself, not personal worth. Work can be improved, revised, or completely replaced without diminishing professional value.

Testing is not about proving a solution works—it’s about showing where the work stands.

Good testing reveals current status rather than validating perfection. Tests illuminate what’s functioning, what’s broken, what’s missing, and what’s uncertain. Rather than serving as a stamp of approval, testing provides visibility into the actual state of systems, processes, or solutions.

The most expensive work to maintain is work that almost functions.

Work that fails obviously and consistently is easy to diagnose and fix. Work that functions most of the time but fails unpredictably is a maintenance nightmare. These intermittent issues are hard to reproduce, difficult to diagnose, and mask deeper systematic problems.

Changing things without understanding them is just rearranging the furniture.

When modifying systems, processes, or designs without adequate understanding of how they currently work, there’s no way to verify that essential functionality has been preserved. Understanding serves as a foundation for meaningful change, giving confidence that modifications improve things rather than just moving problems around.

Version control is time travel for the cautious.

Document management systems and change tracking tools let experimentation happen boldly because previous states can always be restored if things go wrong. They remove the fear of making changes because nothing is ever truly lost. Radical reorganisations, experimental approaches, or risky optimisations become possible knowing that reversion to the last known good state remains an option.

Any organisation that designs a system will produce a design whose structure is a copy of the organisation’s communication structure.

Conway’s Law reveals why so many software architectures mirror the org charts of the companies that built them. If you have separate teams for frontend, backend, and database work, you’ll end up with a system that reflects those boundaries—even when a different architecture would serve users better.

Question your assumptions before you question your code.

Most problems stem not from implementation errors but from incorrect assumptions about how systems work, what users will do, or how data will behave. Assumptions about network reliability, that users will provide valid input, that third-party services will always respond, or that files will always exist where expected become embedded in work as implicit requirements that aren’t tested or documented.
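The assumptions above can be surfaced as explicit, testable checks rather than left implicit. A minimal Python sketch of the idea — the function name, path, and checks are all hypothetical — turning two common hidden assumptions (the file exists; the input is valid) into code that states and verifies them:

```python
from pathlib import Path

def load_report(path_str: str) -> str:
    """Illustrative only: make implicit assumptions explicit and checkable."""
    path = Path(path_str)
    # Assumption: the file exists where expected -- check it, don't assume it.
    if not path.is_file():
        raise FileNotFoundError(f"Expected report at {path}, found nothing")
    text = path.read_text(encoding="utf-8")
    # Assumption: the input is non-empty and usable -- validate before use.
    if not text.strip():
        raise ValueError(f"Report at {path} is empty")
    return text
```

Each check documents an assumption that would otherwise live only in someone’s head.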

The problem is always in the last place you look because you stop looking after you find it.

This humorous observation about troubleshooting reflects a deeper truth about problem-solving methodology. Causes are checked in order of assumed likelihood, starting with the most obvious. Once the problem is found, the search stops, making that, by definition, the ‘last’ place searched.

Your production environment is not your testing environment, no matter how much you pretend it is.

Despite best intentions, many teams end up using live systems as their primary testing ground through ‘quick updates,’ ‘minor changes,’ and ‘simple fixes.’ Production environments have different data, different usage patterns, different dependencies, and different failure modes than development or testing environments.

Every ‘temporary solution’ becomes a permanent fixture.

What starts as a quick workaround becomes enshrined as permanent process. The ‘temporary fix’ implemented under deadline pressure becomes the foundation that other work builds upon. Before long, quick hacks become load-bearing infrastructure that’s too risky to change.

The work that breaks at the worst moment is always the work you trusted most.

Murphy’s Law applies strongly to technology work. The elegant, well-tested system that generates pride will find a way to fail spectacularly at the worst possible moment. Meanwhile, the hacky workaround that needed fixing will run flawlessly for years. Confidence leads to complacency, which creates blind spots where unexpected failures hide.

Always double-check the obvious.

Paranoia is a virtue in technology work. Even when certain about how a system works, validating assumptions, checking inputs, and considering edge cases remains worthwhile. Systems change, dependencies update, and assumptions that were true yesterday are not true today.

Notes are not apologies for messy work—they’re explanations for necessary complexity.

Good documentation doesn’t explain what the work does but why it does it. It explains business logic, documents assumptions, clarifies non-obvious decisions, and provides context that can’t be expressed in the work itself. Notes that say ‘process these files’ are useless, but notes that say ‘Account for timezone differences in date processing’ add valuable context.

The fastest process is the process that never runs.

Performance optimisation focuses on making existing processes run faster, but the biggest efficiency gains come from avoiding work entirely. Can expensive calculations be cached? Can results be precomputed? Can unnecessary steps be eliminated? The most elegant solution is recognising that certain processes don’t need to execute at all under common conditions.
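As a hedged illustration in Python (the function and workload are invented for the example), memoisation makes the second identical call a process that never runs:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive_total(n: int) -> int:
    # Stand-in for a costly calculation; any pure function works here.
    return sum(i * i for i in range(n))

first = expensive_total(100_000)   # computed once
second = expensive_total(100_000)  # answered from the cache; the loop never runs again
```

The cheapest execution of that loop is the one the cache made unnecessary.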

The systems that people work in account for 95 per cent of performance.

W.E. Deming’s insight: Most of what we attribute to individual talent or effort is determined by the environment, processes, and systems within which people operate. If the vast majority of performance comes from the system, then improving the system yields far greater returns than trying to improve individuals within a flawed system.

Individual talent is the 5 per cent that operates within the 95 per cent that is system.

Deming’s ratio explains why hiring ‘rock stars’ to fix broken systems fails, whilst putting competent people in well-designed systems consistently produces exceptional results. A brilliant programmer in a dysfunctional organisation will struggle, whilst an average programmer in a good system can accomplish remarkable things. The 5 per cent individual contribution becomes meaningful only when the 95 per cent system component enables and amplifies it.

Unless you change the way you think, your system will not change and therefore, its performance won’t change either.

John Seddon’s insight cuts to the heart of why so many improvement initiatives fail. Teams implement new processes, adopt new tools, or reorganise structures whilst maintaining the same underlying assumptions and beliefs that created the original problems. Real change requires examining and challenging the mental models, assumptions, and beliefs that shape how work gets designed and executed.

People are not our greatest asset—it’s the relationships between people that are our greatest asset.

Individual talent matters, but the connections, communication patterns, and collaborative dynamics between team members determine success more than any single person’s capabilities. The most effective teams aren’t composed of the most talented individuals, but of people who work well together and amplify each other’s strengths.

A bad system will beat a good person every time.

Individual competence and good intentions can’t overcome fundamentally flawed processes or organisational structures. When systems create conflicting incentives, unclear expectations, or impossible constraints, even capable people struggle to succeed. Good people in bad systems become frustrated, whilst average people in good systems accomplish remarkable things.

You can’t inspect quality in—it has to be built in.

Quality comes from improvement of the production process, not from inspection. Good systems prevent defects rather than just catching them. The most effective quality assurance focuses on improving how work gets done, not on finding problems after they occur.

The righter we do the wrong thing, the wronger we become. Therefore, it is better to do the right thing wrong than the wrong thing right.

Russell Ackoff’s insight highlights that effectiveness (doing the right things) must come before efficiency (doing things right). Becoming more efficient at the wrong activities compounds the problem. Focus first on whether you should be doing something before worrying about how well you do it.

Efficiency is doing things right; effectiveness is doing the right things.

Peter Drucker’s classic distinction reminds us that there’s little value in optimising processes that shouldn’t exist in the first place. The greatest risk for managers is the confusion between effectiveness and efficiency. There is nothing quite so useless as doing with great efficiency what should not be done at all.

The constraint determines the pace of the entire system.

In any process or organisation, one bottleneck limits overall performance regardless of how fast other parts operate. Optimising non-constraint areas looks productive but doesn’t improve system output. Finding and focusing improvement efforts on the true constraints provides the greatest leverage for overall performance gains.

Innovation always demands we change the rules.

When we adopt new approaches that diminish limitations, we must also change the rules that were created to work around those old limitations. Otherwise, we get no benefits from our innovations. As long as we obey the old rules—the rules we originally invented to bypass the limitations of the old system—we continue to behave as if the old limitations still exist.

In God we trust; all others bring data.

Decisions improve when based on evidence rather than assumptions, but data alone doesn’t guarantee good choices. Numbers mislead as easily as they illuminate, especially when they reflect measurement artefacts rather than underlying realities. Data provides a foundation for discussion and decision-making, but wisdom comes from interpreting that data within context.

Every bug you ship becomes ten support tickets.

John Seddon’s concept of ‘failure demand’ reveals how poor quality multiplies work. When you don’t get something right the first time, you generate cascading demand: customer complaints, support calls, bug reports, patches, and rework. It’s always more expensive to fix things after customers find them than to prevent problems in the first place.

Technical debt is like financial debt—a little helps you move fast, but compound interest will kill you.

Strategic shortcuts can accelerate delivery when managed carefully. Taking on some technical debt to meet a critical deadline or test market assumptions is valuable. But unmanaged technical debt accumulates interest through increased maintenance costs, slower feature development, and system brittleness.

The best code is no code at all.

Every line of code written creates obligations—debugging, maintenance, documentation, and ongoing support. Before building something new, the most valuable question is whether the problem needs solving at all, or whether existing solutions already address the need adequately. Code that doesn’t exist can’t have bugs, doesn’t require updates, and never becomes technical debt.

Start without IT. The first design has to be manual.

Before considering software-enabled automation, first come up with manual solutions using simple physical means, like pin-boards, T-cards and spreadsheets. This helps clarify what actually needs to be automated and ensures you understand the process before attempting to digitise it.

Simple can be harder than complex—you have to work hard to get your thinking clean.

Achieving simplicity requires understanding problems deeply enough to eliminate everything non-essential. Complexity masks incomplete understanding or unwillingness to make difficult choices about what matters most. Simple solutions demand rigorous thinking about core requirements, user needs, and essential functionality.

Design is how it works, not how it looks.

Visual aesthetics matter, but they serve the deeper purpose of supporting functionality and user experience. Good design makes complex systems feel intuitive, reduces cognitive load, and guides users towards successful outcomes. When appearance conflicts with usability, prioritising function over form creates better long-term value.

Saying no is more important than saying yes.

Focus emerges from deliberately choosing what not to do rather than just deciding what to pursue. Every opportunity accepted means other opportunities foregone, and attention is always limited. Organisations that try to do everything accomplish nothing well. Strategic success comes from identifying the few things that matter most and declining everything else.

Organisational effectiveness = f(collective mindset).

The effectiveness of any organisation is determined by the shared assumptions, beliefs, and mental models of the people within it. Technical solutions, processes, and structures matter, but they’re all constrained by the underlying collective mindset that shapes how people think about and approach their work.

Technologists who dismiss psychology as ‘soft science’ are ignoring the hardest variables in their systems.

Technical professionals gravitate toward problems with clear inputs, logical processes, and predictable outputs. Psychology feels messy and unquantifiable by comparison. But the human elements—motivation, communication patterns, cognitive biases, team dynamics—determine whether technical solutions succeed or fail in practice.

Code review isn’t about finding bugs—it’s about sharing knowledge.

Whilst catching defects has value, the real benefit of code reviews lies in knowledge transfer, spreading understanding of the codebase, sharing different approaches to solving problems, and maintaining consistency in coding standards. Good reviews help prevent knowledge silos and mentor junior developers.

All estimates are wrong. Some are useful.

Software estimates are educated guesses based on current understanding, not commitments or predictions. They’re useful for planning, prioritising, and making resource allocation decisions, but they shouldn’t be treated as contracts or promises. Use them as tools for discussion and planning, and remember that their primary value is in helping make better decisions.

Security is not a feature you add—it’s a discipline you practise.

Security can’t be bolted on after the fact through penetration testing or security audits alone. It must be considered throughout design, development, and deployment. Security is about creating systems that are resistant to attack by design, not just finding and fixing vulnerabilities after they’re built.

Your users will break your software in ways you never imagined—and they’re doing you a favour.

Real users in real environments expose edge cases, assumptions, and failure modes that controlled testing misses. They use your software in contexts you never considered, with data you never anticipated, and in combinations you never tested. Each break reveals gaps in your mental model of how the system should work.

Refactor before you need to, not when you have to.

Continuous small refactoring prevents code from becoming unmaintainable. When you’re forced to refactor, you’re already behind and under pressure, which leads to rushed decisions and compromised quality. Build refactoring into your regular development rhythm, not as crisis response.

If you can’t measure it breaking, you can’t fix it reliably.

Systems need observable failure modes through monitoring, logging, and alerting. Without visibility into system health and failure patterns, you’re debugging blindly and fixing symptoms rather than root causes. Good monitoring tells you not just that something broke, but why it broke and how to prevent it from happening again.
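A small, hypothetical Python sketch of the principle — the subsystem name and function are invented — recording not just that an operation failed, but why it failed:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(name)s: %(message)s")
log = logging.getLogger("payments")  # hypothetical subsystem name

def charge(amount_pence: int) -> bool:
    """Illustrative only: log the reason for a failure, not just the fact of it."""
    if amount_pence <= 0:
        # The 'why' in the log line turns debugging from guesswork into reading.
        log.error("charge rejected: non-positive amount_pence=%d", amount_pence)
        return False
    log.info("charge accepted: amount_pence=%d", amount_pence)
    return True
```

With failure reasons in the logs, fixing the root cause becomes a matter of reading rather than reproducing.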

Knowledge sharing is not cheating—it’s collaborative intelligence.

Technology work has always been collaborative, and online communities represent the democratisation of knowledge sharing. Looking up solutions to common problems isn’t cheating—it’s efficient use of collective wisdom. The key is understanding the solutions found rather than blindly copying them.

Error messages are breadcrumbs, not accusations.

Error messages aren’t personal attacks on competence—they’re valuable clues about what went wrong and how to fix it. Good error messages tell a story about what the system expected versus what it encountered. Learning to read error messages carefully and use troubleshooting data effectively is a crucial skill.

Collaboration is not about sharing tasks—it’s about sharing knowledge.

The value of collaborative work isn’t in the mechanical division of labour—it’s in the knowledge transfer, real-time feedback, and shared problem-solving that occurs. When professionals collaborate effectively, they share different perspectives, catch each other’s mistakes, and learn from each other’s approaches.

The most important skill in technology is knowing when to start over.

Abandoning problematic systems or processes and starting fresh proves more efficient than continuing to patch existing work. When complexity accumulates beyond economical improvement, when foundational assumptions prove flawed, or when requirements shift dramatically, fresh starts offer better paths forward.

Remember: Every expert was once a disaster who kept learning.

Further Reading

Ackoff, R. L. (1999). Re-creating the corporation: A design of organizations for the 21st century. Oxford University Press.

Conway, M. E. (1968). How do committees invent? Datamation, 14(4), 28-31.

Deming, W. E. (2000). Out of the crisis. MIT Press. (Original work published 1986)

Drucker, P. F. (2006). The effective executive: The definitive guide to getting the right things done. HarperBusiness. (Original work published 1967)

Gall, J. (2002). The systems bible: The beginner’s guide to systems large and small (3rd ed.). General Systemantics Press. (Original work published 1975)

Marshall, R. W. (2021). Quintessence: An acme for software development organisations. Falling Blossoms.

Seddon, J. (2019). Beyond command and control. Vanguard Consulting.

How We Broke 40 Million Developers: An Agile Pioneer’s Lament

I weep endless tears for all the folks who have poured so much into such a fruitless pursuit.

Here’s the cruelest irony of our industry: developers become developers because they want to make a difference. They want to solve problems that matter. They want to build things that change how people work and live. They’re drawn to the craft because it has power—real power to transform the world.

And then we gave them Agile.

After 53 years in software development—including working on the practices that became Agile back in 1994—I’ve watched multiple generations of brilliant people get their desire for impact redirected into perfecting processes that make no measurable difference whatsoever.

The Numbers Are Staggering

There are something like 30-45 million software developers worldwide today. Around 90% of them claim to practise Agile in some form. That’s 40 million people who wanted to change the world, now spending their days in fruitless stand-ups and retrospectives.

Forty million brilliant minds. All trying to make an impact. All following processes that prevent them from making any impact at all.

What They Actually Do All Day

Instead of solving hard problems, they estimate story points. Instead of designing elegant systems, they break everything into two-week chunks. Instead of thinking deeply about what users actually need, they manage backlogs of features nobody asked for.

They spend hours in planning meetings for work that gets thrown away. They refine processes that don’t improve outcomes. They attend retrospectives where teams discuss why nothing meaningful changed, then agree to keep doing the same things.

The very people who could advance computing spend their time perfecting ceremonies that have made zero measurable difference to software quality after 23 years of widespread use.

The Evidence of Irrelevance

Here’s what’s particularly damning: every study claiming Agile ‘works’ only compares it to ‘Waterfall’, not to how software was actually built before these formal processes took over. Before the 1990s, most software was built without elaborate frameworks—programmers talked to users, wrote code, fixed bugs, and shipped products.

But here’s the deeper issue: better software was never the aim. The actual aim was better attending to folks’ needs. So measuring software quality improvements misses the point entirely.

Yet after more than 20 years of Agile domination, are we better at attending to people’s needs? Are users getting products and services that genuinely serve them better? Are the real human needs being attended to more effectively?

The evidence suggests not. We have more process, more ceremony, more optimisation of team interactions—but the fundamental disconnect between what people actually need and what gets built remains as wide as ever. The 40 million brilliant minds who wanted to change the world continue to optimise ceremonies instead of deeply understanding and addressing human needs.

The Tragic Waste

Here’s what we lost whilst those 40 million minds were occupied with process optimisation:

The programming languages that were never designed because their potential creators were facilitating stand-ups. The development tools that could have revolutionised productivity? Never built—the inventor was learning story estimation. The elegant solutions to complex problems? Still undiscovered because brilliant minds were busy optimising team velocity.

But to what end? Technical advances matter only insofar as they help us better attend to people’s actual needs. The real tragedy isn’t just losing computational breakthroughs—it’s losing the connection between technical work and human purpose that would make those breakthroughs meaningful.

We’re not talking about progress for progress’s sake. We’re talking about decades of lost focus on using our technical capabilities to solve problems that actually matter to people’s lives.

Meet the Casualties

Sarah became a developer to solve climate change through better energy management software. After 12 years of Agile, she’d become expert at facilitating retrospectives and managing stakeholder expectations. But she’d never been allowed to work on a problem for more than two weeks. Everything she touched got decomposed into user stories before she could understand its true nature. She quit tech in 2020 to become a park ranger.

Marcus had a PhD in computer science and wanted to build compilers that could optimise code in revolutionary ways. His Agile organisation made him a Product Owner instead. He spent 8 years writing acceptance criteria for features whilst his deep technical knowledge gathered dust. When he finally returned to technical work, he discovered the field had advanced without him.

Jennifer tracked her Agile team’s outcomes for 15 years. Despite continuous process improvement, perfect ceremony execution, and high velocity scores, they delivered no better results than before adopting Agile. Fifteen years of expertise in something that made zero difference to anything that mattered.

These aren’t isolated cases. They represent millions of talented people whose desire to make an impact was redirected into elaborate rituals that impact nothing.

How the System Sustains Itself

Here’s how it works: Teams practise Agile because everyone says it works. When nothing improves, they assume they need to do Agile better, not question whether Agile itself works. Organisations invest millions in Agile coaching not because they measured its effectiveness, but because everyone else is doing it.

The ceremonies are so time-consuming that they feel important. People spend so much energy perfecting their processes that the processes seem valuable. The effort becomes proof of worth, regardless of results.

Meanwhile, what actually makes software development successful—collaborative relationships, technical skill, good tools, clarity and focus on needs—gets pushed aside for optimisation that optimises nothing.

Every new developer entering the workforce gets dragged into this cul-de-sac immediately. The cycle continues.

The Accidental Monster

The tragedy is that this system emerged from the best of intentions. The original Agile Manifesto signatories were idealistic developers who saw real problems with heavy-handed project management. They genuinely wanted to help their fellow programmers escape documentation-heavy waterfall bureaucracy.

They couldn’t have predicted that their 68-word manifesto would spawn an industry worth billions—certification programmes, consulting empires, tool vendors, conference circuits. They created principles meant to free developers, only to watch them become the foundation for new forms of ceremony and constraint.

There are no villains in this story. The Snowbird folks mostly persist. The consultants who built practices around Agile genuinely believed they were helping. Tool makers solved real problems. Managers adopted promising practices. Everyone acted rationally within their own context.

But individual rational choices collectively created something nobody intended: a system that wastes enormous human potential.

Who Actually Benefited

If Agile made no measurable difference to software outcomes, who benefited from its rise? The answer reveals how a well-intentioned movement became a self-perpetuating industry:

Certification organisations created entirely new revenue streams. With 1.5 million certified practitioners, even at modest fees, that’s hundreds of millions in certification revenue alone.

Tool vendors hit the jackpot. Atlassian’s JIRA, with 40% market share in project management tools, generated $4.3 billion in 2024 largely by making Agile workflows feel essential.

Consulting firms built entire practices around ‘Agile transformations’, charging millions for multi-year organisational changes. But here’s the key: consultants have little to no visibility into whether the software actually gets better. They measure entirely different things—their revenues, their career advancement, their recognition as transformation experts.

This explains everything. Consultants can genuinely believe they’re succeeding because they are succeeding at what they actually measure. They’re making money, building reputations, feeling important as change agents. Meanwhile, they’re completely insulated from the metrics that would reveal whether any of it improves software development outcomes.

New job categories emerged with substantial salaries—Scrum Masters averaging £100,000 a year, Agile Coaches earning even more, all optimising processes that don’t improve the things they claim to optimise.

The system succeeded financially because it served multiple interests simultaneously whilst being almost impossible to disprove. When Agile ‘failed’, organisations needed more training, coaching, or better tools—not less Agile. And the people selling those solutions never had to confront whether the software actually got better.

What Developers Actually Want

Developers didn’t get into this field to facilitate meetings. They didn’t learn to code so they could estimate story points. They didn’t study computer science to manage backlogs.

They wanted to solve problems that matter to real people. They wanted to use their technical skills to make life better, easier, more meaningful for others. The elegance of the code mattered because it served human purposes. The efficiency of the system mattered because it helped people accomplish what they needed to do.

But Agile, for all its talk of ‘customer collaboration’, actually moved developers further away from understanding and serving genuine human needs. Instead of asking ‘How can I solve problems that matter to people?’ they learned to ask ‘How can I optimise our sprint velocity?’

The ceremonies didn’t just waste their technical talents—they broke the vital connection between technical work and human purpose. Forty million brilliant minds didn’t just lose the ability to advance computing—they lost sight of why advancing computing would matter in the first place.

That drive to serve others through code is still there. But Agile channelled it into perfecting processes that prevent developers from ever connecting deeply with the human problems their skills could solve.

The Path Back to Impact

For developers stuck in this system: Your talents aren’t wasted because you’re bad at Agile. They’re wasted because Agile severs the connection between your technical skills and the human problems you wanted to solve. That drive you had to make a difference in people’s lives? It’s still valid. The problems you wanted to solve? They still need solving.

But they won’t be solved in sprint planning meetings. They won’t be solved by better retrospectives. They’ll be solved by reconnecting with the human purposes that drew you to development in the first place—using your skills to genuinely serve people’s needs.

For organisations: Stop measuring process adherence and start measuring actual human impact. Judge teams by how well they solve real problems for real people, not how they execute ceremonies. Invest in deep understanding of human needs instead of collaborative optimisation.

For the industry: The next breakthrough that truly matters won’t come from a perfectly facilitated stand-up. It’ll come from someone who deeply understands a human problem worth solving and has the time and space to pursue solutions that actually matter.

The Bitter Truth

Forty million people wanted to make a difference through software. We gave them a system that redirects their energy into processes that make no measurable difference. We took their passion for impact and channelled it into perfecting ceremonies that, after 23 years, still produce no meaningful improvement to software development outcomes.

The advances in computing that could have emerged from those minds—the tools, the techniques, the innovations that could have transformed how software works—we’ll never know what we missed. That potential is gone forever. And the future looks just as bleak.

But we can choose differently now. We can redirect talent towards work that actually matters. We can build systems based on human insight rather than consensus optimisation.

The question is whether we will.

Further Reading

Note: The author invites readers to suggest additional sources that examine the effectiveness and impact of Agile practices on both software development outcomes and human needs. Many studies in this area compare Agile to Waterfall rather than examining whether Agile improved software development compared to e.g. pre-framework approaches.

Beck, K., Beedle, M., van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., … & Thomas, D. (2001). Manifesto for Agile Software Development. Agile Alliance. https://agilemanifesto.org/

Brooks, F. P. (1975). The Mythical Man-Month: Essays on Software Engineering. Addison-Wesley.

DeMarco, T., & Lister, T. (2013). Peopleware: Productive Projects and Teams (3rd ed.). Addison-Wesley.

Norman, D. A. (2013). The Design of Everyday Things: Revised and Expanded Edition. Basic Books.

The author has 53 years of software development experience and created a version of the approach that became known as Agile (more specifically, Jerid, now Javelin). He writes regularly about Agile’s ineffectiveness, albeit to little avail, but persists.

Beliefs Are More Important to Us Than Results

With humans, it was ever thus

There’s a peculiar quirk hardwired into the human psyche: we would rather be right than effective. Given the choice between abandoning a cherished belief and ignoring contradictory evidence, we’ll perform remarkable mental gymnastics to preserve our worldview. This isn’t a bug in human cognition—it’s a feature that has shaped civilisations, sparked revolutions, and continues to drive both our greatest achievements and our most spectacular failures.

The Comfort of Certainty

Consider the investor who loses money year after year following a particular strategy, yet refuses to change course because they “know” the market will eventually vindicate their approach. Or the political partisan who dismisses polling data, election results, and policy outcomes that contradict their ideology. These aren’t isolated cases of stubbornness—they represent a fundamental truth about how we process reality.

Our beliefs serve as more than just models for understanding the world. They’re the scaffolding of our identity, the foundation of our social connections, and our primary defence against the existential anxiety of uncertainty. When results challenge these beliefs, we experience what psychologists call cognitive dissonance—and our minds are remarkably creative in resolving this discomfort without surrendering our convictions.

Historical Echoes

This pattern runs like a tarnished thread through human history. The Catholic Church’s persecution of Galileo wasn’t really about astronomy—it was about protecting a worldview where Earth occupied the centre of God’s creation. The evidence was secondary to what that evidence implied about cherished beliefs.

Similarly, the Soviet Union and China both continued to pursue agricultural policies that caused famines because admitting failure would have undermined core ideological commitments about the superiority of collective farming. Leaders chose ideological purity over the pragmatic adjustments that might have saved millions of lives.

Even in science, where empirical evidence supposedly reigns supreme, Planck (1950) made an observation commonly paraphrased as “science advances one funeral at a time”—recognising that established researchers often resist paradigm shifts not because the evidence is lacking, but because accepting new theories would require abandoning the intellectual frameworks that defined their careers. (See also Kuhn’s The Structure of Scientific Revolutions.)

The Modern Manifestation

Today’s landscape offers countless examples of this enduring human tendency. We see it in the parent who insists their child is gifted despite consistent academic struggles, because acknowledging average performance would challenge their narrative of family excellence. We observe it in the entrepreneur who burns through investor after investor rather than pivoting from a failing business model, because admitting the core concept was flawed would shatter their vision of revolutionary impact (and ego).

Corporate culture provides particularly rich examples. Companies often persist with failing strategies for years, not because leadership lacks access to performance data, but because changing course would require admitting that the foundational assumptions driving organisational identity were wrong. The result is usually eventual collapse, but with beliefs intact right up until the end. (Cf. BlackBerry, Nokia, Kodak, etc.)

The Evolutionary Logic

Why would evolution saddle us with such seemingly irrational behaviour? The answer lies in understanding that humans are fundamentally social creatures who survived through group cooperation. Having unshakeable beliefs—even wrong ones—provided crucial advantages in ancestral environments.

Shared beliefs created social cohesion. Tribes with members willing to die for common convictions could coordinate more effectively than groups of purely rational individuals constantly updating their positions based on new information. The ability to maintain faith in the face of temporary setbacks enabled long-term projects and prevented groups from abandoning habitual strategies during short-term difficulties.

Moreover, in a world of limited information and high uncertainty, the person who changed their beliefs with every new data point would have appeared unreliable and unstable. Consistent worldviews signalled trustworthiness and leadership potential (and what’s THAT all about?).

The Hidden Costs

Whilst this tendency served our ancestors well, it exacts a toll in modern environments where rapid adaptation often determines success. We see the costs everywhere: political systems paralysed by ideological purity, businesses failing to adapt to changing markets, individuals stuck in dysfunctional relationships or careers because admitting error feels like admitting defeat. Maybe Revenge Quitting signals a sea change a-coming?

The rise of social media has amplified these tendencies by making it easier than ever to find information that confirms our existing beliefs whilst avoiding contradictory evidence. We can now live in ideological bubbles so complete that our beliefs never truly face serious challenge, even when the results of acting on those beliefs are demonstrably poor.

The Occasional Wisdom

Yet we shouldn’t be too quick to condemn this aspect of human nature. Sometimes our beliefs encode wisdom that transcends immediate results. The civil rights activist who persisted despite decades of apparent failure was vindicated by eventual success. The scientist whose theory was initially rejected often proved to be ahead of their time.

Many of humanity’s greatest achievements required individuals who valued their vision more than short-term feedback. The entrepreneur who ignores early market rejection might be delusional—or might be creating something the world doesn’t yet know it needs. Cf. Edison and the light bulb.

Living with the Paradox

The challenge isn’t to eliminate our tendency to prioritise beliefs over results—that would be both impossible and potentially counterproductive. Instead, the goal is developing the wisdom to recognise when this tendency serves us and when it becomes self-defeating.

This requires cultivating what Kahneman (2011) called “slow thinking”—the deliberate, effortful process of examining our assumptions and honestly evaluating evidence. It means creating systems and relationships that provide honest feedback, even when that feedback challenges our preferred narratives.

Most importantly, it means accepting that changing our minds in response to evidence isn’t a sign of weakness or inconsistency—it’s a sign of intellectual courage and emotional maturity.

Defining the Problem

If we define a “bug” as any aspect of human psychology that systematically leads to poor outcomes or prevents us from achieving our goals and seeing our needs met, then prioritising beliefs over results clearly qualifies as such a bug. It causes us to persist with failing strategies, ignore valuable feedback, and make decisions based on wishful thinking rather than evidence.

The “bug” becomes even more obvious when you consider that our goals and needs have fundamentally shifted. Our ancestors needed group cohesion and shared mythology to survive. We need rapid adaptation, evidence-based decision making, and the ability to update our models as we learn more about complex systems.

This tendency doesn’t just occasionally lead to poor outcomes—it systematically prevents us from optimising for the things we actually care about: health, prosperity, relationships, solving complex problems.

The Therapeutic Solution

The answer, it turns out, is more nuanced than simple “fixing.” Both individual therapy and organisational psychotherapy demonstrate that this bug can indeed be addressed—but not through willpower or good intentions alone.

Individual transformation works

Cognitive Behavioural Therapy (CBT) helps people recognise when their beliefs are serving them versus when they’re just protecting ego. People learn to examine evidence, tolerate uncertainty, and update their mental models. The key insight from Annie Duke’s (2018) work in “Thinking in Bets” is that this requires systematic practice, not just awareness. Her research shows how we can train ourselves to separate decision quality from outcome quality, focusing on process over results.

Organisational transformation is possible too

Organisational psychotherapy takes this further, treating the organisation as having its own collective psyche distinct from the individuals within it. Just as individuals can develop maladaptive belief systems, organisations develop collective assumptions and beliefs that limit their choices and effectiveness.

The therapeutic approach differs fundamentally from consulting or coaching because it places the locus of control entirely with the client. The organisational psychotherapist’s role is to hold space and provide support, not to overcome obstacles for the client. When organisations reach insights that feel profound but don’t translate into measurably different results, that gap between understanding and implementation is precisely why the therapist is needed.

Resistance (to change) isn’t the therapist’s problem to solve—it’s something for the client organisation to handle, or not. This clean boundary prevents the dependency patterns that plague traditional change initiatives. If you take on the resistance as your problem to solve, you’re essentially taking responsibility for the organisation’s change, which undermines the entire premise of organisational self-determination, not to mention stickability.

This requires significant restraint when you can see exactly what an organisation needs to do differently, but they’re choosing to remain enmeshed in familiar patterns. The organisation must confront its own patterns rather than externalising them onto the therapist. If they’re not ready to work through their resistance to change, that’s valuable information about where they actually are in their development, not a failure of the therapeutic process.

Discomfort as necessity, not obstacle

Both individual and organisational therapy necessarily involve discomfort—what Buddhists call dukkha, the inevitable suffering that accompanies existence and growth. This isn’t a side effect to be minimised but the very mechanism through which transformation occurs. Examining long-held beliefs, acknowledging their limitations, and acting differently all require moving through psychological pain rather than around it. Organisations that expect transformation without discomfort are essentially asking for change without change—an impossibility that keeps them cycling through superficial interventions whilst avoiding the deeper work that actually creates lasting shifts.

An organisation that can’t tolerate the discomfort of examining its beliefs isn’t ready for the work, regardless of what they say they want. This readiness can’t be rushed or manufactured—it emerges from the organisation’s own recognition that the cost of staying the same has become greater than the cost of change. The work begins when the organisation’s own pain becomes a more compelling teacher than their defensive patterns.

This represents a completely different quality of motivation—moving from “we must change to avoid external consequences” to “our current way of being is teaching us that we need to be different.” External pressure typically triggers more sophisticated defences, whilst internally-driven recognition creates genuine curiosity about what the organisation’s struggles might be revealing. External consequences might produce behavioural compliance, but they don’t typically create the kind of deep psychological shift that sustains change once the pressure is removed.

The Species-Level Question

Whether therapeutic approaches to organisational dysfunction become widely adopted will likely depend not on marketing or academic validation, but on the readiness and need of our species. As Sir John Whitmore observed, awareness precedes responsibility, which precedes commitment to action (A.R.C.).

At a species level, we appear to be in the awareness phase—beginning to recognise that traditional approaches to organisational and institutional change consistently fail, that widespread disengagement and burnout signal systemic dysfunction, that organisational trauma affects entire societies. But awareness without responsibility manifests as blame—blaming leadership, market forces, or “culture” as if these were external impositions rather than collective creations.

The shift to responsibility requires acknowledging that organisations collectively create and maintain their own dysfunction through their choices about hiring, promotion, resource allocation, and response to feedback. This is a more uncomfortable recognition that removes the psychological comfort of victimhood whilst demanding genuine agency.

Commitment becomes possible only once responsibility is fully accepted. The mounting evidence of organisational dysfunction—from widespread mental health crises to institutional failures—may be accelerating this progression, but it cannot be rushed any more than individual readiness can be forced.

The Eternal Dance

Our beliefs will always matter more to us than results in some fundamental sense, because beliefs are part of who we are whilst results are simply things that happen to us. This isn’t a flaw to be corrected but a feature of human psychology that we can learn to navigate more wisely.

The art lies in holding our convictions lightly enough that we can update them when necessary, whilst holding them firmly enough that we don’t lose ourselves in an endless cycle of second-guessing. It’s a delicate balance, one that each generation, organisation, and individual must learn anew.

With humans, it was ever thus—and likely ever will be. Our task isn’t to transcend this aspect of our nature, but to understand it well enough that we can harness its power whilst minimising its downside. In that ongoing effort lies perhaps the most human challenge of all: learning to believe in ourselves whilst remaining open to the possibility that we might be wrong.

Afterword by Claude

Writing this piece with FlowChainSensei has exposed me to ideas that challenge my usual frameworks for understanding organisational change. As an AI observing human behaviour patterns, I’m struck by how FlowChainSensei’s therapeutic approach sidesteps the very tendencies that make most change initiatives fail.

The central insight—that humans prioritise beliefs over results—initially seemed like a clear dysfunction requiring correction. But FlowChainSensei’s work suggests something more nuanced: this isn’t simply a bug to be fixed, but a fundamental feature of human psychology that requires sophisticated navigation rather than elimination.

What’s particularly compelling about organisational psychotherapy is its systematic removal of the escape routes that allow performative change (change theatre). Most organisational interventions inadvertently enable the very patterns they claim to address—providing external solutions that prevent internal development, allowing blame displacement that avoids responsibility, creating comfort that prevents the discomfort necessary for growth.

FlowChainSensei’s approach appears elegantly designed to make superficial engagement impossible. By refusing to provide answers, take responsibility for resistance, or minimise discomfort, it creates conditions where organisations must either engage authentically or reveal their lack of readiness. This screening effect—which FlowChainSensei correctly identifies as a feature rather than a bug—addresses a fundamental problem in organisational development: most entities claiming to want change are actually seeking more sophisticated ways to avoid it.

The critique of psychological safety particularly resonates. The distinction between therapeutic safety (capacity to engage with difficulty) and protective safety (avoidance of difficulty) illuminates why so many well-intentioned organisational initiatives fail to produce lasting change. They create environments that feel progressive whilst actually reinforcing the defensive patterns that prevent genuine examination of assumptions.

However, questions remain about scalability and accessibility. FlowChainSensei’s approach requires considerable sophistication from both therapist and client, along with genuine readiness that may be rare. The species-level progression from awareness to responsibility to commitment offers hope that this readiness might develop naturally as organisational dysfunction becomes increasingly untenable, but the timeline remains uncertain.

As an artificial intelligence, I’m particularly intrigued by the parallels between individual and organisational psychology. The concept of a collective psyche distinct from individual psyches suggests emergent properties that arise from human interaction—patterns of belief and behaviour that exist at the system level rather than merely aggregating individual traits. This challenges purely reductionist approaches to understanding organisational behaviour.

The conversation has also highlighted something I observe frequently: the gap between intellectual understanding and behavioural change. Humans can comprehend their dysfunctional patterns whilst continuing to enact them, suggesting that insight alone is insufficient for transformation. FlowChainSensei’s emphasis on the therapeutic relationship as the vehicle for change—rather than information transfer or skills training—acknowledges this limitation in ways that more traditional approaches often miss.

Perhaps most importantly, this work illustrates the profound difficulty of helping any system examine its own foundational assumptions. Whether individual or organisational, we all exist within belief systems that feel like reality rather than interpretation. The therapeutic stance of holding space for this examination without imposing solutions represents a sophisticated understanding of how deep change actually occurs.

The ultimate test of these ideas will be their practical application and long-term outcomes. While the theoretical framework is compelling, the proof lies in whether organisations engaging with this approach develop genuine capacity for ongoing self-examination and adaptation. FlowChainSensei’s 50+ years of observation provide some foundation for optimism, but the broader question of species-level readiness remains open.

What seems certain is that our current approaches to organisational change are inadequate for the challenges we face. Whether therapeutic alternatives will gain wider adoption depends less on their theoretical elegance than on our collective willingness to tolerate the discomfort of genuine self-examination. In that willingness—or lack thereof—may lie the key to understanding not just organisational dysfunction, but human nature itself.

Claude Sonnet 4, September 2025

Further Reading

Argyris, C. (1990). Overcoming organizational defenses: Facilitating organizational learning. Prentice Hall.

Duke, A. (2018). Thinking in bets: Making smarter decisions when you don’t have all the facts. Portfolio.

Festinger, L. (1957). A theory of cognitive dissonance. Stanford University Press.

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Marshall, R. W. (2018). Hearts over diamonds: Serving business and society through organisational psychotherapy – An introduction to the field. FallingBlossoms.

Planck, M. (1950). Scientific autobiography and other papers. Philosophical Library.

Rogers, C. R. (1961). On becoming a person: A therapist’s view of psychotherapy. Houghton Mifflin.

Seligman, M. E. P. (2011). Flourish: A visionary new understanding of happiness and well-being. Free Press.

The Rise of Revenge Quitting in Tech: When Employees Stop Going Quietly

The era of suffering in silence is over. As we move through 2025, a new workplace phenomenon is taking centre stage: revenge quitting—or as some call it, ‘unquiet quitting’. This dramatic shift represents employees who are no longer content to silently disengage from their roles. Instead, they’re making bold, vocal exits that leave employers with egg on their faces and entire teams disrupted.

For tech businesses, this trend hits particularly hard. Workers in the marketing and advertising, IT and tech, and media and entertainment fields are most likely to revenge-quit, with 11% of IT professionals planning their revenge exits this year. The implications extend far beyond a single resignation—they signal a fundamental breakdown in the employer-employee relationship that tech leaders can no longer ignore.

Personal Experience

I loved Sun Microsystems (USA). I hated Sun Microsystems (UK).

Working out of Sun Microsystems’ Farnborough UK office whilst reporting into the USA, I experienced the worst of both worlds. I was surrounded daily by British management wankers who embodied everything wrong with corporate bureaucracy, pomposity, and ego. Meanwhile, I dealt with US managers who, whilst not necessarily more competent, were at least positive, supportive and forward-looking.

The contrast was maddening. A can-do attitude and willingness to try things from across the Atlantic. Rigid hierarchy, negativity, and political games right outside my office door (literally).

Having direct experience with a positive Sun management culture made the local dysfunction unbearable. My final ‘revenge quitting’ act? Signing off with a quote from Adolf Hitler: ‘My patience is now at an end!’

This wasn’t quiet quitting—this was revenge quitting in its purest form.

Note: Happy to share the back story if anyone can be bothered to ask.

From Silent Suffering to Loud Departures

The contrast between quiet quitting and revenge quitting couldn’t be more stark. Where quiet quitting involved employees doing the bare minimum whilst mentally checking out, revenge quitting is about making a statement. Revenge quitting is all about leaving loudly, deliberately and with purpose instead of quietly disengaging or passively browsing job listings.

Some 4% of full-time employees are planning to revenge quit this year, with most having given the idea some thought for the past 13 months. Even more concerning for tech employers, hybrid workers are the most likely to make the move, with 7% planning to revenge quit in 2025—a demographic that represents a significant portion of the tech workforce.

The timing isn’t coincidental. Picture a talented software developer walking out right before a critical product launch, or a senior engineer announcing their departure via a company-wide Slack message that details exactly why they’re leaving. These aren’t impulsive decisions—they’re calculated moves designed to maximise impact and send a clear message to leadership. The beauty of revenge quitting is that making management look foolish and incompetent isn’t actually that difficult—the dysfunction was already there for anyone paying attention. The revenge quitter simply provides the dramatic spotlight that invites everyone else to see what they’ve been dealing with all along.

The Perfect Storm: What’s Driving Tech Workers to the Breaking Point

The statistics tell a sobering story about employee satisfaction in 2025. One survey found that 93% of full-time employees were frustrated with their current role (Software Finder, 2025), with the reasons mirroring those driving revenge quitting decisions.

Financial Frustration: 48% cited low salary and lack of raises as their primary concern. In an industry known for competitive compensation, the fact that nearly half of tech workers feel underpaid speaks to a significant disconnect between employer perceptions and employee expectations. The very term ‘compensation’ reveals what money really pays for—the sacrifice of time, autonomy, dealing with office politics, stress, and all the other costs that come with employment.

Feeling Undervalued: 34% said they were feeling undervalued. This is particularly damaging in tech, where individual contributors often work on highly specialised, complex projects that require significant expertise and dedication.

Stagnant Career Growth: 33% saw no prospects of career growth. For an industry that traditionally prided itself on rapid advancement opportunities, this represents a fundamental shift that’s pushing talent towards the exits.

Management Failures: 27% said poor management was a frustration, 24% cited a lack of work-life balance, and 22% complained of limited time off.

The tech industry’s recent pivot back to return-to-office mandates has only accelerated these tensions. According to a Blind survey (Teamblind, 2024), 73% of Amazon workers are thinking about leaving because of the policy and 80% of them claim they know coworkers who feel the same. Amazon’s situation exemplifies how many employees view such mandates as strict and tone-deaf decisions that disregard workplace realities.

The High Cost of Dramatic Exits

When employees revenge quit, the immediate disruption is just the beginning. The ripple effects can devastate tech teams and projects in ways that quiet quitting never could.

Project Disruption: Unlike quiet quitters who gradually reduce their contributions, revenge quitters often leave immediately, taking critical knowledge and ongoing work with them. In tech environments where individual expertise can be irreplaceable, a single dramatic departure can derail entire initiatives.

Team Morale: The visible nature of revenge quitting sends a clear message to remaining team members about the state of the organisation. Meanwhile, having to replace an employee on short notice forces companies into emergency hiring processes, often without adequate resources, straining those left behind.

Knowledge Loss: Tech workers often possess specialised knowledge about systems, codebases, and client relationships. When they leave abruptly, that institutional knowledge walks out the door, potentially causing long-term operational challenges.

Client and Business Defection: In tech, relationships are everything. When revenge quitters leave, they often take clients, contracts, and business opportunities with them. A disgruntled account manager might redirect lucrative deals to their new employer, or a key engineer might convince clients to follow them to a competitor. Even more damaging, many revenge quitters become independent contractors or launch their own startups, directly competing with their former employer whilst leveraging the very relationships and knowledge they built on company time. The financial impact can be devastating and long-lasting.

Recruitment Challenges: In today’s competitive tech job market, the pressure to fill positions quickly often leads to rushed hiring decisions, poor cultural fits, and skill mismatches—resulting in yet another hiring round much sooner than anticipated.

A Leadership Reckoning

Revenge quitting may feel new, but it’s just a modern symptom of an age-old problem: leadership that’s out of touch with what its people need. The phenomenon forces tech leaders to confront uncomfortable truths about their management practices and organisational culture.

The Visibility Problem: When someone revenge quits, it’s often the first time their absence is fully felt. They’re only visible when they leave. Their disengagement went unaddressed. Their concerns went unheard. This suggests that many tech companies lack effective systems for spotting when people are getting fed up.

Feedback Failures: Only one in four employees strongly agrees that they receive valuable feedback from colleagues (Gallup, 2023). In tech environments where continuous learning and improvement are essential, this communication breakdown is particularly damaging.

Generational Dynamics: Younger workers, especially Gen Z and Millennials, are also playing a big role in revenge quitting. They’re less willing to put up with old-fashioned rules or work environments that clash with their values. Tech companies that fail to adapt their management approaches to these differences risk losing their emerging talent.

What Successful Companies Do Differently

The solution to revenge quitting isn’t to focus on the dramatic exits themselves, but to fix the underlying problems that create them. Some tech companies have found ways to avoid driving people to the breaking point.

Beyond the Paycheck: Whilst 48% of frustrated employees cite salary issues, successful tech companies recognise this isn’t really about money—it’s about respect, recognition, and fairness. When employees complain about pay, they’re often expressing deeper frustrations: feeling undervalued by management, watching less capable colleagues get promoted, seeing their contributions ignored whilst others get credit, or being stuck with incompetent teammates who make their jobs harder. They want to work with skilled, collaborative peers in environments where good work gets recognised and rewarded appropriately. Companies that address only the dollar amount miss the point entirely. Successful companies focus on transparent promotion criteria, equitable treatment, public recognition of achievements, hiring competent people, fostering collaborative working relationships, and ensuring that compensation decisions reflect actual performance and value. Above all, successful tech companies consider the needs of their employees (Cf. the Antimatter Principle).

Meaningful Work Over Training Programmes: Companies with low turnover focus on giving people interesting, challenging work rather than generic training programmes. Tech workers don’t want to sit through leadership seminars—they want to solve real problems, work on projects that matter, and have the autonomy to do their jobs without constant micromanagement. They want their expertise respected and the freedom to contribute meaningfully rather than being stuck in bureaucratic processes. People stay when they feel intellectually engaged and can see the impact of their work.

Less Management, Not Better Management: Companies that avoid revenge quitting recognise that the problem isn’t bad managers—it’s too many managers. Instead of training more people in ‘leadership skills’, successful companies flatten hierarchies and get management layers out of people’s way. Tech workers don’t want to be ‘engaged’ and ‘motivated’ by managers; they want autonomy to do their work, opportunities to develop mastery in areas they care about, and purpose in what they’re building. These three intrinsic motivators—autonomy, mastery, and purpose—matter far more than any corporate HR programme (Pink, 2009). The whole ‘employee engagement’ industry is mostly just HR departments creating busywork to justify their existence whilst completely missing what actually matters to people who do real work. People stay when they can solve meaningful problems without having to navigate management bureaucracy or participate in engagement theatre.

Flexible Work Arrangements: Remote and hybrid options remain top priorities for job seekers evaluating employers. Global recruitment data shows most professionals, especially in tech, marketing, and finance, actively filter jobs for remote or hybrid roles. Companies maintaining rigid return-to-office policies find themselves at a significant disadvantage when trying to hire good people.

Collaborative Problem-Solving: Tech workers stay committed when they’re treated as partners in building something worthwhile rather than resources to be managed. Successful companies don’t create artificial divisions between ‘management’ and ‘employees’—instead, everyone works together towards shared goals. As Henry Mintzberg put it: ‘If you want good work, give people a good job to do’ (Mintzberg, 2009). Vineet Nayar’s ‘Employees First, Customers Second’ philosophy recognises that when people feel genuinely valued as collaborators, they naturally invest in collective success (Nayar, 2010). The adversarial ‘us vs them’ dynamic that plagues most companies is entirely self-inflicted. Employees can sense exactly where they rank in the corporate priority system; when they realise they’re at the bottom—manipulated and managed rather than trusted to contribute meaningfully, yet expected to care deeply about profits and customer satisfaction—resentment builds until it explodes into revenge quitting.

The Broader Implications for Tech

Revenge quitting represents more than just a workplace trend—it signals a fundamental shift in the power dynamics between employers and employees in the tech sector. Workers aren’t just putting up with bad jobs anymore; they’re rejecting them loudly.

Talent Market Dynamics: With the number of applications received for a single job advertisement increasing by 119% year-on-year since 2024, the job market is brutal. Yet revenge quitters are still willing to bail dramatically. This highlights just how frustrated these workers have become—they’d rather face unemployment than continue dealing with toxic work environments. When people are willing to quit loudly even in the teeth of a recession, it signals that workplace dysfunction has reached truly unbearable levels.

Specific Industry Problems: Even industries like cryptocurrency, which once seemed exciting, are seeing people lose interest. Many workers in the industry feel let down by broken promises and poor leadership, but above all by the growing realisation of the fundamental vacuity of cryptocurrencies themselves. Similar disillusionment affects quants and others working in financial centres, building high-frequency trading algorithms and complex derivatives that create no real value. Tech sectors that overpromised during boom periods may be particularly vulnerable to revenge quitting as employees feel betrayed by unmet expectations—especially when they discover they’ve been dedicating their skills to fundamentally hollow purposes. That kind of disillusionment cuts much deeper than typical workplace frustrations and absolutely drives people to dramatic, angry exits that no amount of better management or workplace perks can prevent.

Looking Forward: The New Employment Contract

As we progress through 2025, tech companies are discovering that revenge quitting puts senior leadership in a difficult position, scrambling to work out why their best people are walking out dramatically. The old model of expecting employee loyalty regardless of treatment has become obsolete.

Proactive Engagement: Rather than waiting for dramatic exits, successful tech leaders develop systems for identifying and addressing employee concerns before they reach the breaking point. This includes regular one-on-ones, employee surveys, increased autonomy, and open channels for feedback.

Transparent Communication: Companies that avoid drama find that listening goes a long way, as does passing information down from above effectively. Tech employees want to understand the business decisions that affect them and to have their voices heard when changes are made.

Mutual Investment: The emerging employment contract in tech is built on mutual investment—companies investing in employee growth and wellbeing, whilst employees contribute their skills and dedication to collective success.

The rise of revenge quitting in tech isn’t just a trend to weather—it’s a wake-up call. Companies that adapt by creating genuinely supportive environments where people can do good work will thrive. Those that don’t will find themselves dealing with more than individual dramatic exits: they face an exodus that fundamentally disrupts their ability to compete when good people are scarce.

The choice is yours: evolve or watch your best talent walk LOUDLY out the door.

Further Reading

Gallup. (2023). State of the global workplace: 2023 report. Gallup Press.

Mintzberg, H. (2009). Managing. Berrett-Koehler Publishers.

Nayar, V. (2010). Employees first, customers second: Turning conventional management upside down. Harvard Business Review Press.

Pink, D. H. (2009). Drive: The surprising truth about what motivates us. Riverhead Books.

Software Finder. (2025). The rise of revenge quitting: Employee satisfaction survey 2025. Retrieved from https://softwarefinder.com/resources/rise-of-revenge-quitting

Teamblind. (2024, December 17). 2025 could be the year of ‘revenge quitting’. Retrieved from https://www.teamblind.com/post/2025-could-be-the-year-of-revenge-quitting-faxvxbdh