The Uncomfortable Truth: Why Developer Training Is a Waste of Time

There’s an entire industry built around “improving” software developers. Conferences, workshops, bootcamps, online courses, books, certifications—billions of dollars spent annually on the promise that if we just train developers better, we’ll get better software. It’s time to say what many of us have privately suspected: it’s all just theatre.

Here’s why investing in developer training is increasingly pointless, and why organisations would be better served directing those resources elsewhere:

  1. Nobody’s actually interested in improvement
  2. Developers don’t control what actually matters
  3. GenAI has fundamentally changed the equation

Let’s examine each of these uncomfortable truths.

1. Nobody’s Actually Interested in Improvement

Walk into any development team and ask who wants to improve their craft. Hands will shoot up enthusiastically. Now watch what happens over the next six months. The conference budget goes unused. The book club fizzles after two meetings. The internal tech talks attract the same three people every time. The expensive training portal shows a login rate of less than 15%. Personal note: I have seen this myself time and again in client organisations.

The uncomfortable reality is that most developers have found their comfort zone and have little to no genuine interest in moving beyond it. They’ve learned enough to be productive in their current role, and that’s sufficient. The annual performance review might require them to list “professional development goals” but these are box-checking exercises, not genuine aspirations. When developers do seek training, it’s often credential-seeking behaviour—resume-building for the next job search, a.k.a. mortgage-driven development, not actual skill development for their current role.

This isn’t unique to software development. In most professions, once practitioners reach competence, the motivation for continued improvement evaporates. The difference is that in software, we’ve created an elaborate fiction that continuous learning is happening when it definitely isn’t. The developers who genuinely seek improvement are self-motivated outliers who would pursue it regardless of organisational investment. They don’t need your training programs; they’re already reading papers, experimenting with new technologies, and pushing boundaries on their own time.

2. Developers Have No Control Over What Actually Matters

Even if a developer emerges from training enlightened about better practices, they return to an environment that makes applying those practices simply impossible. They’ve learned about continuous deployment, but the organisation requires a three-week approval process for production releases. They’ve studied domain-driven design, but the database schema was locked in five years ago by an architecture committee. They’ve embraced test-driven development, but deadlines leave no time for writing tests, and technical debt is an accepted way of life.

The factors that most impact software quality—architecture decisions, technology choices, team structure, deadline pressures, hiring practices, organisational culture, the social dynamic—are entirely outside individual developers’ control. These are set by management, architecture boards, or historical accident. Having developers trained in excellent practices but embedded in a dysfunctional system is like teaching someone Olympic swimming techniques and then asking them to compete while chained to a cinder block. (See also: Deming’s Red Bead experiment.)

Moreover, the incentive structures in organisations reward maximising bosses’ well-being, not, say, writing maintainable code. Developers quickly learn that the skills that matter for career advancement are political navigation, project visibility, stakeholder management and sucking up—not technical excellence. Training developers in better coding practices while maintaining perverse incentives is simply theatre that lets organisations feel good about the charade of “investing in people” while changing absolutely nothing that matters.

3. GenAI Has Fundamentally Changed the Equation

The emergence of generative AI has rendered much of traditional developer training obsolete before it’s even delivered. When Claude or GPT can generate boilerplate code, explain complex algorithms, refactor legacy systems, and even architect solutions, what exactly are we training developers to do? (Maybe AI has a more productive role to play in helping developers maximise their bosses’ well-being).

The skills we’ve traditionally taught—memorising syntax, understanding framework details, knowing design patterns, debugging techniques—are precisely the skills that AI handles increasingly well. We’re training developers for skills that are being automated even as we conduct the training. The half-life of technical knowledge has always been short in software, but AI has accelerated this to the point of absurdity. By the time a developer completes a course on a particular framework or methodology, AI tools have already internalised that knowledge and can apply it faster and more consistently than any human (usual AI caveats apply).

The argument that developers need to “understand the fundamentals” to effectively use AI is wishful thinking from an industry trying to justify its existence. Junior developers are already shipping production code by describing requirements to AI and validating outputs. The bottleneck isn’t their understanding—it’s organisational factors like the social dynamic, relationships, requirements clarity and system architecture. Training developers in minutiae that AI handles better is like training mathematicians to use slide rules in the calculator age.

The Hard Truth

The developer training industry persists not because it works, but because it serves organisational needs that have nothing to do with actual improvement. It provides HR with checkboxes for professional development requirements. It gives managers a feel-good initiative to tout in interviews and quarterly reviews. It offers developers a sanctioned way to take a break from the grind. Everyone benefits except the balance sheet.

If organisations genuinely wanted better software, they’d stop pouring money into training programs and start fixing the systems that prevent good work: rigid processes, unrealistic deadlines, toxic relationships, flawed shared assumptions and beliefs, and misaligned incentives. They’d hire fewer developers at higher salaries, giving them the time and autonomy to do quality work. They’d measure success by folks’ needs met rather than velocity and feature count. But that would require admitting that the problem isn’t the developers—it’s everything else. And that’s a far more uncomfortable conversation than simply booking another training workshop.

The Comfortable Lie: Why We Don’t Actually Learn From Our Mistakes

We love a good comeback story. The entrepreneur who failed three times before striking it rich. The developer who learnt from a catastrophic production incident and never made ‘that mistake’ again. We tell these stories because they’re comforting—they suggest that failure has a purpose, that our pain is an investment in wisdom.

But what if this narrative is mostly fiction? What if, in the contexts where we most desperately want to learn from our mistakes—complex, adaptive systems like software development—it’s not just difficult to learn from failure, but actually impossible in any meaningful way?

The Illusion of Causality

Consider a typical software development post-mortem. A service went down at 2 AM. After hours of investigation, the team identifies the culprit: an innocuous configuration change made three days earlier, combined with a gradual memory leak, triggered by an unusual traffic pattern, exacerbated by a caching strategy that seemed fine in testing. The conclusion? ‘We learnt that we need better monitoring for memory issues and more rigorous review of configuration changes.’

But did they really learn anything useful?

The problem is that this wasn’t a simple cause-and-effect situation. It was the intersection of dozens of factors, most of which were present for months or years without issue. The memory leak existed in production for six months. The caching strategy had been in place for two years. The configuration change was reviewed by three senior engineers. None of these factors alone caused the outage—it required their precise combination at that specific moment.

In complex adaptive systems, causality is not linear. There’s no single mistake to point to, no clear lesson to extract. The system is a web of interacting components where small changes can have outsized effects, where the same action can produce wildly different outcomes depending on context, and where the context itself is always shifting.

The Context Problem

Here’s what makes this especially insidious: even if we could perfectly understand what went wrong, that understanding is locked to a specific moment in time. Software systems don’t stand still. By the time we’ve finished our post-mortem, the team composition has changed, two dependencies have been updated, traffic patterns have evolved, and three new features have been deployed. The system we’re analysing no longer exists.

This is why the most confident proclamations—’We’ll never let this happen again’—are often followed by remarkably similar failures. Not because teams are incompetent or negligent, but because they’re trying to apply lessons from System A to System B, when System B only superficially resembles its predecessor. The lesson learnt was ‘don’t deploy configuration changes on Fridays without additional review’, but the next incident happens on a Tuesday with a code change that went through extensive review. Was the lesson wrong? Or was it just irrelevant to the new context?

The Narrative Fallacy

Humans are storytelling machines. When something goes wrong, we instinctively construct a narrative that makes sense of the chaos. We identify villains (the junior developer who made the change), heroes (the senior engineer who diagnosed the issue), and a moral (the importance of code review). These narratives feel true because they’re coherent.

But coherence is not the same as accuracy. In the aftermath of failure, we suffer from hindsight bias—knowing the outcome, we see a clear path from cause to effect that was never actually clear at the time. We say ‘the warning signs were there’ when in reality those same ‘warning signs’ are present all the time without incident. We construct a story that couldn’t have been written before the fact.

This is why war stories in software development are simultaneously compelling and useless. The grizzled veteran who regales you with tales of production disasters is imparting wisdom that feels profound but often amounts to ‘this specific thing went wrong in this specific way in this specific system at this specific time’. And the specifics are rarely defined. The lesson learnt is over-fitted to a single data point.

Emergence and Irreducibility

Complex adaptive systems exhibit emergence—behaviour that arises from the interaction of components but cannot be predicted by analysing those components in isolation (cf. Buckminster Fuller’s Synergetics). Your microservices architecture might work perfectly in testing, under load simulation, and even in production for months. Then one day, a particular sequence of requests, combined with a specific distribution of data across shards, triggers a cascade failure that brings down the entire system.

You can’t ‘learn’ to prevent emergent failures because you can’t predict them. They arise from the system’s complexity itself. Adding more tests, more monitoring, more safeguards—these changes don’t eliminate emergence, they just add new components to the complex system, creating new possibilities for emergent behaviour.

The Adaptation Trap

Here’s the final twist: complex adaptive systems adapt. When you implement a lesson learnt, you’re not just fixing a problem—you’re changing the system. And when the system changes, the behaviours that emerge from it change too.

Add comprehensive monitoring after an outage? Now developers start relying on monitoring as a crutch, writing less defensive code because they know they’ll be alerted to issues. Implement mandatory code review after a bad deployment? Now developers become complacent, assuming that anything that passed review must be safe. The system adapts around your interventions, often in ways that undermine their original purpose.

This isn’t a failure of implementation—it’s a fundamental characteristic of complex adaptive systems. They don’t have stable equilibrium points. Every intervention shifts the system to a new state with its own unique vulnerabilities.

So What Do We Do?

If we can’t learn from our mistakes in any straightforward way, what’s the alternative? Are we doomed to repeat the same failures for ever?

Not quite. The solution is to stop pretending we can extract universal lessons from specific failures and instead focus on building systems that are resilient to the inevitable surprises we can’t predict.

This means designing for graceful degradation rather than preventing all failures. It means building systems that can absorb shocks and recover quickly rather than systems that need to be perfect. It means accepting that production is fundamentally different from any testing environment and that the only way to understand system behaviour is to observe it in production with real users and real data.
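To make that concrete, here is a minimal sketch in Python of a read path that degrades rather than fails: serve a fresh answer when the downstream service responds, a stale one when it doesn’t, and a generic default when nothing is cached. The service and all the names here are hypothetical, invented for illustration.

```python
import logging

logger = logging.getLogger("recommendations")

# A stale-but-usable cache of the last good result per user.
_last_good: dict[str, list[str]] = {}

POPULAR_ITEMS = ["item-1", "item-2", "item-3"]  # safe default for everyone


def fetch_recommendations(user_id: str) -> list[str]:
    """Stand-in for a call to a flaky downstream service."""
    raise TimeoutError("recommendation service unavailable")


def recommendations_with_fallback(user_id: str) -> list[str]:
    """Degrade gracefully: fresh result, else stale cache, else a default."""
    try:
        result = fetch_recommendations(user_id)
        _last_good[user_id] = result  # remember the last good answer
        return result
    except Exception:
        logger.warning("recommendation service down; degrading for %s", user_id)
        # Stale data beats no data; a generic page beats an error page.
        return _last_good.get(user_id, POPULAR_ITEMS)
```

The point isn’t this particular fallback chain; it’s that the failure mode is designed in rather than assumed away.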

It also means being humble. Every post-mortem that ends with ‘we’ve identified the root cause and implemented fixes to prevent this from happening again’ is cosplaying certainty in a domain defined by uncertainty. A more honest conclusion might be: ‘This is what we think happened, given our limited ability to understand complex systems. We’re making some changes that might help, but we acknowledge that we’re also potentially introducing new failure modes we haven’t imagined yet.’

The Productivity of Failure

None of this means that failures are useless. Incidents do provide value—they reveal the system’s boundaries, expose hidden assumptions, and force us to confront our mental models. But the value isn’t in extracting a tidy lesson that we can apply next time. The value is in the ongoing process of engaging with complexity, building intuition through repeated exposure, and developing a mindset that expects surprise rather than seeking certainty.

The developer who has been through multiple production incidents isn’t valuable because they’ve learnt ‘lessons’ they can enumerate. They’re valuable because they’ve internalised a posture of humility, an expectation that systems will fail in ways they didn’t anticipate, and a comfort with operating in conditions of uncertainty.

That’s not the same as learning from mistakes. It’s something both more modest and more useful: developing wisdom about the limits of what we can learn.


The next time you hear someone confidently declare that they’ve learnt from a mistake, especially in a complex domain like software development, be sceptical. Not because they’re lying or incompetent, but because they’re human—and we all want to believe that our suffering has purchased something more substantial than just the experience of suffering. The truth is messier and less satisfying: in complex adaptive systems, the best we can hope for is not wisdom, but the wisdom to know how little wisdom we can extract from any single experience.


Further Reading

Allspaw, J. (2012). Fault injection in production: Making the case for resilience testing. Queue, 10(8), 30-35. https://doi.org/10.1145/2346916.2353017

Dekker, S. (2011). Drift into failure: From hunting broken components to understanding complex systems. Ashgate Publishing.

Dekker, S., & Pruchnicki, S. (2014). Drifting into failure: Theorising the dynamics of disaster incubation. Theoretical Issues in Ergonomics Science, 15(6), 534-544. https://doi.org/10.1080/1463922X.2013.856495

Fischhoff, B. (1975). Hindsight ≠ foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1(3), 288-299. https://doi.org/10.1037/0096-1523.1.3.288

Hollnagel, E., Woods, D. D., & Leveson, N. (Eds.). (2006). Resilience engineering: Concepts and precepts. Ashgate Publishing.

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. Cambridge University Press.

Leveson, N. G. (2012). Engineering a safer world: Systems thinking applied to safety. MIT Press.

Perrow, C. (1999). Normal accidents: Living with high-risk technologies (Updated ed.). Princeton University Press. (Original work published 1984)

Roese, N. J., & Vohs, K. D. (2012). Hindsight bias. Perspectives on Psychological Science, 7(5), 411-426. https://doi.org/10.1177/1745691612454303

Woods, D. D., & Allspaw, J. (2020). Revealing the critical role of human performance in software. Queue, 18(2), 48-71. https://doi.org/10.1145/3406065.3394867

The Agile Manifesto: Rearranging Deck Chairs While Five Dragons Burn Everything Down

Why the ‘Sound’ Principles Miss the Dragons That Actually Kill Software Projects


The Agile Manifesto isn’t wrong, per se—it’s addressing the wrong problems entirely. And that makes it tragically inadequate.

For over two decades, ‘progressive’ software teams have been meticulously implementing sprints, standups, and retrospectives whilst the real dragons have been systematically destroying their organisations from within. The manifesto’s principles aren’t incorrect; they’re just rearranging deck chairs on the Titanic whilst it sinks around them.

The four values and twelve principles address surface symptoms of dysfunction whilst completely ignoring the deep systemic diseases that kill software projects. It’s treating a patient’s cough whilst missing the lung cancer—technically sound advice that’s spectacularly missing the point.

The Real Dragons: What Actually Destroys Software Teams

Whilst we’ve been optimising sprint ceremonies and customer feedback loops, five ancient dragons have been spectacularly burning down software development and tech business effectiveness:

Dragon 1: Human Motivation Death Spiral
Dragon 2: Dysfunctional Relationships That Poison Everything
Dragon 3: Shared Delusions and Toxic Assumptions
Dragon 4: The Management Conundrum—Questioning the Entire Edifice
Dragon 5: Opinioneering—The Ethics of Belief Violated

These aren’t process problems or communication hiccups. They’re existential threats that turn the most well-intentioned agile practices into elaborate theatre whilst real work grinds to a halt. And the manifesto? It tiptoes around these dragons like they don’t exist.

Dragon 1: The Motivation Apocalypse

‘Individuals and interactions over processes and tools’ sounds inspiring until you realise that your individuals are fundamentally unmotivated to do good work. The manifesto assumes that people care—but what happens when they don’t?

The real productivity killer isn’t bad processes; it’s developers who have mentally checked out because:

  • They’re working on problems they find meaningless
  • Their contributions are invisible or undervalued
  • They have no autonomy over how they solve problems
  • The work provides no sense of mastery or purpose
  • They’re trapped in roles that don’t match their strengths

You can have the most collaborative, customer-focused, change-responsive team in the world, but if your developers are quietly doing the minimum to avoid getting fired, your velocity will crater regardless of your methodology.

The manifesto talks about valuing individuals but offers zero framework for understanding what actually motivates people to do their best work. It’s like having a sports philosophy that emphasises teamwork whilst ignoring whether the players actually want to win the game. How do you optimise ‘individuals and interactions’ when your people have checked out?

Dragon 2: Relationship Toxicity That Spreads Like Cancer

‘Customer collaboration over contract negotiation’ assumes that collaboration is even possible—but what happens when your team relationships are fundamentally dysfunctional?

The real collaboration killers that the manifesto ignores entirely:

  • Trust deficits: When team members assume bad faith in every interaction
  • Ego warfare: When technical discussions become personal attacks on competence
  • Passive aggression: When surface civility masks deep resentment and sabotage
  • Fear: When people are afraid to admit mistakes or ask questions
  • Status games: When helping others succeed feels like personal failure

You can hold all the retrospectives you want, but if your team dynamics are toxic, every agile practice becomes a new battlefield. Sprint planning turns into blame assignment. Code reviews become character assassination. Customer feedback becomes ammunition for internal warfare.

The manifesto’s collaboration principles are useless when the fundamental relationships are broken. It’s like offering marriage counselling techniques to couples who actively hate each other—technically correct advice that misses the deeper poison. How do you collaborate when trust has been destroyed? What good are retrospectives when people are actively sabotaging each other?

Dragon 3: Shared Delusions That Doom Everything

‘Working software over comprehensive documentation’ sounds pragmatic until you realise your team is operating under completely different assumptions about what ‘working’ means, what the software does, and how success is measured. But what happens when your team shares fundamental delusions about reality?

The productivity apocalypse happens when teams share fundamental delusions:

  • Reality distortion: Believing their product is simpler/better/faster than it actually is
  • Capability myths: Assuming they can deliver impossible timelines with current resources
  • Quality blindness: Thinking ‘works on my machine’ equals production-ready
  • User fiction: Building for imaginary users with imaginary needs
  • Technical debt denial: Pretending that cutting corners won’t compound into disaster

These aren’t communication problems that better customer collaboration can solve—they’re shared cognitive failures that make all collaboration worse. When your entire team believes something that’s factually wrong, more interaction just spreads the delusion faster.

The manifesto assumes that teams accurately assess their situation and respond appropriately. But when their shared mental models are fundamentally broken? All the adaptive planning in the world won’t help if you’re adapting based on fiction.

Dragon 4: The Management Conundrum—Why the Entire Edifice Is Suspect

‘Responding to change over following a plan’ sounds flexible, but let’s ask the deeper question: Why do we have management at all?

The manifesto takes management as a given and tries to optimise around it. But what if the entire concept of management—people whose job is to direct other people’s work without doing the work themselves—is a fundamental problem?

Consider what management actually does in most software organisations:

  • Creates artificial hierarchies that slow down decision-making
  • Adds communication layers that distort information as it flows up and down
  • Optimises for command and control rather than effectiveness
  • Makes decisions based on PowerPoint and opinion rather than evidence
  • Treats humans like interchangeable resources to be allocated and reallocated

The devastating realisation is that management in software development is pure overhead that actively impedes the work. Consider managers who:

  • Haven’t written code in years (or ever), yet make technical decisions
  • Set timelines based on business commitments rather than reality
  • Reorganise teams mid-project because a consultant recommended ‘matrix management’ or some such
  • Measure productivity by story points rather than needs attended to (or met)
  • Translate clear customer needs into incomprehensible requirements documents

What value does this actually add? Why do we have people who don’t understand the work making decisions about the work? What if every management layer is just expensive interference?

The right number of managers for software teams is zero. The entire edifice of management—the org charts, the performance reviews, the resource allocation meetings—is elaborate theatre that gets in the way of people solving problems.

Productive software teams operate more like research labs or craftsman guilds: self-organising groups of experts who coordinate directly with each other and with the people who use their work. No sprint masters, no product owners, no engineering managers—just competent people working together to solve problems.

The manifesto’s principles assume management exists and try to make it less harmful. But they never question whether it has any value at all.

Dragon 5: Opinioneering—The Ethics of Belief Violated

Here’s the dragon that the manifesto not only ignores but actually enables: the epidemic of strong opinions held without sufficient evidence.

William Kingdon Clifford wrote in 1877 that

‘it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence’
(Clifford, 1877).

In software development, we’ve created an entire culture that violates this ethical principle daily through systematic opinioneering:

Technical Opinioneering: Teams adopting microservices because they’re trendy, not because they solve actual problems. Choosing React over Vue because it ‘feels’ better. Implementing event sourcing because it sounds sophisticated. Strong architectural opinions based on blog posts rather than deep experience with the trade-offs.

Process Opinioneering: Cargo cult agile practices copied from other companies without understanding why they worked there. Daily standups that serve no purpose except ‘that’s what agile teams do.’ Retrospectives that generate the same insights every sprint because the team has strong opinions about process improvement but no evidence about what actually works.

Business Opinioneering: Product decisions based on what the CEO likes rather than what users require. Feature priorities set by whoever argues most passionately rather than data about user behaviour. Strategic technology choices based on industry buzz rather than careful analysis of alternatives.

Cultural Opinioneering: Beliefs about remote work, hiring practices, team structure, and development methodologies based on what sounds right rather than careful observation of results.

The manifesto makes this worse by promoting ‘individuals and interactions over processes and tools’ without any framework for distinguishing between evidence-based insights and opinion-based groupthink. It encourages teams to trust their collective judgement without asking whether that judgement is grounded in sufficient evidence. But what happens when the collective judgement is confidently wrong? How do you distinguish expertise from persuasive ignorance?

When opinioneering dominates, you get teams that are very confident about practices that don’t work, technologies that aren’t suitable, and processes that waste enormous amounts of time. Everyone feels like they’re making thoughtful decisions, but they’re sharing unfounded beliefs dressed up as expertise.

The Deeper Problem: Dysfunctional Shared Assumptions and Beliefs

The five dragons aren’t just symptoms—they’re manifestations of something deeper. Software development organisations operate under shared assumptions and beliefs that make effectiveness impossible, and the Agile Manifesto doesn’t even acknowledge this fundamental layer exists.

My work in Quintessence provides the missing framework for understanding why agile practices fail so consistently. The core insight is that organisational effectiveness is fundamentally a function of collective mindset:

Organisational effectiveness = f(collective mindset)

I demonstrate that every organisation operates within a “memeplex”—a set of interlocking assumptions and beliefs about work, people, and how organisations function. These beliefs reinforce each other so strongly that changing one belief causes the others to tighten their grip to preserve the whole memeplex.

This explains why agile transformations consistently fail. Teams implement new ceremonies whilst maintaining the underlying assumptions that created their problems in the first place. They adopt standups and retrospectives whilst still believing people are motivated, relationships are authentic, management adds value, and software is always the solution.

Consider the dysfunctional assumptions that pervade conventional software development:

About People: Most organisations and their management operate under “Theory X” assumptions—people are naturally lazy, require external motivation, need oversight to be productive, and will shirk responsibility without means to enforce accountability. These beliefs create the very motivation problems they claim to address.

About Relationships: Conventional thinking treats relationships as transactional. Competition drives performance. Hierarchy creates order. Control prevents chaos. Personal connections are “unprofessional.” These assumptions poison the collaboration that agile practices supposedly enable.

About Work: Software is the solution to every problem. Activity indicates value. Utilisation (e.g. of workers) drives productivity. Efficiency trumps effectiveness. Busyness proves contribution. These beliefs create the delusions that make teams confidently ineffective.

About Management: Complex work requires coordination. Coordination requires hierarchy. Hierarchy requires managers. Managers add value through oversight and direction. These assumptions create the parasitic layers that impede the very work they claim to optimise.

About Knowledge: Strong opinions indicate expertise. Confidence signals competence. Popular practices are best practices. Best practices are desirable. Industry trends predict future success. These beliefs create the opinioneering that replaces evidence with folklore.

Quintessence (Marshall, 2021) shows how “quintessential organisations” operate under completely different assumptions:

  • People find joy in meaningful work and naturally collaborate when conditions support it
  • Relationships based on mutual care and shared purpose are the foundation of effectiveness
  • Work is play when aligned with purpose and human flourishing
  • Management is unnecessary parasitism—people doing the work make the decisions about the work
  • Beliefs must be proportioned to evidence and grounded in serving real human needs

The Agile Manifesto can’t solve problems created by fundamental belief systems because it doesn’t even acknowledge these belief systems exist. It treats symptoms whilst leaving the disease untouched. Teams optimise ceremonies whilst operating under assumptions that guarantee continued dysfunction.

This is why the Quintessence approach differs so radically from ‘Agile’ approaches. Instead of implementing new practices, quintessential organisations examine their collective assumptions and beliefs. Instead of optimising processes, they transform their collective mindset. Instead of rearranging deck chairs, they address the fundamental reasons the ship is sinking.

The Manifesto’s Tragic Blindness

Here’s what makes the Agile Manifesto so inadequate: it assumes the Five Dragons don’t exist. It offers principles for teams that are motivated, functional, reality-based, self-managing, and evidence-driven—but most software teams are none of these things.

The manifesto treats symptoms whilst ignoring diseases:

  • It optimises collaboration without addressing what makes collaboration impossible
  • It values individuals without confronting what demotivates them
  • It promotes adaptation without recognising what prevents teams from seeing their shared assumptions and beliefs clearly
  • It assumes management adds value rather than questioning whether management has any value at all
  • It encourages collective decision-making without any framework for leveraging evidence-based beliefs

This isn’t a failure of execution—it’s a failure of diagnosis. The manifesto identified the wrong problems and thus prescribed the wrong solutions.

Tom Gilb’s Devastating Assessment: The Manifesto Is Fundamentally Fuzzy

Software engineering pioneer Tom Gilb delivers the most damning critique of the Agile Manifesto: its principles are

‘so fuzzy that I am sure no two people, and no two manifesto signers, understand any one of them identically’

(Gilb, 2005).

This fuzziness isn’t accidental—it’s structural. The manifesto was created by ‘far too many “coders at heart” who negotiated the Manifesto’ without

‘understanding of the notion of delivering measurable and useful stakeholder value’

(Gilb, 2005).

The result is a manifesto that sounds profound but provides no actionable guidance for success in product development.

Gilb’s critique exposes the manifesto’s fundamental flaw: it optimises for developer comfort rather than stakeholder value. The principles read like a programmer’s wish list—less documentation, more flexibility, fewer constraints—rather than a framework for delivering measurable results to people who actually need the software.

This explains why teams can religiously follow agile practices whilst consistently failing to deliver against folks’ needs. The manifesto’s principles are so vague that any team can claim to be following them whilst doing whatever they want. ‘Working software over comprehensive documentation’ means anything you want it to mean. ‘Responding to change over following a plan’ provides zero guidance on how to respond or what changes matter. (Cf. Quantification)

How do you measure success when the principles themselves are unmeasurable? What happens when everyone can be ‘agile’ whilst accomplishing nothing? How do you argue against a methodology that can’t be proven wrong?

The manifesto’s fuzziness enables the very dragons it claims to solve. Opinioneering thrives when principles are too vague to be proven wrong. Management parasitism flourishes when success metrics are unquantified. Shared delusions multiply when ‘working software’ has no operational definition.

Gilb’s assessment reveals why the manifesto has persisted despite its irrelevance: it’s comfortable nonsense that threatens no one and demands nothing specific. Teams can feel enlightened whilst accomplishing nothing meaningful for stakeholders.

Stakeholder Value vs. All the Needs of All the Folks That Matter™

Gilb’s critique centres on ‘delivering measurable and useful stakeholder value’—but this phrase itself illuminates a deeper problem with how we think about software development success. ‘Stakeholder value’ sounds corporate and abstract, like something you’d find in a business school textbook or an MBA course (MBA: maybe best avoided, per Mintzberg).

What we’re really talking about is simpler, less corporate and more human: serving all the needs of all the Folks That Matter™.

The Folks That Matter aren’t abstract ‘stakeholders’—they’re real people trying to get real things done:

  • The nurse trying to access patient records during a medical emergency
  • The small business owner trying to process payroll before Friday
  • The student trying to submit an assignment before the deadline
  • The elderly person trying to video call their grandchildren
  • The developer trying to understand why the build is broken again

When software fails these people, it doesn’t matter how perfectly agile your process was. When the nurse can’t access records, your retrospectives are irrelevant. When the payroll system crashes, your customer collaboration techniques are meaningless. When the build and smoke test takes 30+ minutes, your adaptive planning is useless.

The Agile Manifesto’s developer-centric worldview treats these people as distant abstractions—’users’ and ‘customers’ and ‘stakeholders.’ But they’re not abstractions. They’re the Folks That Matter™, and their needs are the only reason software development exists.

The manifesto’s principles consistently prioritise developer preferences over the requirements of the Folks That Matter™. ‘Working software over comprehensive documentation’ sounds reasonable until the Folks That Matter™ require understanding of how to use the software. ‘Individuals and interactions over processes and tools’ sounds collaborative until the Folks That Matter™ require consistent, reliable results from those interactions.

This isn’t about being anti-developer—it’s about recognising that serving the Folks That Matter™ is the entire point. The manifesto has it backwards: instead of asking ‘How do we make development more comfortable for developers?’ we might ask ‘How do we reliably serve all the requirements of all the Folks That Matter™?’ That question changes everything. It makes motivation obvious—you’re solving real problems for real people. It makes relationship health essential—toxic teams can’t serve others effectively. It makes reality contact mandatory—delusions about quality hurt real people. It makes evidence-based decisions critical—opinions don’t serve the Folks That Matter™; results do.

Most importantly, it makes management’s value proposition clear: Do you help us serve the Folks That Matter™ better, or do you get in the way? If the answer is ‘get in the way,’ then management becomes obviously a dysfunction.

What Actually Addresses the Dragons

If we want to improve software development effectiveness, we must address the real dragons:

Address Motivation: Create work that people actually care about. Give developers autonomy, mastery, and purpose. Match people to problems they find meaningful. Make contributions visible and valued.

Heal Toxic Relationships: Build psychological safety where people can be vulnerable about mistakes. Address ego and status games directly. Create systems where helping others succeed feels like personal victory.

Resolve Shared Delusions: Implement feedback loops that invite contact with reality. Measure what actually matters. Create cultures where surfacing uncomfortable truths is rewarded rather than punished.

Transform Management Entirely: Experiment with self-organising teams. Distribute decision-making authority to where expertise actually lives. Eliminate layers between problems and problem-solvers. Measure needs met, not management theatre.

Counter Evidence-Free Beliefs: Institute a culture where strong opinions require strong evidence. Enable and encourage teams to articulate the assumptions behind their practices. Reward changing your mind based on new data. Excise confident ignorance.

These aren’t process improvements or methodology tweaks—they’re organisational transformation efforts that require fundamentally different approaches than the manifesto suggests.

Beyond Agile: Addressing the Real Problems

The future of software development effectiveness isn’t in better sprint planning or more customer feedback. It’s in organisational structures that:

  • Align individual motivation with real needs
  • Create relationships based on trust
  • Enable contact with reality at every level
  • Eliminate management as dysfunctional
  • Ground all beliefs in sufficient evidence

These are the 10x improvements hiding in plain sight—not in our next retrospective, but in our next conversation about why people don’t care about their work. Not in our customer collaboration techniques, but in questioning whether we have managers at all. Not in our planning processes, but in demanding evidence for every strong opinion.

Conclusion: The Problems We Should Have Been Addressing All Along

The Agile Manifesto succeeded in solving the surface developer bugbears of 2001: heavyweight processes and excessive documentation. But it completely missed the deeper organisational and human issues that determine whether software development succeeds or fails.

The manifesto’s principles aren’t wrong—they’re just irrelevant to the real challenges. Whilst we’ve been perfecting our agile practices, the dragons of motivation, relationships, shared delusions, dysfunctional management, and opinioneering have been systematically destroying software development from within.

Is it time to stop optimising team ceremonies and start addressing the real problems? That would mean creating organisations where people are motivated to do great work, relationships enable rather than sabotage collaboration, shared assumptions are grounded in reality, traditional management no longer exists, and beliefs are proportioned to evidence.

But ask yourself: Does your organisation address any of these fundamental issues? Are you optimising ceremonies whilst your dragons run wild? What would happen if you stopped rearranging deck chairs and started questioning why people don’t care about their work?

Because no amount of process optimisation will save a team where people don’t care, can’t trust each other, believe comfortable lies, are managed by people who add negative value, and make decisions based on opinions rather than evidence.

The dragons are real, and they’re winning. Are we finally ready to address them?

Further Reading

Beck, K., Beedle, M., van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., … & Thomas, D. (2001). Manifesto for Agile Software Development. Retrieved from https://agilemanifesto.org/

Clifford, W. K. (1877). The ethics of belief. Contemporary Review, 29, 289-309.

Gilb, T. (2005). Competitive Engineering: A Handbook for Systems Engineering, Requirements Engineering, and Software Engineering Using Planguage. Butterworth-Heinemann.

Gilb, T. (2017). How well does the Agile Manifesto align with principles that lead to success in product development? Retrieved from https://www.gilb.com/blog/how-well-does-the-agile-manifesto-align-with-principles-that-lead-to-success-in-product-development

Marshall, R. W. (2021). Quintessence: An acme for software development organisations. Falling Blossoms (LeanPub). Retrieved from https://leanpub.com/quintessence/

Praising the CRC Card

For the developers who never got to hold one


If you started your career after 2010, you probably never encountered a CRC card. If you’re a seasoned developer who came up through Rails tutorials, React bootcamps, or cloud-native microservices, you likely went straight from user stories to code without stopping at index cards. This isn’t your fault. By the time you arrived, the industry had already moved on.

But something was lost in that transition, and it might be valuable for you to experience it.

What You Missed

A CRC card is exactly what it sounds like: a Class-Responsibility-Collaborator design written on a physical index card. One class per card. The class name at the top, its responsibilities listed on the left, and the other classes it works with noted on the right. Simple. Physical. Throwaway.

The technique was developed by Ward Cunningham and Kent Beck in the late 1980s, originally emerging from Cunningham’s work with HyperCard documentation systems. They introduced CRC cards as a teaching tool, but the approach was embraced by practitioners following ideas like Peter Coad’s object-oriented analysis, design, and programming (OOA/D/P) framework. Coad (with Ed Yourdon) wrote about a unified approach to building software that matched how humans naturally think about problems. CRC cards are a tool for translating business domain concepts directly into software design, without getting lost in technical abstractions.

The magic wasn’t in the format—it was in what the format forced you to do.

The Experience

Picture this: You and your teammates sitting around a conference table covered in index cards. Someone suggests a new class. They grab a blank card and write ‘ShoppingCart’ at the top. ‘What should it do?’ someone asks. ‘Add items, remove items, calculate totals, apply promotions,’ comes the reply. Those go in the responsibilities column. ‘What does it need to work with?’ Another pause. ‘It needs Product objects to know what’s being added, a Customer for personalised pricing, maybe a Promotion when discounts apply.’ Those become collaborators.
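To show how directly a card translates into code, here is that ShoppingCart card rendered as a Python skeleton. A sketch only: the names come from the card above, and the pricing details are invented for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class Product:        # collaborator: what's being added
    name: str
    price: float


@dataclass
class Customer:       # collaborator: personalised pricing (elided here)
    name: str


@dataclass
class Promotion:      # collaborator: discounts, when they apply
    discount: float   # e.g. 0.10 for 10% off

    def apply(self, subtotal: float) -> float:
        return subtotal * (1 - self.discount)


@dataclass
class ShoppingCart:
    """One card, one class: responsibilities become methods."""
    customer: Customer
    items: list[Product] = field(default_factory=list)

    def add(self, product: Product) -> None:      # responsibility: add items
        self.items.append(product)

    def remove(self, product: Product) -> None:   # responsibility: remove items
        self.items.remove(product)

    def total(self, promotion: Promotion | None = None) -> float:
        # responsibilities: calculate totals, apply promotions
        subtotal = sum(p.price for p in self.items)
        return promotion.apply(subtotal) if promotion else subtotal
```

Everything on the card fits on a single screen. When the skeleton starts sprawling, the card has already warned you.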

But here’s where it gets interesting. The card is small. Really small. If you’re writing tiny text to fit more responsibilities, someone notices. If you have fifteen collaborators, the card looks messy. The physical constraint was a design constraint. It whispered: ‘Keep it simple.’

Aside: In Javelin, we also advise keeping all methods to no more than “Five Lines of Code”. And Statements of Purpose to 25 words or less.
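A quick sketch of the rule’s spirit (the example itself is illustrative, not canonical): when a method wants a sixth line, extract a named helper instead.

```python
def total_price(items: list[dict]) -> float:
    """Five lines: the detail lives in a named helper, not inline."""
    subtotal = sum(item["price"] * item["qty"] for item in items)
    discount = bulk_discount(subtotal)
    return subtotal - discount


def bulk_discount(subtotal: float) -> float:
    """Extracted so the caller stays readable: 10% off orders over 100."""
    return subtotal * 0.10 if subtotal > 100 else 0.0
```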

The Seduction

Somewhere in the 2000s, we got seduced. UML tools (yuck) promised prettier diagrams. Digital whiteboards offered infinite canvas space. Collaborative software let us design asynchronously across time zones. We could version control our designs! Track changes! Generate code from diagrams!

We told ourselves this was progress. We retrofitted justifications: ‘Modern systems are too complex for index cards.’ ‘Remote teams need digital tools.’ ‘Physical methods don’t scale.’

But these were lame excuses, not good reasons.

The truth is simpler and more embarrassing: we abandoned CRC cards because they felt primitive. Index cards seemed amateur next to sophisticated UML tools and enterprise architecture platforms. We confused the sophistication of our tools with the sophistication of our thinking.

What We Actually Lost

The constraint was the feature. An index card can’t hold a God class. It can’t accommodate a class with dozens of responsibilities or collaborators. But more importantly, it forced you to think in domain terms, not implementation terms. When you’re limited to an index card, you can’t hide behind technical abstractions like ‘DataProcessor’ or ‘ValidationManager.’ You have to name things that represent actual concepts in the problem domain—things a business person would recognise. The physical limitation forced good design decisions and domain-focused thinking before you had time to rationalise technical complexity.

Throwaway thinking was powerful. When your design lived on index cards, you could literally throw it away and start over. No one was attached to the beautiful diagram they’d spent hours or days perfecting. The design was disposable, which made experimentation safe.

Tactile collaboration was different. There’s something unique about physically moving cards around a table, stacking them, pointing at them, sliding one toward a teammate. Digital tools simulate this poorly. Clicking and dragging isn’t the same as picking up a card and handing it to someone.

Forced focus was valuable. You couldn’t switch to Slack during a CRC card session. You couldn’t zoom in on implementation details. The cards kept you at the right level of abstraction—not so high that you were hand-waving, not so low that you were bikeshedding variable names.

The Ratchet Effect

Here’s what makes this particularly tragic: once the industry moved to digital tools, it became genuinely harder to go back. Try suggesting index cards in a design meeting today. You’ll get polite smiles and concerned looks. Not because the method doesn’t work, but because the ecosystem has moved backwards. The new developers have never seen it done. The tooling assumes digital. The ‘best practices’ articles all recommend software solutions.

We created a ratchet effect where good practices became impossible to maintain not because they were inadequate, but because they felt outdated.

For Those Who Never Got the Chance

If you’re reading this as a developer who never used CRC cards, I want you to know: you were cheated, but not by your own choices. You came into an industry that had already forgotten one of its own most useful practices. You learned the tools that were available when you arrived.

But you also inherited the complexity that came from abandoning constraints. You’ve probably spent hours in architecture meetings where the design sprawled across infinite digital canvases, where classes accumulated responsibilities because the tools could accommodate any amount of complexity, where the ease of adding ‘just one more connection’ led to systems that no one fully understood.

You’ve felt the pain of what we lost when we abandoned the constraint.

A Small Experiment

Next time you’re designing something new, try this: grab some actual index cards. Write one class per card. See how it feels when the physical constraint pushes back against your design. Notice what happens when throwing away a card costs nothing but keeping a complex design visible costs table space.

You might discover something we lost when we got sophisticated.

Do it because CRC cards were actually superior to modern digital tools for early design thinking. We didn’t outgrow them – we abandoned something better for something shinier.

Sometimes the simpler tool was better precisely because it was simpler.

The industry moves fast, and not everything we leave behind should have been abandoned. Some tools die not because they’re inadequate, but because they’re unfashionable. The CRC card was a casualty of progress that wasn’t progressive.

Further Reading

Beck, K., & Cunningham, W. (1989). A laboratory for teaching object-oriented thinking. SIGPLAN Notices, 24(10), 1-6.

Coad, P., & Yourdon, E. (1990). Object-oriented analysis. Yourdon Press.

Coad, P., & Yourdon, E. (1991). Object-oriented design. Yourdon Press.

Coad, P., North, D., & Mayfield, M. (1995). Object-oriented programming. Prentice Hall.

Coad, P., North, D., & Mayfield, M. (1996). Object models: Strategies, patterns, and applications (2nd ed.). Prentice Hall.

Wirfs-Brock, R., & McKean, A. (2003). Object design: Roles, responsibilities, and collaborations. Addison-Wesley.

Secrets of Techhood

A collection of hard-won wisdom from the trenches of technology work

After decades building software, leading teams, and watching organisations succeed and fail, certain patterns emerge. The same mistakes get repeated. The same insights get rediscovered. The same hard-learned lessons get forgotten and relearnt by the next generation.

This collection captures those recurring truths—the kind of wisdom that comes from doing the work, making the mistakes, and living with the consequences. These aren’t theoretical principles from academic papers or management books. They’re the practical insights that emerge when theory meets reality, when teams face real deadlines, and when software encounters actual users.

The insights come from diverse sources: legendary systems thinkers like W.E. Deming and Russell Ackoff, software pioneers, quality experts, organisational psychologists, and practising technologists who’ve shared their hard-earned wisdom. What unites them is practical relevance—each aphorism addresses real challenges that technology professionals face daily.

Use this collection as a reference, not a rulebook. Read through it occasionally. Return to specific aphorisms when facing related challenges. Share relevant insights with colleagues wrestling with similar problems. Most importantly, remember that wisdom without application is just interesting trivia.

The technology changes constantly, but the fundamental challenges of building systems, working with people, and delivering value remain remarkably consistent. These truths transcend programming languages, frameworks, and methodologies. They’re about the deeper patterns of how good technology work gets done.

Invitation: I’d love for readers to suggest their own aphorisms for inclusion in this collection. Please use the comments below.

The Aphorisms

It’s called software for a reason.

The ‘soft’ in software reflects its fundamental nature as something malleable, changeable, and adaptive. Unlike hardware, which is fixed once manufactured, software exists to be modified, updated, and evolved. This flexibility is both its greatest strength and its greatest challenge. The ability to change software easily leads to constant tweaking, feature creep, and the temptation to fix everything immediately. Yet this same flexibility allows software to grow with changing needs, adapt to new requirements, and evolve beyond its original purpose.

Learning hasn’t happened until behaviour has changed.

Consuming tutorials, reading documentation, and attending conferences is information absorption. True learning in tech occurs when concepts become internalised so deeply that they alter how problems are approached. Data analysis learning is complete when questioning data quality and looking for outliers becomes instinctive. Project management mastery emerges when breaking large problems into smaller, manageable pieces happens automatically.

Change hasn’t happened unless we feel uncomfortable.

Real change, whether learning a new technology, adopting different processes, or transforming how teams work, requires stepping outside comfort zones. If a supposed change feels easy and natural, you’re just doing familiar things with new labels. Genuine transformation creates tension between old habits and new ways of working.

The work you create today is a letter to your future self—create with compassion.

Six months later, returning to a project with fresh eyes and foggy memory is jarring. The folder structure that seems obvious today becomes a confusing maze tomorrow. The clever workflow that feels brilliant now frustrates that future self. Creating work as if explaining thought processes to a colleague makes sense—because that’s what’s happening across time.

Documentation is love made visible.

Good documentation serves as an act of kindness towards everyone who will interact with the work, including one’s future self. It bridges current understanding and future confusion. When processes are documented, decisions explained, or clear instructions written, there’s an implicit message: ‘I care about your experience with this work.’ Documentation transforms personal knowledge into shared resources.

Perfect is the enemy of shipped, and also the enemy of good enough.

The pursuit of perfection creates endless cycles of refinement that prevent delivery of value. Hours spent polishing presentations that already communicate effectively could address new problems or serve unmet needs. Yet shipping imperfection carries risks too—reputation damage, user frustration, or technical debt. Sometimes ‘done’ creates more value than ‘perfect’, especially when perfect never arrives.

Every problem is a feature request from reality.

Issues reveal themselves as more than annoying interruptions—they’re signals about unconsidered edge cases, incorrect assumptions, or untested scenarios. Each problem illuminates gaps between mental models of how things work and how they actually work in practice. When users struggle with an interface, they’ve submitted an unspoken feature request for better design.

The best problem-solving tool is a good night’s sleep.

The brain processes and consolidates information during sleep, revealing solutions that remained hidden during conscious effort. Challenges that consume hours of focused attention resolve themselves in minutes after proper rest. Sleep deprivation clouds judgement, reduces pattern recognition, and obscures obvious solutions.

Premature optimisation is the root of all evil, but so is premature pessimisation.

Whilst rushing to optimise before understanding the real bottlenecks is wasteful, it’s equally dangerous to create obviously inefficient processes under the banner of ‘we’ll fix it later.’ Don’t spend days perfecting workflows that run once, but also don’t use manual processes when simple automation would work just as well.

Your first solution is rarely your best solution, but it’s always better than no solution.

The pressure to find the perfect approach immediately creates analysis paralysis. First attempts prove naïve, inefficient, or overly complex, yet they provide crucial starting points for understanding problem spaces. Working solutions enable iteration, refinement, and improvement.

A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work.

John Gall’s Law captures a fundamental truth about how robust systems come into being. They aren’t architected in their final form—they grow organically from working foundations. The most successful large systems started as simple, functional prototypes that were gradually extended.

The hardest parts of tech work are naming things, managing dependencies, and timing coordination.

These three fundamental challenges plague every technology professional daily. Naming things well requires understanding not just what something does, but how it fits into the larger system and how others will think about it. Managing dependencies is difficult because it requires reasoning about relationships, priorities, and changes across multiple systems or teams. And timing coordination is hard because work rarely completes in the order planned: releases, migrations, and handovers must all be sequenced against moving targets.

Feedback is not personal criticism—it’s collaborative improvement.

When colleagues suggest changes to work, they’re investing their time and attention in making the outcome better. They’re sharing their knowledge, preventing future issues, and helping with professional growth. Good feedback is an act of collaboration, not criticism.

People will forgive not meeting their needs immediately, but not ignoring them.

Users, stakeholders, and colleagues understand that resources are limited and solutions take time. They accept that their need might not be the highest priority or that the perfect solution requires careful consideration. What damages relationships is complete neglect—not making any effort, not showing any care, not demonstrating that their concern matters. People can wait for solutions when they see genuine attention being paid to their situation. The difference between delayed action and wilful neglect determines whether trust grows or erodes. Attending to needs doesn’t require immediate solutions, but it does require genuine care and effort.

How you pay attention matters more than what you pay attention to.

The quality of attention transforms both the observer and the observed. Distracted attention whilst multitasking sends a clear message about priorities and respect. Focused, present attention—even for brief moments—creates connection and understanding. When reviewing code, listening with genuine curiosity rather than hunting for faults leads to better discussions and learning. When meeting with stakeholders, being fully present rather than mentally composing responses changes the entire dynamic. The manner of attention—rushed or patient, judgmental or curious, distracted or focused—shapes outcomes more than the subject receiving that attention.

Caring attention helps things grow.

Systems, teams, and individuals flourish under thoughtful observation and nurturing focus. When attention comes with genuine care—wanting to understand, support, and improve rather than judge or control—it creates conditions for development. Code improves faster when reviewed with constructive intent rather than fault-finding. Team members develop more rapidly when mistakes are examined with curiosity rather than blame. Projects evolve more successfully when monitored with supportive interest rather than suspicious oversight. The difference between surveillance and stewardship lies in the intent behind the attention.

The best work is work you don’t have to do.

Every process created needs to be maintained, updated, and explained. Before building something from scratch, considering whether an existing tool, service, or approach already solves the problem pays off. The work not done can’t break, doesn’t need updates, and never becomes technical debt.

Every expert was once a beginner who refused to give up.

Experience and expertise aren’t innate talents—they’re the result of persistence through challenges, failures, and frustrations. The senior professionals admired today weren’t born knowing best practices or troubleshooting techniques. They got there by continuing to learn, experiment, and problem-solve even when things felt impossibly difficult.

Your ego is not your work.

When others critique work, they engage with output rather than character. Suggestions for improvement, identified issues, or questioned decisions focus on the work itself, not personal worth. Work can be improved, revised, or completely replaced without diminishing professional value.

Testing is not about proving a solution works—it’s about showing where the work is at.

Good testing reveals current status rather than validating perfection. Tests illuminate what’s functioning, what’s broken, what’s missing, and what’s uncertain. Rather than serving as a stamp of approval, testing provides visibility into the actual state of systems, processes, or solutions.
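
A tiny sketch of this stance, using Node’s built-in assert module (divide is a hypothetical piece of work under test). The assertions report where the work is at rather than certifying it finished:

const assert = require('node:assert');

// Hypothetical work under test.
function divide(a, b) { return a / b; }

// Status report, not a stamp of approval:
assert.strictEqual(divide(10, 2), 5);        // functioning
assert.strictEqual(divide(1, 0), Infinity);  // current behaviour: surprising, but now visible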

The most expensive work to maintain is work that almost functions.

Work that fails obviously and consistently is easy to diagnose and fix. Work that functions most of the time but fails unpredictably is a maintenance nightmare. These intermittent issues are hard to reproduce, difficult to diagnose, and mask deeper systematic problems.

Changing things without understanding them is just rearranging the furniture.

When modifying systems, processes, or designs without adequate understanding of how they currently work, there’s no way to verify that essential functionality has been preserved. Understanding serves as a foundation for meaningful change, giving confidence that modifications improve things rather than just moving problems around.

Version control is time travel for the cautious.

Document management systems and change tracking tools let experimentation happen boldly because previous states can always be restored if things go wrong. They remove the fear of making changes because nothing is ever truly lost. Radical reorganisations, experimental approaches, or risky optimisations become possible knowing that reversion to the last known good state remains an option.

Any organisation that designs a system will produce a design whose structure is a copy of the organisation’s communication structure.

Conway’s Law reveals why so many software architectures mirror the org charts of the companies that built them. If you have separate teams for frontend, backend, and database work, you’ll end up with a system that reflects those boundaries—even when a different architecture would serve users better.

Question your assumptions before you question your code.

Most problems stem not from implementation errors but from incorrect assumptions about how systems work, what users will do, or how data will behave. Assumptions that the network is reliable, that users will provide valid input, that third-party services will always respond, or that files will exist where expected become embedded in work as implicit requirements that are neither tested nor documented.
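
A minimal Node.js sketch of surfacing one such assumption (the config path is purely illustrative):

const fs = require('fs');

// Implicit assumption made explicit: 'the config file exists where expected'.
const path = './config.json';
if (!fs.existsSync(path)) {
  throw new Error(`Assumed ${path} exists, but it does not`);
}
const config = JSON.parse(fs.readFileSync(path, 'utf8'));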

The problem is always in the last place you look because you stop looking after you find it.

This humorous observation about troubleshooting reflects a deeper truth about problem-solving methodology. Issues are searched for in order of assumptions about likelihood, starting with the most obvious causes. When problems are found, searching naturally stops, making it definitionally the ‘last’ place looked.

Your production environment is not your testing environment, no matter how much you pretend it is.

Despite best intentions, many teams end up using live systems as their primary testing ground through ‘quick updates,’ ‘minor changes,’ and ‘simple fixes.’ Production environments have different data, different usage patterns, different dependencies, and different failure modes than development or testing environments.

Every ‘temporary solution’ becomes a permanent fixture.

What starts as a quick workaround becomes enshrined as permanent process. The ‘temporary fix’ implemented under deadline pressure becomes the foundation that other work builds upon. Before long, quick hacks become load-bearing infrastructure that’s too risky to change.

The work that breaks at the worst moment is always the work you trusted most.

Murphy’s Law applies strongly to technology work. The elegant, well-tested system that generates pride will find a way to fail spectacularly at the worst possible moment. Meanwhile, the hacky workaround that needed fixing will run flawlessly for years. Confidence leads to complacency, which creates blind spots where unexpected failures hide.

Always double-check the obvious.

Paranoia is a virtue in technology work. Even when certain about how a system works, validating assumptions, checking inputs, and considering edge cases remains worthwhile. Systems change, dependencies update, and assumptions that were true yesterday may not be true today.

Notes are not apologies for messy work—they’re explanations for necessary complexity.

Good documentation doesn’t explain what the work does but why it does it. It explains business logic, documents assumptions, clarifies non-obvious decisions, and provides context that can’t be expressed in the work itself. Notes that say ‘process these files’ are useless, but notes that say ‘Account for timezone differences in date processing’ add valuable context.
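
For instance, a small JavaScript sketch of the difference (the dates and timezone are illustrative):

const utcDate = new Date('2025-01-31T23:30:00Z');

// Useless: restates what the code does.
// Convert the date to a local string.
//
// Useful: explains why the conversion exists at all.
// Upstream exports timestamps in UTC, but billing cut-offs are defined
// in the customer's local zone, so convert before comparing dates.
const localised = utcDate.toLocaleString('en-GB', { timeZone: 'Europe/London' });
console.log(localised);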

The fastest process is the process that never runs.

Performance optimisation focuses on making existing processes run faster, but the biggest efficiency gains come from avoiding work entirely. Can expensive calculations be cached? Can results be precomputed? Can unnecessary steps be eliminated? The most elegant solution is recognising that certain processes don’t need to execute at all under common conditions.
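
A hedged sketch of that idea in JavaScript: a generic memoiser, so an expensive computation (slowSquare stands in for real work) runs at most once per input.

// Cache results so repeated calls never re-run the underlying work.
function memoise(fn) {
  const cache = new Map();
  return (key) => {
    if (!cache.has(key)) cache.set(key, fn(key)); // compute once
    return cache.get(key);                        // thereafter, the process never runs
  };
}

const slowSquare = (n) => n * n; // imagine something far more expensive
const fastSquare = memoise(slowSquare);
fastSquare(12); // computed
fastSquare(12); // served from cache; no work done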

The systems that people work in account for 95 per cent of performance.

W.E. Deming’s insight: Most of what we attribute to individual talent or effort is determined by the environment, processes, and systems within which people operate. If the vast majority of performance comes from the system, then improving the system yields far greater returns than trying to improve individuals within a flawed system.

Individual talent is the 5 per cent that operates within the 95 per cent that is system.

Deming’s ratio explains why hiring ‘rock stars’ to fix broken systems fails, whilst putting competent people in well-designed systems consistently produces exceptional results. A brilliant programmer in a dysfunctional organisation will struggle, whilst an average programmer in a good system can accomplish remarkable things. The 5% individual contribution becomes meaningful only when the 95% system component enables and amplifies it.

Unless you change the way you think, your system will not change and therefore, its performance won’t change either.

John Seddon’s insight cuts to the heart of why so many improvement initiatives fail. Teams implement new processes, adopt new tools, or reorganise structures whilst maintaining the same underlying assumptions and beliefs that created the original problems. Real change requires examining and challenging the mental models, assumptions, and beliefs that shape how work gets designed and executed.

People are not our greatest asset—it’s the relationships between people that are our greatest asset.

Individual talent matters, but the connections, communication patterns, and collaborative dynamics between team members determine success more than any single person’s capabilities. The most effective teams aren’t composed of the most talented individuals, but of people who work well together and amplify each other’s strengths.

A bad system will beat a good person every time.

Individual competence and good intentions can’t overcome fundamentally flawed processes or organisational structures. When systems create conflicting incentives, unclear expectations, or impossible constraints, even capable people struggle to succeed. Good people in bad systems become frustrated, whilst average people in good systems accomplish remarkable things.

You can’t inspect quality in—it has to be built in.

Quality comes from improvement of the production process, not from inspection. Good systems prevent defects rather than just catching them. The most effective quality assurance focuses on improving how work gets done, not on finding problems after they occur.

The righter we do the wrong thing, the wronger we become. Therefore, it is better to do the right thing wrong than the wrong thing right.

Russell Ackoff’s insight highlights that effectiveness (doing the right things) must come before efficiency (doing things right). Becoming more efficient at the wrong activities compounds the problem. Focus first on whether you should be doing something before worrying about how well you do it.

Efficiency is doing things right; effectiveness is doing the right things.

Peter Drucker’s classic distinction reminds us that there’s little value in optimising processes that shouldn’t exist in the first place. The greatest risk for managers is the confusion between effectiveness and efficiency. There is nothing quite so useless as doing with great efficiency what should not be done at all.

The constraint determines the pace of the entire system.

In any process or organisation, one bottleneck limits overall performance regardless of how fast other parts operate. If development produces thirty features a week but testing clears only ten, the organisation ships ten. Optimising non-constraint areas looks productive but doesn’t improve system output. Finding and focusing improvement efforts on the true constraints provides the greatest leverage for overall performance gains.

Innovation always demands we change the rules.

When we adopt new approaches that diminish limitations, we must also change the rules that were created to work around those old limitations. Otherwise, we get no benefits from our innovations. As long as we obey the old rules—the rules we originally invented to bypass the limitations of the old system—we continue to behave as if the old limitations still exist.

In God we trust; all others bring data.

Decisions improve when based on evidence rather than assumptions, but data alone doesn’t guarantee good choices. Numbers mislead as easily as they illuminate, especially when they reflect measurement artefacts rather than underlying realities. Data provides a foundation for discussion and decision-making, but wisdom comes from interpreting that data within context.

Every bug you ship becomes ten support tickets.

John Seddon’s ‘failure demand’ reveals how poor quality multiplies work. When you don’t get something right the first time, you generate cascading demand: customer complaints, support calls, bug reports, patches, and rework. It’s always more expensive to fix things after customers find them than to prevent problems in the first place.

Technical debt is like financial debt—a little helps you move fast, but compound interest will kill you.

Strategic shortcuts can accelerate delivery when managed carefully. Taking on some technical debt to meet a critical deadline or test market assumptions is valuable. But unmanaged technical debt accumulates interest through increased maintenance costs, slower feature development, and system brittleness.

The best code is no code at all.

Every line of code written creates obligations—debugging, maintenance, documentation, and ongoing support. Before building something new, the most valuable question is whether the problem needs solving at all, or whether existing solutions already address the need adequately. Code that doesn’t exist can’t have bugs, doesn’t require updates, and never becomes technical debt.

Start without IT. The first design has to be manual.

Before considering software-enabled automation, first come up with manual solutions using simple physical means, like pin-boards, T-cards and spreadsheets. This helps clarify what actually needs to be automated and ensures you understand the process before attempting to digitise it.

Simple can be harder than complex—you have to work hard to get your thinking clean.

Achieving simplicity requires understanding problems deeply enough to eliminate everything non-essential. Complexity masks incomplete understanding or unwillingness to make difficult choices about what matters most. Simple solutions demand rigorous thinking about core requirements, user needs, and essential functionality.

Design is how it works, not how it looks.

Visual aesthetics matter, but they serve the deeper purpose of supporting functionality and user experience. Good design makes complex systems feel intuitive, reduces cognitive load, and guides users towards successful outcomes. When appearance conflicts with usability, prioritising function over form creates better long-term value.

Saying no is more important than saying yes.

Focus emerges from deliberately choosing what not to do rather than just deciding what to pursue. Every opportunity accepted means other opportunities foregone, and attention is always limited. Organisations that try to do everything accomplish nothing well. Strategic success comes from identifying the few things that matter most and declining everything else.

Organisational effectiveness = f(collective mindset).

The effectiveness of any organisation is determined by the shared assumptions, beliefs, and mental models of the people within it. Technical solutions, processes, and structures matter, but they’re all constrained by the underlying collective mindset that shapes how people think about and approach their work.

Technologists who dismiss psychology as ‘soft science’ are ignoring the hardest variables in their systems.

Technical professionals gravitate toward problems with clear inputs, logical processes, and predictable outputs. Psychology feels messy and unquantifiable by comparison. But the human elements—motivation, communication patterns, cognitive biases, team dynamics—determine whether technical solutions succeed or fail in practice.

Code review isn’t about finding bugs—it’s about sharing knowledge.

Whilst catching defects has value, the real benefit of code reviews lies in knowledge transfer, spreading understanding of the codebase, sharing different approaches to solving problems, and maintaining consistency in coding standards. Good reviews help prevent knowledge silos and mentor junior developers.

All estimates are wrong. Some are useful.

Software estimates are educated guesses based on current understanding, not commitments or predictions. They’re useful for planning, prioritising, and making resource allocation decisions, but they shouldn’t be treated as contracts or promises. Use them as tools for discussion and planning, and remember that their primary value is in helping make better decisions.

Security is not a feature you add—it’s a discipline you practise.

Security can’t be bolted on after the fact through penetration testing or security audits alone. It must be considered throughout design, development, and deployment. Security is about creating systems that are resistant to attack by design, not just finding and fixing vulnerabilities after they’re built.

Your users will break your software in ways you never imagined—and they’re doing you a favour.

Real users in real environments expose edge cases, assumptions, and failure modes that controlled testing misses. They use your software in contexts you never considered, with data you never anticipated, and in combinations you never tested. Each break reveals gaps in your mental model of how the system should work.

Refactor before you need to, not when you have to.

Continuous small refactoring prevents code from becoming unmaintainable. When you’re forced to refactor, you’re already behind and under pressure, which leads to rushed decisions and compromised quality. Build refactoring into your regular development rhythm, not as crisis response.
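
As a small illustration of what ‘continuous small refactoring’ can mean in practice (the duplicated price formatting is hypothetical):

// Before: the same formatting logic, copied twice. Harmless today.
const priceA = '£' + (1999 / 100).toFixed(2);
const priceB = '£' + (2500 / 100).toFixed(2);

// After: a five-minute refactor done early, rather than a risky rewrite
// attempted later, under pressure, once the duplication has spread.
const formatPence = (pence) => '£' + (pence / 100).toFixed(2);
console.log(formatPence(1999), formatPence(2500)); // £19.99 £25.00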

If you can’t measure it breaking, you can’t fix it reliably.

Systems need observable failure modes through monitoring, logging, and alerting. Without visibility into system health and failure patterns, you’re debugging blindly and fixing symptoms rather than root causes. Good monitoring tells you not just that something broke, but why it broke and how to prevent it from happening again.
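
A minimal sketch of the idea in JavaScript (the field names are assumptions, not any particular logging standard): record why something failed and with what context, not merely that it failed.

function logFailure(operation, err, context) {
  console.error(JSON.stringify({
    level: 'error',
    operation,                 // what broke
    message: err.message,      // why it broke
    context,                   // the inputs that led here
    at: new Date().toISOString(),
  }));
}

try {
  JSON.parse('not json');      // illustrative failure
} catch (err) {
  logFailure('parse-config', err, { source: 'config.json' });
}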

Knowledge sharing is not cheating—it’s collaborative intelligence.

Technology work has always been collaborative, and online communities represent the democratisation of knowledge sharing. Looking up solutions to common problems isn’t cheating—it’s efficient use of collective wisdom. The key is understanding the solutions found rather than blindly copying them.

Error messages are breadcrumbs, not accusations.

Error messages aren’t personal attacks on competence—they’re valuable clues about what went wrong and how to fix it. Good error messages tell a story about what the system expected versus what it encountered. Learning to read error messages carefully and use troubleshooting data effectively is a crucial skill.

Collaboration is not about sharing tasks—it’s about sharing knowledge.

The value of collaborative work isn’t in the mechanical division of labour—it’s in the knowledge transfer, real-time feedback, and shared problem-solving that occurs. When professionals collaborate effectively, they share different perspectives, catch each other’s mistakes, and learn from each other’s approaches.

The most important skill in technology is knowing when to start over.

Abandoning problematic systems or processes and starting fresh proves more efficient than continuing to patch existing work. When complexity accumulates beyond economical improvement, when foundational assumptions prove flawed, or when requirements shift dramatically, fresh starts offer better paths forward.

Remember: Every expert was once a disaster who kept learning.

Further Reading

Ackoff, R. L. (1999). Re-creating the corporation: A design of organizations for the 21st century. Oxford University Press.

Conway, M. E. (1968). How do committees invent? Datamation, 14(4), 28-31.

Deming, W. E. (2000). Out of the crisis. MIT Press. (Original work published 1986)

Drucker, P. F. (2006). The effective executive: The definitive guide to getting the right things done. HarperBusiness. (Original work published 1967)

Gall, J. (2002). The systems bible: The beginner’s guide to systems large and small (3rd ed.). General Systemantics Press. (Original work published 1975)

Marshall, R. W. (2021). Quintessence: An acme for software development organisations. Falling Blossoms.

Seddon, J. (2019). Beyond command and control. Vanguard Consulting.

How We Broke 40 Million Developers: An Agile Pioneer’s Lament

I weep endless tears for all the folks who have poured so much into such a fruitless pursuit.

Here’s the cruelest irony of our industry: developers become developers because they want to make a difference. They want to solve problems that matter. They want to build things that change how people work and live. They’re drawn to the craft because it has power—real power to transform the world.

And then we gave them Agile.

After 53 years in software development—including working on the practices that became Agile back in 1994—I’ve watched multiple generations of brilliant people get their desire for impact redirected into perfecting processes that make no measurable difference whatsoever.

The Numbers Are Staggering

There are something like 30-45 million software developers worldwide today. Around 90% of them claim to practise Agile in some form. That’s 40 million people who wanted to change the world, now spending their days in fruitless stand-ups and retrospectives.

Forty million brilliant minds. All trying to make an impact. All following processes that prevent them from making any impact at all.

What They Actually Do All Day

Instead of solving hard problems, they estimate story points. Instead of designing elegant systems, they break everything into two-week chunks. Instead of thinking deeply about what users actually need, they manage backlogs of features nobody asked for.

They spend hours in planning meetings for work that gets thrown away. They refine processes that don’t improve outcomes. They attend retrospectives where teams discuss why nothing meaningful changed, then agree to keep doing the same things.

The very people who could advance computing spend their time perfecting ceremonies that have made zero measurable difference to software quality after 23 years of widespread use.

The Evidence of Irrelevance

Here’s what’s particularly damning: every study claiming Agile ‘works’ only compares it to ‘Waterfall’, not to how software was actually built before these formal processes took over. Before the 1990s, most software was built without elaborate frameworks—programmers talked to users, wrote code, fixed bugs, and shipped products.

But here’s the deeper issue: better software was never the aim. The actual aim was better attending to folks’ needs. So measuring software quality improvements misses the point entirely.

Yet after more than 20 years of Agile domination, are we better at attending to people’s needs? Are users getting products and services that genuinely serve them better? Are the real human needs being attended to more effectively?

The evidence suggests not. We have more process, more ceremony, more optimisation of team interactions—but the fundamental disconnect between what people actually need and what gets built remains as wide as ever. The 40 million brilliant minds who wanted to change the world continue to optimise ceremonies instead of deeply understanding and addressing human needs.

The Tragic Waste

Here’s what we lost whilst those 40 million minds were occupied with process optimisation:

The programming languages that were never designed because their potential creators were facilitating stand-ups. The development tools that could have revolutionised productivity? Never built—the inventor was learning story estimation. The elegant solutions to complex problems? Still undiscovered because brilliant minds were busy optimising team velocity.

But to what end? Technical advances matter only insofar as they help us better attend to people’s actual needs. The real tragedy isn’t just losing computational breakthroughs—it’s losing the connection between technical work and human purpose that would make those breakthroughs meaningful.

We’re not talking about progress for progress’s sake. We’re talking about decades of lost focus on using our technical capabilities to solve problems that actually matter to people’s lives.

Meet the Casualties

Sarah became a developer to solve climate change through better energy management software. After 12 years of Agile, she’d become expert at facilitating retrospectives and managing stakeholder expectations. But she’d never been allowed to work on a problem for more than two weeks. Everything she touched got decomposed into user stories before she could understand its true nature. She quit tech in 2020 to become a park ranger.

Marcus had a PhD in computer science and wanted to build compilers that could optimise code in revolutionary ways. His Agile organisation made him a Product Owner instead. He spent 8 years writing acceptance criteria for features whilst his deep technical knowledge gathered dust. When he finally returned to technical work, he discovered the field had advanced without him.

Jennifer tracked her Agile team’s outcomes for 15 years. Despite continuous process improvement, perfect ceremony execution, and high velocity scores, they delivered no better results than before adopting Agile. Fifteen years of expertise in something that made zero difference to anything that mattered.

These aren’t isolated cases. They represent millions of talented people whose desire to make an impact was redirected into elaborate rituals that impact nothing.

How the System Sustains Itself

Here’s how it works: teams practise Agile because everyone says it works. When nothing improves, they assume they need to do Agile better, rather than question whether Agile works at all. Organisations invest millions in Agile coaching not because they have measured its effectiveness, but because the herd is heading that way.

The ceremonies are so time-consuming that they feel important. People spend so much energy perfecting their processes that the processes seem valuable. The effort becomes proof of worth, regardless of results.

Meanwhile, what actually makes software development successful—collaborative relationships, technical skill, good tools, clarity and focus on needs—gets pushed aside for optimisation that optimises nothing.

Every new developer entering the workforce gets dragged into this cul-de-sac immediately. The cycle continues.

The Accidental Monster

The tragedy is that this system emerged from the best of intentions. The original Agile Manifesto signatories were idealistic developers who saw real problems with heavy-handed project management. They genuinely wanted to help their fellow programmers escape documentation-heavy waterfall bureaucracy.

They couldn’t have predicted that their 68-word manifesto would spawn an industry worth billions—certification programmes, consulting empires, tool vendors, conference circuits. They created principles meant to free developers, only to watch them become the foundation for new forms of ceremony and constraint.

There are no villains in this story. The Snowbird signatories mostly still stand by what they wrote. The consultants who built practices around Agile genuinely believed they were helping. Tool makers solved real problems. Managers adopted promising practices. Everyone acted rationally within their own context.

But individual rational choices collectively created something nobody intended: a system that wastes enormous human potential.

Who Actually Benefited

If Agile made no measurable difference to software outcomes, who benefited from its rise? The answer reveals how a well-intentioned movement became a self-perpetuating industry:

Certification organisations created entirely new revenue streams. With 1.5 million certified practitioners, even at modest fees, that’s hundreds of millions in certification revenue alone.

Tool vendors hit the jackpot. Atlassian, whose JIRA holds roughly 40% of the project-management tool market, generated $4.3 billion in revenue in 2024, largely by making Agile workflows feel essential.

Consulting firms built entire practices around ‘Agile transformations’, charging millions for multi-year organisational changes. But here’s the key: consultants have little to no visibility into whether the software actually gets better. They measure entirely different things—their revenues, their career advancement, their recognition as transformation experts.

This explains everything. Consultants can genuinely believe they’re succeeding because they are succeeding at what they actually measure. They’re making money, building reputations, feeling important as change agents. Meanwhile, they’re completely insulated from the metrics that would reveal whether any of it improves software development outcomes.

New job categories emerged with substantial salaries: Scrum Masters averaging £100,000 p.a., Agile Coaches earning even more, all optimising processes that don’t improve the things they claim to optimise.

The system succeeded financially because it served multiple interests simultaneously whilst being almost impossible to disprove. When Agile ‘failed’, organisations needed more training, coaching, or better tools—not less Agile. And the people selling those solutions never had to confront whether the software actually got better.

What Developers Actually Want

Developers didn’t get into this field to facilitate meetings. They didn’t learn to code so they could estimate story points. They didn’t study computer science to manage backlogs.

They wanted to solve problems that matter to real people. They wanted to use their technical skills to make life better, easier, more meaningful for others. The elegance of the code mattered because it served human purposes. The efficiency of the system mattered because it helped people accomplish what they needed to do.

But Agile, for all its talk of ‘customer collaboration’, actually moved developers further away from understanding and serving genuine human needs. Instead of asking ‘How can I solve problems that matter to people?’ they learned to ask ‘How can I optimise our sprint velocity?’

The ceremonies didn’t just waste their technical talents—they broke the vital connection between technical work and human purpose. Forty million brilliant minds didn’t just lose the ability to advance computing—they lost sight of why advancing computing would matter in the first place.

That drive to serve others through code is still there. But Agile channelled it into perfecting processes that prevent developers from ever connecting deeply with the human problems their skills could solve.

The Path Back to Impact

For developers stuck in this system: Your talents aren’t wasted because you’re bad at Agile. They’re wasted because Agile severs the connection between your technical skills and the human problems you wanted to solve. That drive you had to make a difference in people’s lives? It’s still valid. The problems you wanted to solve? They still need solving.

But they won’t be solved in sprint planning meetings. They won’t be solved by better retrospectives. They’ll be solved by reconnecting with the human purposes that drew you to development in the first place—using your skills to genuinely serve people’s needs.

For organisations: Stop measuring process adherence and start measuring actual human impact. Judge teams by how well they solve real problems for real people, not by how well they execute ceremonies. Invest in deep understanding of human needs instead of ever-finer optimisation of collaboration rituals.

For the industry: The next breakthrough that truly matters won’t come from a perfectly facilitated stand-up. It’ll come from someone who deeply understands a human problem worth solving and has the time and space to pursue solutions that actually matter.

The Bitter Truth

Forty million people wanted to make a difference through software. We gave them a system that redirects their energy into processes that make no measurable difference. We took their passion for impact and channelled it into perfecting ceremonies that, after 23 years, still produce no meaningful improvement to software development outcomes.

The advances in computing that could have emerged from those minds—the tools, the techniques, the innovations that could have transformed how software works—we’ll never know what we missed. That potential is gone forever. And the future looks just as bleak.

But we can choose differently now. We can redirect talent towards work that actually matters. We can build systems based on human insight rather than consensus optimisation.

The question is whether we will.

Further Reading

Note: The author invites readers to suggest additional sources that examine the effectiveness and impact of Agile practices on both software development outcomes and human needs. Many studies in this area compare Agile to Waterfall rather than examining whether Agile improved software development relative to, for example, pre-framework approaches.

Beck, K., Beedle, M., van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., … & Thomas, D. (2001). Manifesto for Agile Software Development. Agile Alliance. https://agilemanifesto.org/

Brooks, F. P. (1975). The Mythical Man-Month: Essays on Software Engineering. Addison-Wesley.

DeMarco, T., & Lister, T. (2013). Peopleware: Productive Projects and Teams (3rd ed.). Addison-Wesley.

Norman, D. A. (2013). The Design of Everyday Things: Revised and Expanded Edition. Basic Books.

The author has 53 years of software development experience and created a version of the approach that became known as Agile (more specifically, Jerid, now Javelin). He writes regularly about Agile’s ineffectiveness, albeit to little avail, but persists.

Wankered

Understanding and addressing developer exhaustion in the software industry

In software development, there’s a lot of talk about technical debt, scalability challenges, and code quality. But there’s another debt that’s rarely acknowledged: the human cost. When we are consistently pushed beyond our limits, when the pressure never lets up, when the complexity never stops growing—we become wankered. Completely and utterly exhausted.

This isn’t just about being tired after a long day. This is about the bone-deep fatigue that comes from months or years of ridiculous practices, impossible deadlines, and the constant cognitive load of modern software development.

The Weight of Complexity

Mental Load Overflow

Modern software development isn’t just about writing code. We are system architects, database administrators, DevOps engineers, security specialists, team mates, user experience designers, and people—often all in the same day. The sheer cognitive overhead of keeping multiple complex systems in our minds simultaneously is exhausting.

Every API integration, every third-party service, every microservice adds to the mental model that we must maintain. Eventually, that mental model becomes too heavy to carry.

Context Switching Fatigue

Nothing burns us out faster than constant context switching. One moment we’re debugging a race condition in the payment service, the next we’re in a meeting about user interface changes, then we’re reviewing someone else’s pull request in a completely different part of the codebase.

Each switch requires mental energy to rebuild context, and that energy is finite. By the end of the day, we’re running on empty, struggling to focus on even simple tasks.

The Always-On Culture

Slack notifications at 9 PM. ‘Urgent’ emails on weekends. Production alerts that could technically wait until Monday but somehow never do. The boundary between work and life has dissolved, leaving us in a state of perpetual readiness that prevents true rest and recovery.

The Exhaustion Cycle

Sprint After Sprint

Agile development was supposed to make our work more sustainable, but too often it’s become an excuse for permanent emergency mode. Sprint planning becomes sprint cramming. Retrospectives identify problems that never get addressed because there’s always another sprint starting tomorrow.

The two-week rhythm that should provide structure instead becomes a hamster wheel, with each iteration bringing new pressure and new deadlines.

Technical Debt Burnout

Working with legacy systems day after day takes a psychological toll. When every simple change requires hours of archaeological work through undocumented code, when every bug fix introduces two new bugs, when the system fights back at every turn—the frustration compounds into exhaustion.

The Perfectionism Trap

Software development attracts people who care deeply about their craft. But in an environment where perfection is impossible and deadlines are non-negotiable, that conscientiousness becomes a burden. The gap between what we want to build and what we have time to build becomes a source of constant stress.

How Tired Brains Sabotage Productivity

The Neuroscience of Mental Fatigue

When we’re mentally exhausted, our brains don’t just feel tired—they actually function differently. The prefrontal cortex, responsible for executive functions like planning, decision-making, and working memory, becomes significantly impaired when we’re fatigued.

This isn’t a matter of willpower or motivation. Tired brains literally cannot process complex information as effectively. The neural pathways responsible for holding multiple concepts in working memory become less efficient. Pattern recognition—crucial for debugging and coding—deteriorates markedly.

Cognitive Load and Code Complexity

Software development requires managing enormous amounts of information simultaneously: variable states, function dependencies, user requirements, interpersonal relationships, system constraints, and potential edge cases. When our brains are operating at reduced capacity due to exhaustion, this cognitive juggling act becomes nearly impossible.

We make more logical errors when tired, miss obvious bugs, and struggle to see the bigger picture whilst handling implementation details. The intricate mental models required for complex software architecture simply cannot be maintained when our cognitive resources are depleted.

Decision Fatigue in Development

Every line of code involves decisions: variable names, function structure, error handling approaches, performance trade-offs. A fatigued brain defaults to the path of least resistance, often choosing quick fixes over robust solutions.

Research shows that as mental fatigue increases, decision quality declines markedly. This is why code written during crunch periods often requires extensive refactoring later—our tired brains simply couldn’t evaluate all the implications of each choice.

The Organisational Impact

Productivity Paradox

When we’re exhausted, we’re not just unhappy—we’re less effective. Decision fatigue leads to poor architectural choices. Mental exhaustion increases bugs and reduces code quality. The pressure to deliver faster often results in delivering slower, as technical shortcuts create more work down the line.

Knowledge Flight Risk

When experienced members of our teams burn out and leave, they take irreplaceable institutional knowledge with them. The cost of replacing a senior developer who knows our systems intimately is measured not just in recruitment and onboarding time, but in the months or years of context that walks out the door.

Innovation Drought

Exhausted teams don’t innovate. We survive. When all our mental energy goes towards keeping existing systems running, there’s nothing left for creative problem-solving, quality improvement, or advancing the way the work works.

Sustainable Practices

Realistic Planning

Account for the hidden work: debugging, documentation, code review, deployment issues. Stop treating best-case scenarios as project timelines.

Protect Deep Work

We need uninterrupted blocks of time to tackle complex problems. Open offices and constant communication tools are the enemy of thoughtful software development. Create spaces and times where deep work is possible. (And we’ll get precious little help with that from developers).

Embrace Incrementalism

Not everything needs to be perfect in version one. Not every feature needs to ship this quarter. Sometimes the most sustainable approach is to build 80% of what’s wanted well, rather than 100% of it poorly.

Technical Health Time

Just as athletes need recovery time, codebases need maintenance time. Build technical debt reduction into our planning. Make refactoring a first-class citizen alongside feature development.

Individual Strategies

Boundaries Are Not Optional

Learn to say no. Not to being helpful, not to solving problems, but to the assumption that every problem needs to be solved immediately by any one of us.

Energy Management

Recognise that mental energy is finite. Plan the most challenging work for when we’re mentally fresh. Use routine tasks as recovery time between periods of intense focus.

Continuous Learning vs. Learning Overwhelm

Stay curious, but be selective. We don’t need to learn every new framework or follow every technology trend. Choose learning opportunities that align with career goals and interests, not just industry hype.

Physical Foundation

Software development is intellectual work performed by physical beings. Sleep, exercise, and nutrition aren’t luxuries—they’re professional requirements. Our ability to think clearly depends on taking care of our bodies.

Recognising the Signs

Developer exhaustion doesn’t always look like dramatic burnout. Often it’s subtler:

  • Finding it harder to concentrate on complex problems
  • Feeling overwhelmed by tasks that used to be routine
  • Losing enthusiasm for learning new technologies
  • Increased irritability during code reviews or meetings
  • Physical symptoms: headaches, sleep problems, tension
  • Procrastinating on work that requires deep thinking
  • Feeling disconnected from the end users and purpose of our work

Moving Forward

The goal isn’t to eliminate tiredness from software development—complex cognitive work is inherently demanding. The goal is to make that work sustainable over the long term. (Good luck with that, BTW)

This means building organisations that value our wellbeing not as a nice-to-have, but as a prerequisite for building quality software. It means recognising that the most productive developer is often the one who knows when to stop working. Which in turn invites us to confer autonomy on developers.

Software development will always be challenging. The problems we solve are complex, the technologies evolve rapidly, and the stakes continue to rise. But that challenge can energise us, not exhaust us.

When we’re wankered—truly, deeply tired—we’re not serving our users, our teams, or ourselves well. The most sustainable thing we can do is acknowledge our limits and work within them.

Because the best code isn’t written by the developer who works the longest hours. It’s written by the developer who brings their full attention and energy to the problems that matter most.


If you’re feeling wankered, you’re not alone. This industry has a long way to go in creating sustainable working conditions, but change starts with honest conversations about what we’re experiencing.

Further Reading

Baumeister, R. F., & Tierney, J. (2011). Willpower: Rediscovering the greatest human strength. Penguin Books.

DeMarco, T., & Lister, T. (2013). Peopleware: Productive projects and teams (3rd ed.). Addison-Wesley Professional.

Fowler, M. (2019). Refactoring: Improving the design of existing code (2nd ed.). Addison-Wesley Professional.

Hunt, A., & Thomas, D. (2019). The pragmatic programmer: Your journey to mastery (20th anniversary ed.). Addison-Wesley Professional.

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Maslach, C., & Leiter, M. P. (2022). The burnout challenge: Managing people’s relationships with their jobs. Harvard University Press.

McConnell, S. (2006). Software estimation: Demystifying the black art. Microsoft Press.

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81-97.

Newport, C. (2016). Deep work: Rules for focused success in a distracted world. Grand Central Publishing.

Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257-285.

Winget, L. (2006). It’s called work for a reason: Your success is your own damn fault. Gotham Books.

A New Way of Looking at Software Development

From the transcript of Dr Casey Morgan’s controversial presentation at CodeCon 2025

The auditorium buzzed with anticipation as Dr Casey Morgan stepped up to the presentation platform. Around her, 500 of the world’s top developers had just finished the morning coffee break, many still discussing their current projects using their familiar AST toolchains—some clicking through visual node editors, others using drag-and-drop tree builders to show off recent work.

‘Thank you for joining me today,’ Casey began, gesturing to dismiss the tree structures that had been displaying on the main screen. ‘I’m here to propose something… unconventional. A fundamentally different way to think about code representation that I believe could offer some unique advantages.’

She paused, scanning the faces of developers who had grown up building programmes by directly assembling syntax trees—clicking to add nodes, dragging to restructure branches, using visual editors, commands and APIs.

‘I call it “textual programming”.’

A wave of puzzled murmurs rippled through the audience. In the front row, Marcus Chen, lead architect at Distributed Dynamics, frowned slightly.

The Unusual Proposal

Casey’s concept was unlike anything the programming community had encountered: instead of building programmes by manipulating AST structures through visual node editors and drag-and-drop interfaces, programmes could be represented as linear sequences of human-readable symbols.

‘Imagine,’ Casey said, projecting a strange sequence onto the main display:

function calculateFibonacci(n) {
    if (n <= 1) return n;
    return calculateFibonacci(n-1) + calculateFibonacci(n-2);
}

‘This linear representation would encode the same semantic meaning as our AST structures, but as a sequential stream of characters that developers would… type directly.’

The audience stared at the bizarre notation with growing amusement.

The Immediate Concerns

Sarah Kim, Senior AST Engineer at MindMeld Corp: ‘Dr Morgan, I’m struggling to understand the practical implementation. How would developers ensure structural integrity? When I use a visual node editor, I literally cannot create a malformed tree—the interface simply won’t allow invalid connections. But with this… character stream… what prevents someone from typing complete nonsense?’

Casey nodded. ‘That’s certainly a challenge. The system would need to constantly re-parse these character sequences and provide error feedback when the text doesn’t represent a valid tree structure.’

The audience shifted uncomfortably.

Marcus Chen: ‘Wait, you’re suggesting a system where the code could be in an invalid state? Where developers could accidentally break their programme just by typing the wrong character? That seems like a massive degradation from our current reliability.’

Casey: ‘I understand that sounds concerning, but consider this: what if the ability to work in temporarily invalid states actually enables more fluid thinking? Sometimes you need to break something before you can rebuild it better. Current tree editors force you to maintain validity at every step, which might constrain exploration. Interestingly, there were early experiments with syntax-directed programming environments in the 1980s that enforced similar structural constraints, and environments like Mjølner for the BETA language that provided more structure-aware development tools, but they never achieved the fluidity that our modern AST tools provide. Perhaps the pendulum swung too far towards structural rigidity, and text could offer a middle ground.’

Dr James Wright, Director of the Innovation Institute: ‘But Casey, you’re mischaracterising our current tools. Modern AST editors do support invalid intermediate states—they make them visible and actionable in ways text never could. When I’m restructuring a complex tree, the IDE shows me exactly which nodes are problematic, suggests valid completions, and even maintains partial compilation contexts. I can experiment freely whilst getting real-time feedback about structural issues. Your text approach would lose all of that sophisticated error guidance and replace it with… what? Cryptic parser error messages? We’ve already solved the flexibility problem without sacrificing the safety and intelligence of our tools.’

Sarah Kim: ‘And think about what else you’d be throwing away! Our semantic-aware merge algorithms that automatically resolve conflicts at the meaning level, real-time type inference that shows you the implications of every change, automated dependency tracking, intelligent refactoring that understands program semantics—all of that would be impossible with linear character sequences. You’d be asking developers to manually track imports, manually resolve merge conflicts, and manually verify type safety. It’s like proposing we go back to manual memory management when we have garbage collection.’

Unknown developer: ‘Not to mention accessibility. Our structure-aware screen readers work beautifully with AST nodes, providing rich semantic information to visually impaired developers. Text files would force them back to listening to character-by-character descriptions of syntax symbols. And what about internationalisation? AST nodes work universally, but your text files would tie us to specific character encodings and syntactic conventions.’

Marcus: ‘The security implications alone are staggering. Text files could contain hidden Unicode characters, be corrupted by encoding issues, or have malicious content inserted between visible characters. Our AST verification systems prevent all of that. And the environmental cost—think about all the redundant parsing and recompilation. Text would waste enormous amounts of computing resources that our direct tree manipulation avoids entirely.’

Dr Wright: ‘I’m concerned about the cognitive overhead. When I’m building a complex algorithm, I can see the entire tree structure, drag nodes around, visualise the flow. How would anyone comprehend programme structure from a linear sequence of characters?’

Casey: ‘That’s where it gets interesting—linear text might reveal different patterns than tree visualisation. You might notice repetitive structures, common sequences, or algorithmic patterns that are harder to see when nodes are spatially distributed. The constraint of linearity could force a different kind of structural thinking.’

Casey remained calm, though the room’s scepticism was palpable. ‘The idea is that developers would develop familiarity with common textual patterns. They’d learn to “read” the structure from the character sequences.’

The Growing Scepticism

As the session continued, sceptical questions alternated with Casey’s claimed benefits:

Rapid Prototyping: ‘Casey, you mentioned quick sketching, but I can prototype by dragging a few function nodes together and see exactly what I’m building as I construct it. Why would typing individual characters be faster than visual construction?’

Version Control: ‘Instead of tracking AST transformations, we could diff character sequences directly. Imagine seeing exactly which symbols changed between versions—a completely new form of change visualisation.’

Universal Accessibility: ‘Text could be manipulated with the most basic tools imaginable. No specialised AST tooling required—potentially opening programming to entirely new populations who never learned tree manipulation interfaces, command-line utilities, or visual node editors.’

Cognitive Revolution: ‘Linear representation might unlock different types of thinking. Whilst AST commands encourage procedural construction, text could promote holistic algorithmic visualisation—potentially revealing new problem-solving approaches.’

Sara Kumar, Independent Developer: ‘Casey, this is mind-blowing! But I’m struggling to visualise the workflow. How would developers navigate these linear sequences? Our current AST tooling is so diverse and powerful—whether through tree views, command pipelines, or node graphs—how would you achieve similar precision with text?’

Casey’s eyes lit up. ‘That’s where it gets really interesting. Navigation could be character-based, word-based, or even pattern-based. Imagine search systems that find textual patterns across codebases—no need for complex tree queries or specialised AST search interfaces!’

The Mounting Objections

The questions grew more challenging as the session progressed.

Dr Wright: ‘Let’s talk about collaboration. When my team works together, we can see each other manipulating the same tree in real-time, pointing to specific nodes, discussing structure visually. How would that work with linear text?’

Marcus: ‘And error prevention—our IDEs guide us through valid tree construction. They suggest appropriate node types, validate connections, prevent impossible structures. Text systems would need to replicate all of that functionality whilst being fundamentally less intuitive.’

Sarah Kim: ‘Plus, there’s the execution efficiency issue. When I modify a node in my tree, the running programme updates instantly—real-time incremental compilation means our executables are always synchronised with the current AST state. With text, you’d need to reparse and recompile every time you make a change. That seems incredibly inefficient.’

Dr Wright: ‘And consider something as basic as cut and paste. When I copy an AST fragment, I’m copying a complete, semantically valid tree structure with all its type information and metadata. The IDE ensures I can only paste it in locations where it makes sense. With text, you’d be copying… character sequences? With no understanding of structure or validity? You could accidentally paste a function definition in the middle of an expression.’

Unknown developer: ‘But Casey, consider the sheer inefficiency. When I create that fibonacci function, I click “add function node”, type “calculateFibonacci”, set the parameter “n”, drag in a conditional, and set the values. With your text system, developers would have to manually type “function”, all the braces, “if”, “return”, parentheses, semicolons—why type all that structural syntax when the interface can handle it automatically?’

Casey: ‘Well, the redundancy does seem excessive when you put it that way…’

Sarah Kim: ‘The debugging implications are staggering. When something goes wrong, I can visually trace through my tree, see the data flow, identify problem nodes. You’re proposing we debug by… reading character sequences?’

Unknown voice from the back: ‘Dr Morgan, this feels like proposing assembly language when we have high-level visual programming tools. What’s the actual benefit?’

The Fundamental Questions

As the hour progressed, the audience’s concerns crystallised around core issues:

Safety: Text-based programming would introduce countless opportunities for errors that were literally impossible with guided tree construction.

Productivity: Every task that was currently visual and intuitive would become abstract and error-prone.

Learning Curve: New developers would need to memorise syntax rules instead of learning through visual exploration.

Tool Complexity: Text editors would need to recreate all the intelligence of current AST tools whilst being fundamentally less capable.

Maintenance: Reading and understanding existing code would become dramatically more difficult without visual tree representation.

Sara Kumar, Independent Developer: ‘Casey, I have to ask—have you actually tried building a complex system this way? It sounds like you’d spend more time debugging syntax errors than solving actual problems.’

Casey smiled weakly. ‘The learning curve would certainly be steep initially.’

The Uncomfortable Reality

Towards the end of the session, the questions became more direct.

Dr Maria Santos, Education Director at Code Academy: ‘Casey, we teach programming through visual tree building because it’s intuitive—students can see programme structure immediately. You’re proposing we replace that with… memorising character sequences? How would that possibly be better for learning?’

Casey: ‘I wonder if visual-first education might actually be limiting in some ways. When students start with trees, they think in terms of discrete components. Linear text might encourage them to think about flow, narrative, the sequential logic of computation. Different mental models could lead to different insights.’

Several audience members shook their heads in disbelief.

The Uncomfortable Questions

As the session wound down, the questions turned more philosophical.

Marcus: ‘Casey, I need to understand your thesis here. You’ve shown us a system that would make programming more error-prone, harder to visualise, more difficult to debug, and require extensive memorisation of arbitrary syntax rules. You’re asking us to give up immediate visual feedback for… what exactly?’

Sarah Kim: ‘Every advantage you’ve mentioned—rapid prototyping, version control, collaboration—we already have superior solutions for. Our visual systems are faster, safer, and more intuitive. I genuinely don’t understand the appeal of this approach.’

Dr Wright: ‘And the security implications worry me. Our current tree validators ensure code integrity. Text files could be easily corrupted or maliciously modified. How would text-based systems prevent tampering?’

Unknown developer: ‘Dr Morgan, with respect, this sounds like a needlessly complex solution to problems we’ve already solved. Why would anyone choose to make programming harder?’

The Final Challenge

As the session neared its end, Marcus Chen stood up with a bemused expression.

‘Casey, I want to understand something. You’ve proposed replacing our visual, guided, error-preventing development environment with a system based on memorising syntax rules and typing linear character sequences. A system where malformed programmes are possible, where structure is invisible, where collaboration becomes awkward text sharing.

‘I’m trying to find the upside here, but every supposed benefit seems to be something we already do better with visual tree manipulation. The downsides, however, are enormous: syntax errors, reduced productivity, harder debugging, steeper learning curves, and cognitive overhead.

‘So my question is simple: other than academic curiosity, why would any rational developer choose this approach?’

Casey looked out at the audience—hundreds of developers who could shape logic with drag-and-drop simplicity, who collaborated through shared visual workspaces, who had never known the frustration of syntax errors or the cognitive load of maintaining mental models of invisible structure.

‘Perhaps,’ they said quietly, ‘there are insights that only come from constraint. Maybe working with a more limited medium forces different kinds of thinking. Or maybe…’

They paused, seeing the politely sceptical faces.

‘Maybe you’re right. Maybe this is just an interesting academic curiosity with no practical value.’

Epilogue

Dr Morgan’s presentation ended to a scattering of unconvinced murmurs. Whilst their research into ‘textual programming’ generated some academic discussion amongst theoretical computer scientists, the broader development community found the proposal risible.

A few independent researchers built experimental text editors and basic parsers, mostly to satisfy their curiosity about this unusual approach. Most found the experience frustrating and unproductive—exactly as the CodeCon audience had predicted.

The general consensus was that Dr Morgan had demonstrated an interesting thought experiment about alternative representations, but nothing that could compete with the efficiency, safety, and intuitiveness of direct tree manipulation.

Whether textual programming represented a misguided approach or simply an academic exercise remained unclear. What was certain was that the development community saw no compelling reason to abandon their sophisticated, visual, error-preventing tools for the apparent chaos of linear character sequences.

The revolution, it seemed, would have to wait for more compelling advantages.


Dr Casey Morgan continues their research into alternative programming paradigms at the Institute for Computational Archaeology. Their upcoming paper, ‘Linear Text as Code Representation: A Feasibility Study’, is expected to conclude that whilst technically possible, textual programming offers no significant advantages over current tree-based development methodologies.

Further Reading

Baxter, I. D., Yahin, A., Moura, L., Sant’Anna, M., & Bier, L. (1998). Clone detection using abstract syntax trees. In Proceedings of the International Conference on Software Maintenance (pp. 368-377). IEEE.

Fluri, B., Würsch, M., Pinzger, M., & Gall, H. C. (2007). Change distilling: Tree differencing for fine-grained source code change extraction. IEEE Transactions on Software Engineering, 33(11), 725-743. https://doi.org/10.1109/TSE.2007.70731

Kay, A. (1993). The early history of Smalltalk. ACM SIGPLAN Notices, 28(3), 69-95. https://doi.org/10.1145/155360.155364

Klint, P., van der Storm, T., & Vinju, J. (2009). RASCAL: A domain specific language for source code analysis and manipulation. In Proceedings of the 9th IEEE International Working Conference on Source Code Analysis and Manipulation (pp. 168-177). IEEE.

Madsen, O. L., Møller-Pedersen, B., & Nygaard, K. (1993). Object-oriented programming in the BETA programming language. Addison-Wesley.

Teitelbaum, T., & Reps, T. (1981). The Cornell program synthesizer: A syntax-directed programming environment. Communications of the ACM, 24(9), 563-573. https://doi.org/10.1145/358746.358755

The Secret Career Advantage Most Developers Ignore

Why understanding foundational principles could be your biggest competitive edge

Whilst most developers chase the latest frameworks and cloud certifications, there’s a massive career opportunity hiding in plain sight: foundational knowledge that 90% of your peers will never touch.

The developers who understand systems thinking, team dynamics, and organisational behaviour don’t just write better code—they get promoted faster, lead more successful projects, and become indispensable to their organisations. Here’s why this knowledge is your secret weapon.

The Opportunity Gap Is Massive

Walk into any tech company and you’ll find dozens of developers who can implement complex algorithms or deploy microservices. But try to find someone who understands why projects fail, how teams actually work, or how to think systematically about performance bottlenecks. You’ll come up empty.

This creates an enormous opportunity. When everyone else is fighting over who knows React best, you can differentiate yourself by understanding why most React projects fail. Whilst others memorise API documentation, you can diagnose the organisational problems that actually slow teams down.

The knowledge gap is so wide that basic competency in these areas makes you look like a genius.

You’ll Solve the Right Problems

Most developers optimise locally—they’ll spend weeks making their code 10% faster whilst completely missing that the real bottleneck is a manual approval process that batches work for days. Understanding systems thinking (Deming, Goldratt, Ackoff) means you’ll focus on the constraints that actually matter.

I’ve watched developers become heroes simply by identifying that the ‘performance problem’ wasn’t in the database—it was in the workflow. Whilst everyone else was arguing about indices, they traced the real issue to organisational design. Guess who got the promotion?

When you understand flow, variation, and constraints, you don’t just fix symptoms—you solve root causes. This makes you dramatically more valuable than developers who can only optimise code.
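
A few lines of code make Goldratt’s point vividly. This is a minimal sketch, assuming a made-up delivery pipeline: the stage names and weekly capacities are invented, and throughput of a serial pipeline is taken as the capacity of its slowest stage.

    // Theory of Constraints in miniature: the pipeline's throughput is
    // whatever its slowest stage can handle (rates in items per week).
    const stages: Record<string, number> = {
      coding: 40,
      codeReview: 25,
      manualApproval: 5, // the constraint
      deployment: 30,
    };

    const throughput = (s: Record<string, number>): number =>
      Math.min(...Object.values(s));

    console.log(throughput(stages));                            // 5
    console.log(throughput({ ...stages, coding: 80 }));         // still 5
    console.log(throughput({ ...stages, manualApproval: 20 })); // 20

Doubling the coding rate, the classic local optimisation, changes nothing; relieving the approval constraint quadruples throughput.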

You’ll Predict Project Outcomes

Read The Mythical Man-Month, Peopleware, and The Design of Everyday Things, and something magical happens: you develop pattern recognition for project failure. You’ll spot the warning signs months before they become disasters.

Whilst your peers are surprised when adding more developers makes the project slower, you’ll know why Brooks’ Law kicks in. When others are confused why the ‘obviously superior’ technical solution gets rejected, you’ll understand the human and organisational factors at play.
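
A back-of-the-envelope sketch shows why (the team sizes here are arbitrary): pairwise communication paths grow quadratically with headcount, so each new developer adds a little capacity and a lot of coordination overhead.

    // Brooks' Law intuition: a team of n people has n(n-1)/2 possible
    // pairwise communication paths.
    const paths = (n: number): number => (n * (n - 1)) / 2;

    for (const n of [3, 5, 10, 20]) {
      console.log(`${n} developers -> ${paths(n)} communication paths`);
    }
    // 3 -> 3, 5 -> 10, 10 -> 45, 20 -> 190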

This predictive ability makes you invaluable for planning and risk management. CTOs love developers who can spot problems early instead of just reacting to crises.

You’ll Communicate Up the Stack

Most developers struggle to translate technical concerns into business language. They’ll say ‘the code is getting complex’ when they should say ‘our development velocity will decrease by 40% over the next six months without refactoring investment’.

Understanding how organisations work—Drucker’s insights on knowledge work, Conway’s Law, how incentive systems drive behaviour—gives you the vocabulary to communicate with executives. You’ll frame technical decisions in terms of business outcomes.

This communication ability is rocket fuel for career advancement. Developers who can bridge technical and business concerns become natural candidates for technical leadership roles.

You’ll Design Better Systems

Christopher Alexander’s Notes on the Synthesis of Form isn’t just about architecture—it’s about how complex systems emerge and evolve. Understanding these principles makes you better at software architecture, API design, and system design interviews.

You’ll build systems that work with human organisations instead of against them. You’ll design APIs that developers actually want to use. You’ll create architectures that can evolve over time instead of calcifying.

Whilst other developers create technically impressive systems that fail in practice, yours will succeed because they account for how humans and organisations actually behave.

You’ll Avoid Career-Limiting Mistakes

Reading Peopleware could save your career. Understanding that software problems are usually people problems means you won’t waste months on technical solutions to organisational issues. You won’t join dysfunctional teams thinking you can fix them with better code.

You’ll recognise toxic work environments early and avoid getting trapped in death-march projects. You’ll understand which technical initiatives are likely to succeed and which are doomed by organisational realities.

This knowledge acts like career insurance—you’ll make better decisions about which companies to join, which projects to take on, and which battles to fight.

The Learning Investment Pays Exponentially

Here’s the beautiful part: whilst everyone else is constantly relearning new frameworks, foundational knowledge compounds. Understanding team dynamics is just as valuable in 2025 as it was in 1985. Systems thinking principles apply regardless of whether you’re building web apps or AI systems.

Spend 40 hours reading Peopleware, The Mythical Man-Month, and learning about constraints theory, and you’ll use that knowledge for decades. Compare that to spending 40 hours learning the latest JavaScript framework that might be obsolete in two years.

The ROI on foundational knowledge is massive, but almost no one invests in it.

The Joy of True Mastery

There’s something else most developers miss: the intrinsic satisfaction of developing real mastery. Pink (2009) identified mastery as one of the core human motivators—the deep pleasure that comes from getting genuinely better at something meaningful.

Learning React hooks gives you a brief dopamine hit, but it’s shallow satisfaction. You’re not mastering anything fundamental—you’re just memorising another API that will change next year. There’s no lasting sense of growth or understanding.

But learning to think systematically about complex problems? Understanding how teams and organisations actually function? Grasping the deep principles behind why some software succeeds and others fail? That’s true mastery. It changes how you see everything.

You’ll find yourself analysing problems differently, spotting patterns everywhere, making connections between seemingly unrelated domains. The knowledge becomes part of how you think, not just what you know. This kind of learning is intrinsically rewarding in a way that framework tutorials never are.

How to Build This Advantage

Start with the classics:

  • The Mythical Man-Month – Brooks (1995)
  • Peopleware – DeMarco & Lister (2013)
  • The Design of Everyday Things – Norman (2013)
  • Notes on the Synthesis of Form – Alexander (1964)
  • The Goal – Goldratt & Cox (2004)
  • The Effective Executive – Drucker (2007)

Apply immediately:

Don’t just read—look for these patterns in your current work. Practise diagnosing organisational problems, identifying constraints, predicting project outcomes.

Share your insights:

This isn’t about positioning yourself or impressing managers—it’s about thinking aloud, finding like-minded peers, and building mental muscle memory. Writing and teaching help turn fuzzy understanding into clear principles, which deepens your grasp of the material.

Write to clarify your own thinking. When you read about Conway’s Law, don’t just nod along—write about how you’ve seen it play out in your own teams. Trying to explain why your microservices architecture mirrors your organisational structure forces you to really understand the principle. The act of writing reveals gaps in your understanding and solidifies genuine insights.

Teach to expose what you don’t know. Explaining systems thinking to a colleague immediately shows you which parts you actually understand versus which parts you’ve just memorised. Teaching helps to develop intuitive explanations, real-world examples, and practical applications. You’ll often discover you understand concepts less well than you thought.

Build pattern recognition through articulation. Each time you write about a problem through the lens of Peopleware or analyse a workflow using the Theory of Constraints, you’re training your brain to apply these frameworks automatically. Writing about the patterns makes them second nature—mental muscle memory that kicks in when you encounter similar situations.

Create your own case studies. Document your experiences applying these principles. “How I used Goldratt’s Theory of Constraints to diagnose our deployment bottleneck” isn’t just content for others—it’s also cognitive practice. You’re building a library of patterns that your brain can reference automatically.

Think through problems publicly. Whether it’s a blog post, internal wiki, or even just detailed notes, working through organisational problems using foundational frameworks trains your mind to see systems, constraints, and human factors automatically. The more you practise applying these lenses, the more natural they become.

The goal is developing intuitive expertise—reaching the point where you automatically think about team dynamics when planning projects, or instinctively spot organisational dysfunction. This cognitive muscle memory is what separates developers who’ve read the books from those who’ve internalised the principles.

Connect the dots:

Use this knowledge to explain why projects succeed or fail. Make predictions. Build credibility as someone who understands the bigger picture.

The Secret Is Out

The tragedy of developer education is that we’re taught to optimise for looking productive whilst systematically avoiding the knowledge that would make us actually productive. Organisations reward visible coding whilst discouraging the learning that would prevent project failures.

But this creates opportunity. Whilst everyone else chases the same technical skills, you can build knowledge that’s both more valuable and more durable.

The secret career advantage isn’t learning the latest framework—it’s understanding the timeless principles that determine whether software projects succeed or fail.

Most developers will never figure this out. But now you know.

Ready to build your secret advantage? Pick one foundational book, or even just a précis or summary, and start reading today. Your future self will thank you.

Further Reading

Ackoff, R. L. (1999). Ackoff’s best: His classic writings on management. John Wiley & Sons.

Alexander, C. (1964). Notes on the synthesis of form. Harvard University Press.

Brooks, F. P. (1995). The mythical man-month: Essays on software engineering (Anniversary ed.). Addison-Wesley Professional.

Conway, M. E. (1968). How do committees invent? Datamation, 14(4), 28-31.

DeMarco, T., & Lister, T. (2013). Peopleware: Productive projects and teams (3rd ed.). Addison-Wesley Professional.

Deming, W. E. (2000). Out of the crisis. MIT Press. (Original work published 1986)

Drucker, P. F. (2007). The effective executive: The definitive guide to getting the right things done. Butterworth-Heinemann. (Original work published 1967)

Goldratt, E. M., & Cox, J. (2004). The goal: A process of ongoing improvement (3rd rev. ed.). North River Press.

Marshall, R. W. (2021). Quintessence: An acme for software development organisations. Leanpub.

Norman, D. A. (2013). The design of everyday things (Revised and expanded ed.). Basic Books.

Pink, D. H. (2009). Drive: The surprising truth about what motivates us. Riverhead Books.

Seddon, J. (2008). Systems thinking in the public sector: The failure of the reform regime… and a manifesto for a better way. Triarchy Press.

Senge, P. M. (2006). The fifth discipline: The art and practice of the learning organisation (Revised ed.). Random House Business Books.

Tribus, M. (1992). The germ theory of management. SPC Press.

A Conversation About John Gall

Yesterday, I found myself in a fascinating conversation with a group of software developers who seemed genuinely troubled by something they’d recently encountered: the writings of John Gall. But troubled in the sense of disagreement—they weren’t convinced by his profound observations about complex systems. Their pushback was immediate and visceral, and it brought back memories of my own encounter with Gall’s ideas—and with the man himself.

I had the remarkable privilege of meeting John Gall at a Gilbfest some years ago, shortly before his passing. Speaking with him in person added layers to his written insights that I’m still unpacking.

Who Was John Gall?

John Gall was a paediatrician and leading systems theorist who wrote Systemantics in 1975 (later republished as The Systems Bible). Whilst not a technologist, his observations about how complex systems behave have proven remarkably prescient in our digital age. His most famous principle, known as Gall’s Law, states:

‘A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.’

Meeting the Man Behind the Ideas

What struck me most about meeting John Gall in person was how his demeanour perfectly embodied his systems thinking. He had the quiet calm of someone who had spent decades observing patterns others missed, combined with an almost mischievous delight in pointing out the gap between how we think systems should work and how they actually do.

In conversation, he was remarkably humble about his insights—which somehow made them more powerful. He didn’t present his observations as revolutionary discoveries but as simple truths that were hiding in plain sight. It was this quality that made his ideas so compelling and, I now realise, so disturbing to practitioners who encounter them.

There was something almost subversive about how he discussed complex systems. Not in a destructive way, but in the sense that he was gently undermining assumptions we didn’t even know we held. Talking with him felt like having someone point out that the emperor’s new clothes were, indeed, invisible—but doing so with such kindness that you couldn’t help but laugh at your own blindness.

Missing the Lineage

I’m hardly surprised that developers balk at John Gall’s insights. Developers seem woefully ignorant of antecedents—Deming, Ackoff, Goldratt, Seddon, Capers Jones, and others who spent decades studying how complex systems actually behave.

Gall wasn’t working in isolation. He was part of a tradition of people who looked at systems—manufacturing systems, organisational systems, quality systems—and noticed patterns. The software industry acts like it invented complexity, but these insights about how systems fail and succeed go back generations.

When you don’t know the lineage, Gall’s observations can seem like random provocations rather than hard-won wisdom about the nature of complex systems.

The Conversation

Gall himself was exactly what you’d expect from someone who had spent decades watching systems behave in ways their creators never intended. Quietly amused. Not trying to convince anyone of anything. Just sharing what he’d noticed.

The developers had plenty of counterarguments. Successful complex designs. Modern tools that make planning work better. Their own experience building systems that succeeded because of careful architecture.

But then one of them started telling a story about a ‘temporary’ script that had become the backbone of their production system. Another mentioned the beautiful enterprise architecture that never quite worked as designed. A third talked about the quick prototype that somehow got scaled to millions of users.

We’ve all been there.

Gall wasn’t anti-planning or anti-design. He was just honest about what he observed. Complex systems that work have histories. They started somewhere simpler. They grew. They adapted. The ones designed as complete, complex systems from day one… well, there aren’t many success stories there. I can vouch.

Unix started simple. The web started simple. Git started with Linus Torvalds scratching an itch.

Even our engineering principles acknowledge this. KISS exists because complexity kills systems.

The developers weren’t wrong about their successes. But their successes might tell a different story than they think. What if their well-planned complex systems succeeded not because of the planning, but because they were good at adapting when reality differed from the plan? What if their individual wins don’t contradict the broader pattern?

I’m not trying to convince anyone. Gall’s insights either match what you’ve seen or they don’t.

But it’s worth asking: when you look at the systems that actually endured in your career—the ones still running years later—how many started complex? How many started simple and grew?

The pattern is there if you want to see it.

Or not. Systems will keep teaching us, either way.

Further Reading

Gall, J. (2002). The systems bible: The beginner’s guide to systems large and small (3rd ed.). General Systemantics Press. (Original work published 1975)

Alexander, C. (1977). A pattern language: Towns, buildings, construction. Oxford University Press.

Brooks, F. P. (1995). The mythical man-month: Essays on software engineering (Anniversary ed.). Addison-Wesley. (Original work published 1975)

Constantine, L. L. (1995). Constantine on peopleware. Yourdon Press.

DeMarco, T., & Lister, T. (2013). Peopleware: Productive projects and teams (3rd ed.). Addison-Wesley. (Original work published 1987)

Raymond, E. S. (1999). The cathedral and the bazaar: Musings on Linux and open source by an accidental revolutionary. O’Reilly Media.

Weinberg, G. M. (2001). An introduction to general systems thinking (Silver anniversary ed.). Dorset House. (Original work published 1975)

The Certainty Trap: How Cultures Construct Absolute Truth in a World Where None Exists

Here’s the fundamental paradox of human existence: we desperately need definitive answers in a reality where no such answers could ever exist. We crave certainty about ultimate questions—the nature of reality, the purpose of existence, the right way to live—but we inhabit a world where objectivity is not just difficult to achieve but conceptually impossible. Knowledge requires a knower, and knowers are always situated somewhere, with particular perspectives that shape everything they can understand.

This creates one of the most fascinating phenomena in human culture: entire societies that construct elaborate systems for generating absolute certainty about questions that have no absolute answers. In cultures steeped in religious tradition, you’ll encounter something remarkable: people who speak with unwavering conviction about ultimate truths. They don’t hedge, qualify, or express doubt about moral reality, divine purpose, or the fundamental structure of existence. They know—with the kind of definitiveness that makes secular observers squirm—exactly what is true and what is false.

But here’s what makes this phenomenon so philosophically unsettling: there is no objective truth for them to be certain about. No view from nowhere. No neutral ground. No ultimate perspective that transcends human situatedness. The definitive answers that these cultures provide with such confidence are constructions—sophisticated, socially reinforced, emotionally satisfying constructions—but constructions nonetheless.

The Architecture of Absolute Conviction

Religious societies don’t just stumble into certainty. They build elaborate systems to generate and maintain it. Sacred texts become unquestionable sources of truth rather than historical documents written by particular people in particular contexts. Theological interpretations crystallise into doctrine. Community practices reinforce shared perspectives until they feel like natural facts rather than cultural agreements.

But you don’t need traditional religion to see this pattern. Consider the Agile software development community—a thoroughly secular, technical culture that exhibits all the hallmarks of faith-based certainty. They have their sacred text (the Agile Manifesto), their prophets (Kent Beck, Martin Fowler), their orthodoxies and heresies, their ritualistic practices (daily standups, retrospectives, sprint planning), and most importantly, their unshakeable conviction that they’ve discovered the one true way to build software.

Watch how this works in practice. An Agile advocate doesn’t say ‘our particular approach to software development, based on our interpretation of certain principles within our cultural context, seems to work well for many types of projects.’ They say ‘Agile is how software should be built.’ The perspectival nature of their knowledge—the fact that it emerged from specific people solving specific problems in specific contexts—gets erased through confident proclamation about universal principles.

This isn’t accidental. Religious systems are remarkably effective at transforming situated, culturally constructed viewpoints into what feel like universal, eternal truths. The community member experiences their beliefs not as one possible way of organising reality amongst many, but as reality itself.

The Social Construction of the Sacred

What makes religious certainty so powerful is precisely what makes it so philosophically problematic: it’s deeply social. When your family, neighbours, spiritual leaders, and intellectual authorities all share the same fundamental picture of reality—when that picture gets reinforced daily through ritual, story, and communal practice—it gains a solidity that individual reflection could never achieve.

The Agile community demonstrates how this works in secular contexts. Attend any Agile conference, join any Scrum team, participate in any DevOps initiative, and you’ll find yourself immersed in a culture where certain beliefs are simply givens. Everyone knows that cross-functional teams are better than specialised roles. Everyone agrees that working software matters more than comprehensive documentation. Everyone understands that responding to change trumps following a plan. These aren’t treated as contextual preferences or cultural choices—they’re treated as discovered truths about the nature of effective software development.

But this social reinforcement doesn’t make the beliefs more objectively true. It just makes them feel more true. The Agile practitioner’s confidence comes not from having transcended perspective but from being so thoroughly embedded in a particular perspective that alternatives become literally unthinkable. When your entire professional network speaks the same language, attends the same conferences, reads the same thought leaders, and practises the same rituals, dissenting views don’t just seem wrong—they seem like apostasy.

Consider how religious communities handle doubt or alternative viewpoints. They’re not typically engaged as legitimate challenges to explore but as temptations to resist, errors to correct, or signs of spiritual weakness. The system is designed to maintain certainty, not to test it against the possibility that no ultimate certainty exists.

The Agile community exhibits identical patterns. Suggest that some projects might benefit from more upfront planning and comprehensive documentation, and you’ll be met not with curious inquiry but with correction about why you ‘don’t understand’ Agile principles. Point out that specialised roles might be more effective than cross-functional teams for certain types of complex work, and you’ll be dismissed as having ‘waterfall thinking.’ Question whether two-week sprints are optimal for all types of development, and you’ll be told you’re ‘not doing Agile right.’ The community has developed sophisticated mechanisms for deflecting challenges to its core certainties whilst maintaining the illusion of being empirically driven and pragmatic.

The Neuroscience of Constructed Truth

Modern brain research reveals just how thoroughly our minds construct rather than simply receive reality. We don’t perceive the world and then add interpretation—perception itself is interpretation, shaped by expectations, prior beliefs, and cultural training. Our brains are constantly filling in gaps, making predictions, and filtering information according to existing frameworks.

Religious cultures provide incredibly powerful frameworks for this interpretive process. They offer comprehensive stories about the nature of reality, clear categories for organising experience, and strong emotional investments in particular ways of seeing. These frameworks become so fundamental to how believers process information that contradictory evidence gets filtered out, reinterpreted, or simply not perceived at all.

The result is experiential certainty about truths that exist only within the interpretive system that generates them. The believer doesn’t feel like they’re constructing truth—they feel like they’re discovering it. But the ‘discovery’ is actually the successful operation of a meaning-making system that transforms cultural artefacts into felt reality.

The Paradox of Revealed Truth

Religious systems solve the problem of epistemic uncertainty through claims to revealed truth. God, they assert, has provided direct access to ultimate reality through scripture, prophecy, or mystical experience. This revelation supposedly transcends the limitations of human perspective, offering the view from somewhere that secular knowledge cannot achieve.

The Agile community has created its own version of revealed truth through the Agile Manifesto—a document that’s treated not as one group’s opinions about software development circa 2001, but as a timeless discovery of fundamental principles. The seventeen signatories aren’t presented as particular individuals with particular backgrounds solving particular problems in particular contexts. They’re treated as visionaries who uncovered universal truths about how software should be built. How lame is that?

But both religious revelation and Agile principles come through thoroughly human channels—particular people, in particular cultural contexts, with particular assumptions, beliefs and interests. Even if we granted that the Agile founders had genuine insights about software development, we’d still be left with thoroughly human processes of interpreting and applying those insights across radically different contexts, organisations, and problem domains.

Agile communities hardly ever acknowledge this. Instead, they treat their particular interpretations of the manifesto as having the same authority as the original principles themselves. A Scrum Master’s understanding of ‘individuals and interactions over processes and tools’ becomes indistinguishable from what the principle was conceived to mean. The interpretive community’s practices become the authentic expression of Agile truth itself.

The Comfort of False Certainty

Why do religious cultures cling so tenaciously to definitive answers when no such answers actually exist? Because uncertainty is existentially difficult. The human condition involves navigating fundamental questions about meaning, morality, and purpose without access to ultimate truth about any of them. We’re thrown into existence, forced to make choices, compelled to find meaning, all whilst standing on epistemological quicksand.

The same psychological need operates in professional contexts. Software development is inherently uncertain—complex problems, changing requirements, unpredictable technical challenges, human coordination difficulties. The Agile community offers firm methodological ground where none actually exists. They provide clear answers to unanswerable questions about the ‘right’ way to organise teams, plan projects, and deliver software. The psychological relief this provides is enormous—so enormous that practitioners often can’t imagine giving it up, even when presented with compelling evidence that Agile practices don’t work well in their specific context.

There’s a deeper dynamic at work here, one that science fiction captured perfectly in the later Stargate series. The Ori gained literal power from the belief of their followers—the more people believed in Origin, the more powerful the Ori became. Whilst this is fiction, it points to a real phenomenon: belief systems gain tremendous social and psychological power precisely through the intensity of conviction they generate. The certainty itself becomes the source of the system’s authority, independent of whether its foundational claims correspond to anything real.

This isn’t intellectual dishonesty so much as human necessity. Most people cannot work comfortably with the full implications of methodological uncertainty. Agile culture provides elaborate mechanisms for avoiding that discomfort through the construction of false but emotionally sustainable certainties about software development best practices.

The Price of Constructed Truth

The problem with building identity and community around definitive truths that don’t actually exist is rigidity. When your fundamental understanding of reality depends on maintaining particular beliefs, those beliefs become non-negotiable. Alternative perspectives aren’t just different—they’re threatening to the entire system of meaning that makes life livable.

This creates the characteristic inflexibility that secular observers find so frustrating in religious discourse. It’s not that religious believers are naturally more dogmatic than other people. It’s that their entire framework for understanding reality depends on treating constructed certainties as ultimate truths. Acknowledging the perspectival, contingent nature of their beliefs would undermine the very certainty that makes the beliefs psychologically valuable.

Religious cultures often respond to challenges by doubling down rather than engaging seriously with alternatives. This makes perfect sense within their own logic—if you’ve organised your entire worldview around the premise that certain truths are absolute and eternal, then treating them as open questions becomes impossible.

The Impossibility Runs Deeper

Could there be any conceivable world where objectivity actually exists? Perhaps a reality where minds work like perfect recording devices rather than active interpreters? But even then, someone would need to decide what to record, how to organise the recordings, and what counts as relevant—which reintroduces perspective.

Maybe a world where all conscious beings share identical conceptual frameworks, languages, and ways of organising experience? But this just pushes the problem back one level. Why these particular shared categories rather than others? The choice of universal framework would itself reflect a particular perspective.

What about some form of mystical direct access that bypasses all interpretation? ‘Direct access’ still requires someone to have the access, and that someone would need ways of understanding and communicating what they’ve accessed—which brings us back to perspective and interpretation.

Even the fantasy of accessing a God’s-eye view doesn’t solve the problem. An omniscient divine perspective would still be a perspective. An infinite being would still have to choose what to attend to, how to organise infinite information, what to consider relevant. Those choices would reflect the particular nature of that divine consciousness.

The deeper truth is that objectivity isn’t just contingently impossible in our world—it’s conceptually impossible in any world with conscious beings. Knowledge requires a knower, and knowers are always situated somewhere, with particular capacities, assumptions, beliefs, interests, and ways of organising experience. The very concept of ‘perspective-free knowledge’ is as oxymoronic as ‘married bachelor’—not just empirically unavailable but logically contradictory.

This means the certainty-construction systems we see in religious cultures, Agile communities, and countless other belief systems aren’t flawed responses to a solvable problem. They’re understandable responses to a genuinely impossible situation—the human need for definitive answers in a reality where no such answers could ever exist.

What They’re Really Forgetting

Where does this confidence come from? Not from transcending human limitations, but from finding particularly effective ways of forgetting about them. So what exactly are these sophisticated forgetting systems helping people avoid confronting?

It’s fundamentally the groundlessness of everything. As the old saying goes, ‘It’s turtles all the way down.’

The reference is to a classic philosophical joke: a philosopher explains that the world rests on the back of a giant turtle. When asked what the turtle stands on, he replies ‘another turtle.’ And what does that turtle stand on? ‘It’s turtles all the way down.’

This captures the infinite regress problem that makes human knowledge so psychologically difficult. Every belief, every foundation, every supposedly solid principle rests on other beliefs, other foundations, other principles—with no final turtle at the bottom holding it all up. When you ask what any system of knowledge ultimately rests on, you find it’s turtles all the way down.

Religious and ideological certainty-construction systems are elaborate ways of convincing people that their turtle is the bottom one. That their foundational texts, principles, or revelations aren’t just more turtles but actual bedrock. The Agile community does this by treating the Manifesto as discovered truth about software development rather than just another turtle—one group’s opinions from 2001 sitting on more turtles of their particular experiences, cultural context, and assumptions and beliefs about work and technology.

Religious communities do it by treating their scriptures as divine revelation rather than human documents sitting on more layers of human interpretation, translation, cultural transmission, and historical contingency.

The psychological genius of these systems is helping people stop looking down at the infinite turtle stack. They provide what feels like solid ground to stand on, when the actual situation is turtles all the way down. The confidence comes from successfully forgetting about the turtle stack underneath their particular turtle.

Living in the Gap

The tension between religious certainty and philosophical reality creates a fascinating cultural phenomenon: entire societies organised around definitive answers to questions that have no definitive answers. These societies produce people who can speak with absolute confidence about the nature of God, the purpose of existence, and the structure of moral reality, even though no such absolute knowledge is available to finite, situated, culturally embedded human beings.

This doesn’t make religious cultures less sophisticated than secular ones—secular cultures have their own ways of avoiding full confrontation with epistemic uncertainty. But it does reveal something important about human psychology: our profound need for certainty often overrides our capacity for acknowledging the limits of human knowledge.

Religious believers aren’t wrong to want definitive answers to ultimate questions. They’re wrong to think they have them. The certainty they experience is real, but it’s the certainty of successful meaning-construction, not the certainty of correspondence with objective truth. The confidence they feel comes not from having transcended human limitations but from having found particularly effective ways of forgetting about them.

In a world where objectivity is impossible and truth is always constructed from particular perspectives, both religious and secular communities represent the same strategy for dealing with this uncomfortable reality: build robust systems for generating false but livable certainties, then protect those certainties by treating them as immune to the very philosophical insights that reveal their constructed nature.

The Agile community perfectly illustrates this pattern. It’s created a comprehensive belief system around software development that provides definitive answers to inherently uncertain questions. It’s built social structures to reinforce these beliefs, developed rituals to embody them, and created mechanisms to deflect challenges to them. Most importantly, it’s convinced itself that its particular cultural artefacts represent discovered truths about the objective nature of effective software development.

It’s an understandable response to an impossible situation. But the Agile community is responding to something that doesn’t exist—absolute answers to questions that don’t have absolute answers. The definitive views that characterise both religious cultures and Agile communities aren’t discoveries about reality. They’re successful systems for burying uncertainty, mistaken for certainty itself.

Further Reading

Beck, K., Beedle, M., van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., … & Thomas, D. (2001). Manifesto for agile software development. Retrieved from https://agilemanifesto.org/

Feyerabend, P. (1975). Against method: Outline of an anarchistic theory of knowledge. New Left Books.

Nagel, T. (1986). The view from nowhere. Oxford University Press.

Pratchett, T. (1992). Small gods. Gollancz.

Related Works:

Berger, P. L., & Luckmann, T. (1966). The social construction of reality: A treatise in the sociology of knowledge. Anchor Books.

Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.

Rorty, R. (1979). Philosophy and the mirror of nature. Princeton University Press.

Wittgenstein, L. (1953). Philosophical investigations. Blackwell.

Coding Practices Are So the Wrong Focus

In W. Edwards Deming’s famous Red Bead experiment, willing workers try their best to draw only white beads from a bowl containing 80% white beads and 20% red beads. Using a paddle that scoops exactly 50 beads, workers are told to produce zero defects (no red beads). No matter how hard they try, how skilled they are, or how much they want to succeed, the random distribution means some workers will consistently get more red beads than others through pure chance. The system determines the outcome, not individual effort.

Deming used this experiment to demonstrate a fundamental truth: some 94% of performance problems come from the system, not the individual workers. Yet in software development, we’ve created an entire industry obsessed with the equivalent of ‘worker performance improvement’—code reviews, linting rules, architectural purity, testing coverage—whilst ignoring the systems that actually determine product success.
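
The lesson falls out of a few lines of simulation. This is a minimal sketch that approximates the 50-bead scoop as independent draws from a 20%-red bowl; the worker names are invented.

    // Red Bead experiment in miniature: identical effort, identical skill,
    // different 'defect' counts, purely through sampling variation.
    function scoopRedBeads(paddleSize: number, redFraction: number): number {
      let reds = 0;
      for (let i = 0; i < paddleSize; i++) {
        if (Math.random() < redFraction) reds++;
      }
      return reds;
    }

    for (const worker of ['Amy', 'Ben', 'Cho', 'Dee', 'Eli', 'Fay']) {
      console.log(`${worker}: ${scoopRedBeads(50, 0.2)} red beads`);
    }
    // A typical run ranges from about 5 to 15 red beads per worker,
    // and says nothing whatsoever about the workers themselves.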

The Software Industry’s Red Bead Problem

Walk into any tech company and you’ll find passionate debates about coding standards, architecture patterns, and development methodologies. Teams spend hours in code reviews, invest heavily in testing frameworks, and argue endlessly about the ‘right’ way to structure their applications.

Meanwhile, the same companies ship products nobody wants, struggle with unclear requirements, and watch competitors succeed with arguably inferior technical implementations.

We’ve created a culture where developers are evaluated on code quality metrics whilst remaining largely ignorant of whether their beautifully crafted code actually solves real problems for the Folks that Matter™. It’s the Red Bead experiment in action—we’re measuring and optimising individual performance whilst the system churns out failed products regardless of how elegant the codebase might be.

Most tellingly, in most organisations developers have next to zero influence over what really matters: what gets built, for whom, and why. They’re handed requirements from product managers, asked to estimate tasks defined by others, and measured on delivery speed and code quality—all whilst having no input on whether they’re building the right thing. Then they get blamed when products fail in the market.

The Invisible System

Most developers operate with a remarkably narrow view of the system they’re embedded in. They see their piece—the code, the sprint, maybe their immediate team—but remain blind to the larger forces that actually determine whether their work creates value.

This narrow focus isn’t accidental. The current system actively discourages broader awareness:

Developers are rewarded for technical excellence in isolation, not for understanding customer problems or business constraints. They’re measured on code quality and feature delivery, not on whether their work moves the business forward. They’re kept busy with technical tasks and rarely exposed to customer feedback, sales conversations, or strategic decisions.

Most critically, developers have next to zero influence or control over the way the work works—the system itself. They can’t change how requirements are gathered, how priorities are set, how teams communicate, or how decisions flow through the organisation. Yet they’re held responsible for whether all the Folks that Matter™ get their needs attended to.

Performance reviews focus on individual contributions rather than system-level thinking. Career advancement depends on demonstrating technical skill, not understanding how technology serves business objectives. The very structure of most organisations creates silos that prevent developers from seeing the bigger picture.

When Developers See the System

Everything changes when developers start understanding the wider system within which they function. They begin to realise that:

Beautiful code that solves the wrong problem is waste. Technical decisions ripple through customer support, sales, and operations in ways they never considered. That ‘simple’ feature request is actually complex when you understand the business context. They’ve been optimising for the wrong metrics because they couldn’t see what actually drives value for all the Folks that Matter™.

Developers who understand the system make fundamentally different choices. They push back on features that don’t align with the needs of the Folks that Matter™. They prioritise technical work that attends to the needs of the business rather than pursuing abstract perfection. They communicate differently with product managers because they understand the broader context of decisions.

The Real Constraints

The actual bottlenecks in software development are rarely technical—they’re systemic:

Communication breakdowns between product, design, and engineering teams lead to solutions that miss the mark. Feedback loops that take months instead of days prevent rapid iteration towards product-market fit. Decision-making processes filter out critical information from customers and frontline teams.

Requirements change constantly because there’s no clear product strategy or understanding of the needs of the Folks that Matter™. Teams work in isolation without understanding how their work connects to attending to those needs. Incentive systems reward shipping features over solving real problems.

Knowledge silos mean critical insights never reach the people who could act on them. Risk-averse cultures prevent the experimentation necessary for innovation. Metrics focus on activity rather than outcomes, creating busy work that doesn’t drive value.

Beyond Individual Excellence

The parallel to Deming’s insight is striking. Just as factory workers couldn’t improve quality by trying harder within a flawed system, developers can’t improve product outcomes by writing better code within dysfunctional organisational systems.

A team can follow every coding best practice religiously and still build something nobody wants. They can have 100% test coverage on features that solve the wrong problem. They can architect beautiful, scalable systems that serve precisely none of the Folks that Matter™.

The solution isn’t to abandon technical excellence—it’s to recognise that individual excellence without system awareness is like being a skilled worker in the Red Bead experiment. Your efforts are largely irrelevant because the system constraints determine the outcome.

Building System Awareness

Organisations that want to improve how well they attend to the needs of the Folks that Matter™ need to help developers see and understand the wider system:

Expose developers to all the Folks that Matter™ through support rotations, research sessions, sales calls, and stakeholder meetings. Share context about why certain features matter and how technical decisions impact the people the system serves. Create feedback loops that connect code changes to how well needs are being attended to.

Measure system-level metrics like time from idea to value delivered to the Folks that Matter™, not just individual productivity. Reward cross-functional collaboration and understanding of the wider system, not just technical skill. Encourage questioning of requirements and priorities based on system-level thinking.

Make the invisible visible by sharing feedback from all the Folks that Matter™, competitive intelligence, and strategic context. Connect technical work to how well needs are being attended to through clear metrics and regular communication. Break down silos that prevent developers from understanding their role in the larger system.

The Path Forward

The tech industry’s obsession with coding practices isn’t just misplaced energy—it’s actively harmful when it distracts from the system-level changes that actually improve how well we attend to the needs of the Folks that Matter™. We need developers who understand that their job isn’t to write perfect code in isolation, but to create value within complex organisational and market systems.

This doesn’t mean abandoning technical excellence. It means recognising that technical excellence without system awareness is like perfecting your red bead drawing technique—a local optimisation that misses the point entirely.

The companies that succeed will be those that help their developers see beyond the code to understand all the Folks that Matter™, the market, the business model, and the organisational dynamics that actually determine whether their work creates value.

When developers start seeing the system, they stop optimising for red beads and start optimising for what actually matters. That’s when real improvement begins.

A Note on ‘Users’ and ‘Customers’

The conventional framing of ‘users’ and ‘customers’ is reductive and misses the point entirely. It treats software development like building a consumer app when most systems serve a complex web of stakeholders with different and sometimes conflicting needs.

Consider any real software system—an ERP platform must work for accountants entering data, executives reading reports, IT teams maintaining it, auditors reviewing it, vendors integrating with it, and regulators overseeing it. Calling them all ‘users’ flattens out completely different contexts and needs.

The ‘customer’ framing is even worse because it implies a simple transaction—someone pays money, gets product. But in most organisations, the people paying for software aren’t the ones using it day-to-day, and the people whose work gets impacted by it might not have had any say in the decision.

‘Folks that Matter™’ captures the messy reality that there are various people with legitimate stakes in whether the system works well. Developers are typically kept ignorant of who these people are, what they actually need, and how technical decisions affect them. It’s like the Red Bead experiment—workers are told to ‘satisfy the customer’ without any real understanding of what that means or who that customer actually is. Just another abstraction that keeps them focused on the wrong metrics.

Further Reading

Deming, W. E. (1986). Out of the crisis (pp. 345-350). MIT Press.

Deming, W. E. (1993). The new economics for industry, government, education (Chapter 7). MIT Press.

Scholtes, P. R. (1998). The leader’s handbook: Making things happen, getting things done. McGraw-Hill.

Wheeler, D. J. (2000). Understanding variation: The key to managing chaos (2nd ed.). SPC Press.

Womack, J. P., & Jones, D. T. (2003). Lean thinking: Banish waste and create wealth in your corporation (2nd ed.). Free Press.

Walls

What are the most insidious bugs in software development? They’re not the ones that crash your application—they’re the ones that crash your team’s ability to collaborate effectively. But how do these bugs manifest? As invisible walls between groups of developers, testers, UI and UX folks, and others who, despite working towards the same ultimate goal, find themselves increasingly unable to communicate across the divides of their different assumptions, beliefs, specialisms, ingroups, and approaches.

But here’s the fundamental question: why should we care? What’s our motivation for tearing down these walls when they often provide us with identity, belonging, and professional security? After all, being ‘the React expert’ or ‘the DevOps person’ gives us a place in the world, a community to belong to, and expertise that others recognise. So why risk that comfort for the uncertain benefits of collaboration across tribes?

Before we go further, pause for a moment. When you read that last paragraph, what did you feel? Did a particular technology or methodology flash through your mind—one you champion or defend? Did you think ‘Well, I’m not tribal, but those Vue developers…’ or ‘I’m open-minded, but microservices really are better than monoliths’? That little voice? That’s where our story begins.

The Architecture of Division

How do these walls manifest in software development? They take many forms. There’s the classic divide between frontend and backend developers, where one group sees the other as either ‘not real programmers’ or ‘doesn’t understand user experience’. But why do DevOps engineers often find themselves separated from application developers? Does it come down to assumptions about whose responsibility it is to ensure code actually runs in production? And what about product managers and domain experts? They frequently operate in parallel universes, with each group convinced the other fundamentally misunderstands the business.

But have these divisions grown more complex recently? We’ve seen new walls emerge around technology choices. React developers dismissing Vue as a ‘toy framework’, whilst Vue developers see React as ‘unnecessarily complex’. Microservices advocates view monolith supporters as stuck in the past, whilst monolith defenders see microservices enthusiasts as complexity addicts solving problems that don’t exist.

Stop here. Which of these resonated with you? Did you find yourself nodding along with one side and mentally dismissing the other? What was the last technical discussion where you felt your jaw clench when someone suggested an approach you disagreed with? Can you remember the exact moment when you stopped listening to understand and started listening to rebut?

What’s really happening when these divisions form? These aren’t just about technical preferences—they’re about identity. When a developer says ‘I’m a Python person’ or ‘I’m a functional programming advocate’, are they just describing their skills? Or are they signalling membership in a tribe with its own values, assumptions, and ways of seeing problems?

The Tolerance Deficit

But what makes these walls particularly dangerous today? It’s the growing intolerance for alternative viewpoints within our field. How did we get here? Social media and online communities have created echo chambers where developers primarily interact with others who share their technical beliefs. What’s the result? A kind of ideological brittleness where encountering different approaches triggers defensive reactions rather than curiosity.

Where do we see this playing out? It shows up in code reviews that become battles over style rather than substance. It appears in architectural discussions where alternatives are dismissed without genuine consideration. It manifests in hiring processes where cultural fit becomes a euphemism for ‘thinks like we do’.

Think about your last code review. When you saw an approach that differed from what you would have done, what was your first instinct? To understand why they chose that path, or to suggest your preferred alternative? When was the last time you changed your mind about a technical decision based on someone else’s argument? If you can’t remember, what might that tell you?

Isn’t there an irony here? We’re building increasingly sophisticated systems for connecting people across the globe, whilst simultaneously becoming less capable of connecting with colleagues who use different frameworks or prefer different paradigms.

The Cost of Our Walls

What toll do these barriers extract from our work? Teams fragment into silos, leading to duplicated effort and incompatible solutions. Knowledge sharing breaks down, leaving each group to rediscover lessons others have already learnt. Decision-making becomes political rather than technical, with choices made based on which group has more influence rather than which approach best serves the needs of the Folks That Matter™.

But what’s perhaps most damaging? These walls prevent us from learning from each other. Consider: what happens when a backend developer has never tried to make a responsive layout? They might design APIs that make frontend work unnecessarily difficult. What about the frontend developer who’s never wrestled with database performance? They might build interfaces that require impossible data loads. And the DevOps engineer who’s never debugged application code? They might create deployment processes that obscure rather than illuminate problems.

Yet these costs often feel abstract—organisational inefficiencies that someone else worries about. So what’s the personal cost? What happens when you’re stuck debugging a problem for days, only to discover that someone from a different tribe could have solved it in minutes? What about when your career advancement stalls because you’re too narrowly specialised for the problems your organisation actually faces? Or when the satisfaction slowly drains from your work because you’re endlessly fighting the same battles with the same people about the same approaches?

When did you last feel truly stuck on a problem? Who did you ask for help? Were they people who think about problems the same way you do, or did you seek out someone with a fundamentally different perspective? What stopped you from reaching across tribal lines—was it pride, assumptions about their knowledge, or simply not knowing who to ask?

The Paradox of Identity

But let’s be honest about why these walls persist. They serve important psychological functions. Being ‘a Python person’ or ‘a functional programming advocate’ isn’t just about describing skills—it’s about having a professional identity in a field that changes so rapidly that expertise becomes obsolete overnight. These tribal affiliations provide stability, community, and recognition in an otherwise chaotic landscape.

Why would we give up that certainty? Specialisation feels safer than generalisation. Having strong opinions about the ‘right way’ to do things reduces decision fatigue and provides cognitive comfort. Being an evangelist for a particular approach can bring conference talks, blog readership, and professional recognition. There’s real social capital in being a thought leader for your tribe.

So the question isn’t whether these walls serve a purpose—they clearly do. The question is whether they’re serving us well in the long term, or whether we’re trading short-term comfort for long-term growth and effectiveness.

What would you lose if you became known as someone who doesn’t have strong technical opinions? How much of your professional confidence comes from being ‘the expert’ in your particular domain? If someone introduced you at a conference, what would they say about you—and how much of that identity is tied to specific technologies or methodologies? What scares you more: being wrong about a technical choice, or admitting you don’t know something?

Common Ground in Shared Experience

What if our shared commitment to attending to folks’ needs could provide common ground for bridging these divides? But perhaps we need to be more honest about what we actually share. Not everyone experiences or expresses empathy in the same way. Not everyone naturally thinks in terms of ‘human needs’ or picks up on social cues easily. Many of us are simply wired differently when it comes to interpersonal dynamics.

But there’s something we might share more universally: the experience of being blocked. Of having clear requirements that keep changing. Of being held responsible for outcomes we can’t control. Of receiving vague or contradictory instructions. Of having our work dependencies managed by people who don’t understand what we actually need to get things done.

The product manager who asks for ‘just a quick change’ without understanding the technical implications. The stakeholder who wants to know ‘how long it will take’ for something that’s never been done before. The designer who creates interfaces that look great but are technically impossible to implement efficiently. The executive who wants to know why the team isn’t moving faster without understanding what’s actually slowing them down.

Can you think of a time when you felt truly stuck—not because of technical complexity, but because of poor communication, unclear requirements, or unrealistic expectations? What made that situation particularly frustrating? Was it the ambiguity? The lack of clear decision-making authority? The sense that people were making demands without understanding the constraints you were working within?

Some of us might interpret these situations as ‘people problems’ or ‘communication issues’. Others might see them as ‘process failures’ or ‘requirements management problems’. The language we use might differ, but the experience of being blocked by preventable obstacles might be more universal.

What would the most effective teams look like? Perhaps not those where everyone processes information the same way, but those where different thinking styles are recognised and accommodated. Where the person who needs explicit requirements can get them, and the person who thinks in systems can share their perspective without being dismissed as ‘overthinking’. Where disagreements happen within clear frameworks rather than endless ambiguous discussions.

What If We Dared to Be Curious?

So what’s the solution? It isn’t to eliminate all technical preferences or pretend that all approaches are equally valid for every situation. But what if we developed what we might call ‘intellectual humility’—the recognition that our own perspective, however well-reasoned, is still just one view of a complex landscape?

What would it look like to approach technical discussions with genuine curiosity about why others have reached different conclusions? Before dismissing a colleague’s preferred tool or methodology, what if we asked: what problems does it solve that our approach doesn’t handle well? What if we sought to understand the context that makes their solution optimal, even if it wouldn’t work in our situation?

What would happen if we actively sought out diverse perspectives? If your team consists entirely of people who think about problems the same way, what insights might you be missing? What if that discomfort you feel when your assumptions are challenged is actually a sign that you’re about to learn something valuable?

Here’s a small experiment: In your next technical discussion, before you speak, pause and ask yourself: am I about to share knowledge, or am I about to defend territory? When someone suggests an approach you disagree with, can you find one thing about it that’s genuinely interesting or clever, even if you wouldn’t use it yourself? What would it feel like to say ‘I hadn’t thought of that’ instead of ‘But what about…’?

Tearing Down Walls, Not Building Them

As our industry continues to evolve at breakneck speed, what skill will become increasingly valuable? The ability to work across different technical and social cultures. Why? Because the problems we’re trying to solve—scaling systems, securing data, creating intuitive experiences—are too complex for any single perspective to address completely.

So what might we do differently? Rather than building higher walls around our preferred approaches, what if we worked to make them more permeable? Does this mean abandoning our technical principles? Perhaps not, but what if we held them more lightly, remaining open to the possibility that context might call for different solutions?

Who will inherit the future? What if it belongs to teams that can synthesise insights from multiple technical traditions, rather than those that retreat into increasingly narrow orthodoxies? What would happen if we recognised and actively worked to dismantle the walls between different groups of developers? Might we not just build better software, but model the kind of thoughtful, collaborative problem-solving our industry desperately needs?

What will outlast the current debates? The code we write today will outlast many of the frameworks and philosophies we’re currently debating. But what will shape not just our individual careers, but the entire culture of our field? What if it’s the habits of mind we develop—whether we choose curiosity over certainty, collaboration over competition?

One final question to sit with: What kind of developer do you want to be remembered as? The one who was always right about their preferred technology stack, or the one who helped others think more clearly about complex problems? The one who won debates, or the one who built bridges? The choice, as they say, is always ours to make.

Further Reading

Brooks, F. P. (1995). The mythical man-month: Essays on software engineering (Anniversary ed.). Addison-Wesley.

Coyle, D. (2018). The culture code: The secrets of highly successful groups. Bantam Books.

DeMarco, T., & Lister, T. (2013). Peopleware: Productive projects and teams (3rd ed.). Addison-Wesley.

Fournier, C. (2017). The manager’s path: A guide for tech leaders navigating growth and change. O’Reilly Media.

Grant, A. (2021). Think again: The power of knowing what you don’t know. Viking.

Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. Pantheon Books.

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Larson, W. (2021). Staff engineer: Leadership beyond the management track. O’Reilly Media.

McChrystal, S., Collins, T., Silverman, D., & Fussell, C. (2015). Team of teams: New rules of engagement for a complex world. Portfolio.

Putting Folks’ Needs First – Skip the User Stories vs Use Cases Debate

The software industry spends an enormous amount of energy debating practices—user stories versus use cases, agile versus waterfall, documentation versus conversation. Meanwhile, the people who actually matter—the ones who will use, buy, build, maintain, and profit from our software—are often afterthoughts in these discussions.

It’s time to flip the script. Instead of starting with methodology and hoping it serves people’s needs, let’s start with the Folks That Matter™ and choose our approaches accordingly. This is what the Antimatter principle calls “attending to folks’ needs”—recognising that the value of any practice lies entirely in how well it serves real folks’ actual needs. Let’s can the endless debating by attending to folks’ needs.

Who Are Your Folks That Matter™?

Before you write your first user story or draft your first use case, pause and identify who actually needs to understand and act on your work. These aren’t abstract roles—they’re real people with specific needs, constraints, and ways of thinking.

Sarah, the product manager, thinks in user journeys and business outcomes. She needs to understand how features connect to customer value, competition, and revenue impact. Dense technical specifications make her eyes glaze over, but she can instantly spot when a user story misses a crucial business rule.

Marcus, the lead developer, needs enough detail to identify technical risks and understand how new features interact with existing systems. He’s been burnt by vague requirements that seemed clear in meetings but fell apart during implementation. Interestingly, Marcus has embraced the #NoEstimates movement—he’s found that detailed story point estimation often becomes an end in itself, consuming time that could better be spent actually building software. He prefers breaking work into small, similar-sized pieces that flow predictably.

Katarzyna, the compliance officer, must ensure the product meets regulatory requirements. She needs traceable documentation that auditors can review. Conversational approaches that leave decisions undocumented create legal risks she can’t accept.

Jennifer, the customer success manager, deals with confused users when captured needs miss real-world scenarios. She has to understand not just what the software should do, but what users might expect it to do based on their mental models.

Each of these people has legitimate needs. The question isn’t which methodology is ‘right’—it’s how to serve all the Folks That Matter™ effectively. As the Antimatter principle reminds us, any practice that doesn’t attend to folks’ needs is waste, regardless of how theoretically sound it might seem.

When teams, and indeed organisations, focus on attending to folks’ needs rather than defending methodological positions, the endless debates about user stories versus use cases simply evaporate. The answer becomes obvious: use whatever works for the specific people and their specific needs, in your specific context.

Matching Methods to People’s Needs

When your Folks That Matter™ need exploration and alignment, user stories excel. The product manager who’s still figuring out what customers really want benefits from the conversation-starting nature of story cards. The development team discovering technical constraints needs the flexibility to evolve requirements as they learn.

Sarah’s team was building a new invoicing feature. They started with a simple story: ‘As a small business owner, I want to send professional invoices so that I get paid faster.’ This sparked conversations about payment terms, tax calculations, and branding options that no one had considered upfront. The story evolved through dialogue, and the final feature was far richer than anything they could have specified initially.

Marcus particularly appreciated this approach because it aligned with his #NoEstimates philosophy. Rather than spending hours estimating a vague story, the team broke it into small, discoverable pieces that they could complete in a day or two. The predictable flow of small stories gave Sarah the planning visibility she needed without the overhead of detailed estimation ceremonies.

When your Folks That Matter™ need precision and accountability, use cases provide the structure they require. The compliance officer who must demonstrate regulatory adherence needs documented workflows with clear preconditions and outcomes. The offshore development team working across time zones needs detailed scenarios they can implement without constant clarification calls.

Katarzyna’s team was building patient data access controls. A user story like ‘As a doctor, I want to access patient records so that I can provide care’ was legally meaningless. They needed use cases that specified exactly which roles could access what data under which circumstances, with full audit trails. The systematic format of use cases made regulatory review straightforward.
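
To give a flavour of that systematic format, here’s a minimal sketch of one such use case captured as structured, reviewable data. The roles, rules, and identifiers are invented for illustration, not drawn from any real regulatory regime:

    # A use case as structured, reviewable data. Roles, rules, and IDs
    # here are illustrative placeholders, not real regulatory requirements.
    from dataclasses import dataclass

    @dataclass
    class UseCase:
        name: str
        actor_role: str
        preconditions: list
        main_flow: list
        outcome: str

    access_patient_record = UseCase(
        name="UC-17: Access patient record",
        actor_role="attending_doctor",
        preconditions=[
            "Actor is authenticated",
            "Actor is assigned to the patient's care team",
        ],
        main_flow=[
            "Actor requests record by patient ID",
            "System verifies role and care-team assignment",
            "System writes an audit-trail entry (who, what, when)",
            "System returns the record",
        ],
        outcome="Record displayed; access event logged for auditors",
    )

    print(access_patient_record.name)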

When your Folks That Matter™ have different thinking styles, provide multiple views of the same requirements. Don’t force the visual thinker to work with text-heavy use cases or make the detail-oriented analyst guess at implementation specifics from high-level stories.

Marcus and Sarah worked together by starting with story mapping to visualise the user journey, then drilling down into detailed use cases for complex workflows. Sarah could see the big picture and business logic, whilst Marcus got the implementation details he needed. Same requirements, different representations.

Notice how none of these decisions required theological arguments about methodology. Each choice served specific people’s specific needs. Attending to folks’ needs cuts through the debate noise.

The Reality Check

The #NoEstimates movement highlights a crucial insight: detailed requirements often become proxies for prediction rather than tools for understanding. Teams can spend enormous effort estimating user stories with story points, planning poker, and velocity calculations, but these estimates rarely improve delivery predictability and often distract from actually building software.

Marcus’s team discovered that when they focused on making stories consistently small rather than accurately estimated, their delivery became more predictable. Instead of debating whether a feature was 5 or 8 story points, they asked whether it could be broken into pieces that could each be completed in a day or two. This shift changed how they captured folks’ needs—less focus on comprehensive upfront specification, more focus on just-enough detail to start work confidently. See also: the Needsscape.

This doesn’t mean abandoning planning entirely. Sarah still needed roadmap commitments and budget forecasts. But the team found they could provide better predictions by counting delivered stories over time rather than summing estimated story points. Their artefacts became lighter and more focused on enabling flow rather than feeding estimation ceremonies.
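
To make the counting approach concrete, here’s a minimal sketch, assuming a short, invented history of stories completed per week:

    # Forecast by throughput: count delivered stories per week rather than
    # summing story-point estimates. All numbers are invented.
    stories_done_per_week = [4, 6, 5, 7, 5, 6]   # recent delivery history
    remaining_stories = 30                        # backlog, sliced small

    avg_throughput = sum(stories_done_per_week) / len(stories_done_per_week)
    weeks_remaining = remaining_stories / avg_throughput

    print(f"Average throughput: {avg_throughput:.1f} stories/week")
    print(f"Forecast: roughly {weeks_remaining:.1f} weeks to clear the backlog")

Because the stories are kept consistently small, a simple count tends to carry as much planning information as any points total, with none of the estimation ceremony.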

The endless debates about estimation versus #NoEstimates dissolve when you ask: what do our Folks That Matter™ actually need for planning and coordination? Often, it’s predictable delivery more than precise estimates.

The Misuse Case Reality Check

Here’s where focusing on Folks That Matter™ becomes crucial: the people who deal with software problems aren’t usually the ones writing requirements. Jennifer in customer success fields calls when users accidentally delete important data. The security team deals with the aftermath when features are misused maliciously.

These voices often aren’t heard during needs capture and evolution, but they represent critical Folks That Matter™. Building ‘misuse cases’ into your process—whether you’re using stories or formal use cases—ensures you’re serving the people who have to deal with problems, not just the ones who use features successfully.

Jennifer pushed her team to consider stories like ‘As a malicious user, I want to exploit the file upload feature so that I can access other users’ data’ and ‘As a confused user, I want to understand why my action failed so that I can correct my mistake.’ These weren’t happy path features, but they prevented real problems for real people.
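
One way to stop such misuse cases being forgotten is to capture them as executable checks alongside the happy-path ones. Here’s a minimal sketch, assuming a hypothetical upload service—the function and exception names are invented for illustration:

    # A misuse case expressed as a test. `upload_file` and `AccessDenied`
    # are hypothetical stand-ins for whatever your real service exposes.
    class AccessDenied(Exception):
        pass

    def upload_file(path, owner, requester):
        # Toy rule: only the owner may touch their own upload.
        if requester != owner:
            raise AccessDenied(f"{requester} may not access {owner}'s file")
        return {"path": path, "owner": owner}

    def test_malicious_user_cannot_reach_others_uploads():
        # 'As a malicious user, I want to exploit the file upload feature
        # so that I can access other users' data.'
        try:
            upload_file("q3-report.pdf", owner="alice", requester="mallory")
            assert False, "expected AccessDenied"
        except AccessDenied:
            pass  # the misuse case is blocked, as intended

    test_malicious_user_cannot_reach_others_uploads()
    print("Misuse case blocked: OK")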

The Antimatter principle particularly applies here: security reviews and error handling often feel like bureaucratic overhead, but they directly serve the needs of people who deal with the consequences of product failures.

Documentation vs Conversation: Serving Different Needs

The agile manifesto’s preference for ‘individuals and interactions over processes and tools’ doesn’t mean documentation is evil—it means putting people first. Sometimes the Folks That Matter™ need rich conversation to discover what they really need. Sometimes they need comprehensive documentation to do their jobs effectively.

Conversation serves discovery. When your product manager is exploring new market opportunities or your development team is prototyping technical approaches, dialogue-heavy user stories facilitate learning and adaptation.

Documentation serves execution and accountability. When your distributed team needs to implement complex business rules or your compliance officer needs to demonstrate regulatory adherence, written specifications provide the clarity and traceability required.

The most effective teams recognise that these aren’t competing approaches—they’re different tools for serving different people at different times. The Antimatter principle’s “attend to folks’ needs” helps teams avoid dogmatic adherence to either extreme.

The endless documentation versus conversation debates end when you focus on what your specific people need to do their jobs effectively.

Timing That Actually Works for People

The ‘up front versus evolutionary’ debate often ignores the reality of how different Folks That Matter™ actually work. Product managers need enough certainty to make roadmap commitments. Developers need enough detail to minimise rework. Operations teams need enough notice to prepare infrastructure.

Instead of choosing between comprehensive upfront planning and just-in-time discovery, map your requirements approach to the actual decision-making needs of your stakeholders.

Identify architectural decisions early because they affect everyone downstream. The integration approach that seems like an implementation detail to the product manager might require months of infrastructure work from the operations team.

Keep UI and workflow details evolutionary because these benefit from user feedback and technical learning. The exact button placement that seems critical upfront often changes once users actually interact with early versions.

Document agreements when they affect multiple teams because people need to coordinate their work. The API contract between frontend and backend teams needs to be explicit, even if the user story that drives it remains flexible.
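
What ‘explicit’ might look like in practice is a small, shared definition that both teams can code against. A minimal sketch, assuming a hypothetical invoice-summary response with illustrative field names:

    # A shared response shape both teams code against. The fields are
    # hypothetical, standing in for whatever your real invoice API returns.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class InvoiceSummary:
        invoice_id: str
        customer_name: str
        total_pence: int   # integer minor units avoid floating-point rounding
        status: str        # e.g. "draft", "sent", "paid"

    # Backend commits to returning this shape; frontend codes against it.
    example = InvoiceSummary("inv-042", "Acme Ltd", 12500, "sent")
    print(example)

Details like integer minor units for money and an agreed status vocabulary are exactly the kind of thing worth pinning down between teams, even while the user story above them stays fluid.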

This timing approach aligns well with #NoEstimates thinking: instead of trying to estimate everything upfront, identify what decisions must be made early and defer the rest until you have better information.

When you attend to folks’ needs, the timing becomes obvious. No more theoretical arguments about waterfall versus agile—just practical decisions about when different people need different information.

Making It Work: Practical Steps

Start with your Folks That Matter™ inventory. List the real people who need to understand, implement, test, support, and approve your software. Understand their constraints, preferences, and success criteria.

Match your methods to their needs. Use stories when stakeholders need to explore and align. Use cases when they need to implement and verify. Use both when you have both types of needs.

Question estimation ceremonies. Ask whether detailed story point estimation actually serves your Folks That Matter™ or just creates busy work. Consider focusing on consistent story size rather than accurate estimation.

Create feedback loops with the people who live with the consequences. Regular check-ins with customer support, security teams, and operations prevent requirements that look good on paper but fail in practice.

Evolve your approach as your team learns. The startup exploring product-market fit needs different requirements approaches than the enterprise team maintaining critical systems. Let your methods serve your current reality, not your methodology preferences.

Stop the methodological debates. When teams start arguing about the “right” way to write requirements, refocus on the Folks That Matter™. Ask: “Who needs this information, and how do they prefer to receive it?”

The Real Test

The ultimate test of your approach isn’t methodological purity—it’s whether the Folks That Matter™ can successfully do their jobs. Can the product manager make informed decisions? Can the developer implement features correctly? Can the support team help confused users? Can the compliance officer satisfy auditors?

The Antimatter principle provides a simple filter: does this practice attend to folks’ needs? If your user stories help stakeholders align and discover needs, they’re valuable. If they become exercises in elaborate estimation that don’t improve delivery, they’re waste. If your use cases provide necessary precision for implementation and compliance, they’re essential. If they become bureaucratic documentation that nobody reads, they’re overhead.

When you put people first, the user stories versus use cases debate becomes much simpler. You use whatever approaches help your specific stakeholders succeed in their specific contexts. Sometimes that’s collaborative story discovery. Sometimes it’s systematic use case documentation. Most often, it’s a thoughtful combination that serves different people’s different needs.

The approach matters far less than the people. Make sure your approach serves the Folks That Matter™, and the rest will follow. Can the endless debating by attending to folks’ needs—because when you focus on serving real people’s real needs, the “right” answer becomes obvious for your context.

Further Reading

User Stories and Agile Requirements

Cao, L., & Ramesh, B. (2008). Agile requirements engineering practices: An empirical study. IEEE Software, 25(1), 60-67. https://doi.org/10.1109/MS.2008.9

Cohn, M. (2004). User stories applied: For agile software development. Addison-Wesley Professional.

Lucassen, G., Dalpiaz, F., van der Werf, J. M. E. M., & Brinkkemper, S. (2015). Forging high-quality user stories: Towards a discipline for agile requirements. In 2015 IEEE 23rd International Requirements Engineering Conference (RE) (pp. 126-135). IEEE.

Use Cases and Requirements Engineering

Cockburn, A. (2001). Writing effective use cases. Addison-Wesley Professional.

Jacobson, I. (1992). Object-oriented software engineering: A use case driven approach. Addison-Wesley Professional.

Pohl, K. (2010). Requirements engineering: Fundamentals, principles, and techniques. Springer.

The #NoEstimates Movement

Duarte, V. (2015). NoEstimates: How to measure project progress without estimating. Oikosofy.

Killick, N., Duarte, V., & Zuill, W. (2015). No estimates – How to deliver software without guesswork. Leanpub.

The Antimatter Principle

Marshall, B. (2014, May 22). Q&A with Bob Marshall about the Antimatter Principle. InfoQ. https://www.infoq.com/news/2014/05/antimatter-principle/

Empirical Studies and Academic Research

Inayat, I., Salim, S. S., Marczak, S., Daneva, M., & Shamshirband, S. (2015). A systematic literature review on agile requirements engineering practices and challenges. Computers in Human Behavior, 51, 915-929. https://doi.org/10.1016/j.chb.2014.10.046

From Dawn Till Dusk

Reflections on a 50+ Year Career in Software

The Dawn: Programming as Pioneering (1970s)

When I first touched a computer in the early 1970s, programming wasn’t just a job—it was exploration of uncharted territory. We worked with punch cards and paper tape, carefully checking our code before submitting it for processing. A single run might take hours or even overnight, and a misplaced character meant starting over. Storage was 5MByte disk packs, magnetic tapes, more punch cards, and VRCs (visible record cards with magnetic stripes on the reverse).

The machines were massive, expensive, and rare. Those of us who could communicate with these behemoths were viewed almost as wizards, speaking arcane languages like FORTRAN, COBOL, Assembler, and early versions of BASIC. Computing time was precious, and we spent more time planning our code on paper than actually typing it.

The tools were primitive by today’s standards, but there was something magical about being among the first generation to speak directly to machines. We were creating something entirely new—teaching inanimate objects to think, in a way. Every problem solved felt like a genuine discovery, every program a small miracle.

The Dusk: The AI Inflection Point (2020s)

In recent years, I’ve witnessed a most profound shift. Machine learning and AI tools have begun to automate aspects of programming we once thought required human creativity and problem-solving. Large language models can generate functional code from natural language descriptions, debug existing code, and explain complex systems.

The pace of change has been breathtaking. Just five years ago, we laughed at the limitations of code-generation tools. Remember Ambase? Or The Last One? Today, junior programmers routinely complete in minutes what would have taken days of specialised knowledge previously.

As I look forward, I can’t help but wonder if we’re witnessing the twilight of programming as we’ve known it. The abstraction level continues to rise—from machine code to assembly to high-level languages to frameworks to AI assistants to …? Each step removed programmers further from the machine while making software creation more accessible.

The traditional career path seems to be narrowing. Entry-level programming tasks are increasingly automated, while senior roles require deeper system design and architectural thinking. And, God forbid, people skills. The middle is being hollowed out.

Yet I remain optimistic. Throughout my career, development has constantly reinvented itself. What we call “programming” today bears little resemblance to what I did in the 1970s. The fundamental skill—translating human needs into machine instructions—remains valuable, even as the mechanisms evolve.

If I could share advice with those entering the field today: focus on attending to folks’ needs, not on coding, analysis, design; seek out change rather than just coping passively with it; understand systems holistically; develop deep people knowledge; and remember that technology serves humans, not the other way around.

Whatever comes next, I’m grateful to have witnessed this extraordinary journey—from room-sized computers with kilobytes of memory to AI systems that can code alongside us and for us. It’s been a wild ride participating in one of humanity’s most transformative revolutions.

From Operational Value Streams to Prod•gnosis

Connecting Allen Ward and Bob Marshall’s Product Development Philosophies

A thoughtful exploration of two complementary approaches to transforming product development

Introduction

In the world of product development theory, two complementary approaches stand out for their innovative thinking about how organisations might tackle the creation of new products: Dr Allen Ward’s, born of many years researching the Toyota approach, and my own, which I’ve named Prod•gnosis.

While Dr. Ward’s work on operational value streams emerged from his extensive study of Toyota’s product development system, Prod•gnosis builds upon and extends his ideas into a comprehensive framework focused on organisational transformation for better product development, reduced costs, and more appealing products.

This post explores the connections between these two approaches and how, together, they offer a powerful lens for fundamentally rethinking product development.

The Foundation: Allen Ward’s Operational Value Streams

Allen Ward’s core insight, which has become a cornerstone of lean product development (e.g. in the Toyota Product Development System, TPDS), is elegantly simple yet profound:

“The aim of development is, in fact, the creation of profitable operational value streams.”

An operational value stream (OVS) represents the set of steps that deliver a product or service directly to the customer (and others). This includes activities like manufacturing a product, fulfilling an order, providing a loan, or delivering a professional service.

Ward’s work, drawing from his decade of direct research at Toyota, showed that effective product development isn’t just about designing isolated products. Rather, it’s about designing the entire system through which those products will be manufactured, shipped, sold, and serviced. This holistic approach explains much of Toyota’s success in bringing new products to market quickly and profitably.

Ward emphasised that creating profitable operational value streams requires:

  1. A “whole product” approach that involves every area of the business
  2. Knowledge creation as the central activity of product development
  3. The use of tools like trade-off curves for decision-making and teaching
  4. Systematic waste elimination throughout the development process

Prod•gnosis: Building on Ward’s Foundation

I’m delighted to acknowledge my intellectual debt to Dr. Ward. In my writings on Prod•gnosis, I directly reference Dr. Ward’s influence, adopting his view of “business as a collection of operational value streams.”

I define Prod•gnosis (a portmanteau of “Product”, and “Gnosis” meaning knowledge) as a specific approach to product development that places the creation of operational value streams at its centre. However, Prod•gnosis extends Dr. Ward’s thinking in several notable ways:

The Product Development Value Stream (PDVS)

Prod•gnosis introduces the concept of a dedicated “Product Development Value Stream” (PDVS) as a distinct organisational capability responsible for creating and instantiating operational value streams. I previously wrote:

“I suggest the most effective place for software development is in the ‘Product Development Value Stream’ (PDVS for short) – that part of the organisation which is responsible for creating each and every operational value stream.”

This represents a significant organisational shift from traditional department-based structures.

Challenging IT’s Role in Product Development

Prod•gnosis particularly questions the conventional role of IT departments in product development. Prod•gnosis argues that software development does not belong in IT departments but instead is much more effective when situated within the Product Development Value Stream:

“If we accept that the IT department is poorly suited to play the central role in a Prod•gnosis-oriented organisation, and that it is ill-suited to house or oversee software development (for a number of reasons), then where should software development ‘sit’ in an organisation?”

The answer is clear: within the PDVS, where it can directly contribute to creating operational value streams.

Incremental Implementation

Prod•gnosis proposes a “Lean Startup-like approach” to implementing operational value streams:

“I’m thinking more in terms of a Lean Startup-like approach – instantiating version 0.1 of the operational value stream as early as possible, conducting experiments with its operation in delivering an MVP (even before making its 1.0 product line available to buying customers), and through e.g. kaizen by either the product development or – the few, early – operational value stream folks (or both in collaboration), incrementally modifying, augmenting and elaborating it until the point of the 1.0 launch, and beyond.”

This represents a pragmatic approach to putting Dr. Ward’s principles into practice.

Key Points of Alignment

Despite their different emphases, Ward’s approach and Prod•gnosis share significant philosophical alignment:

1. Value Stream-Centric View

Both view business fundamentally as a series of operational value streams, with product development focused on creating and improving these streams rather than just designing isolated products.

2. Whole Product Approach

Both emphasise the importance of involving all aspects of a business in product development. Prod•gnosis references Toyota’s “Big Rooms” (Obeya), which Ward studied extensively, as an example of effective cross-functional collaboration.

3. Systems Thinking

Both reject piecemeal improvements and advocate for fundamental shifts in organisational perspective. As Ward wrote and Prod•gnosis quotes: “Change will occur when the majority of people in the organisation have learned to see things in a new way.”

And see also: Organisational Psychotherapy as a means to help organisations see things in a new way.

4. Flow Focus

Both emphasise the importance of flow in product development, with Prod•gnosis particularly focused on measures like flow rate, lead time, cycle time, and process cycle efficiency – both of the PDVS and the OVSs.
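
For readers who like those measures made concrete, here’s a minimal sketch of the usual definitions, with invented timestamps (conventions vary from team to team):

    # Common flow measures for a single work item, using day offsets from
    # the original request. All values are invented for illustration.
    requested, started, finished = 0.0, 6.0, 10.0   # day offsets
    value_add_days = 2.5    # time spent actually working on the item

    lead_time = finished - requested    # request -> delivery
    cycle_time = finished - started     # start -> delivery
    pce = value_add_days / lead_time    # process cycle efficiency

    print(f"Lead time:  {lead_time:.1f} days")
    print(f"Cycle time: {cycle_time:.1f} days")
    print(f"Process cycle efficiency: {pce:.0%}")

Flow rate is then simply items finished per unit of time.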

Practical Applications of the Combined Approach

Organisations seeking to apply these ideas might consider:

  1. Creating a dedicated Product Development Value Stream responsible for designing and implementing operational value streams (a.k.a. new products)
  2. Removing software development from IT departments and placing it within the PDVS
  3. Adopting a “whole product” approach that brings together all business functions in the service of product development
  4. Implementing early versions of operational value streams via the PDVS, and then iteratively improving them
  5. Measuring and optimising flow through the product development process

Getting There: Transitioning to Prod•gnosis

Moving from conventional product development approaches to a Prod•gnosis model represents a significant organisational transformation. As Prod•gnosis acknowledges,

“getting there from here is the real challenge”

The transition requires more than just structural or process changes—it demands a fundamental shift in collective mindset.

The Challenge of Organisational Transformation

The Lean literature is replete with stories of organisations failing to move from vertical silos to horizontal value streams. Prod•gnosis presents additional challenges by proposing to remove software development from IT departments and create an entirely new organisational capability (the PDVS).

As Ward wisely noted and Prod•gnosis quotes:

“Change will occur when the majority of people in the organisation have learned to see things in a new way.”

This insight highlights that sustainable transformation depends on shifting collective beliefs rather than merely implementing new processes.

Organisational Psychotherapy as a Path Forward

I propose Organisational Psychotherapy as a methodical approach to shifting collective assumptions and beliefs. As an Organisational Psychotherapist, I apply psychotherapy techniques not just to individuals but to entire organisations.

OP recognises that organisations, like individuals, operate based on deep-seated assumptions and beliefs—i.e. “memeplexes”. These collective mental models determine how an organisation functions and often unconsciously resist change. And see my book “Hearts over Diamonds” (Marshall, 2018) for a more in-depth discussion of memeplexes.

Organisational Psychotherapy works by:

  1. Helping organisations become aware of their current collective beliefs (surfacing)
  2. Examining how these beliefs serve or hinder effectiveness (reflecting)
  3. Supporting the organisation in exploring new, more productive mental models
  4. Facilitating the adoption of these new models

For organisations seeking to move toward Prod•gnosis, this might involve addressing fundamental beliefs about:

  • The nature and purpose of product development
  • The relationship between software development and IT
  • The definition of “whole product”
  • The organisation’s relationship with customers and all the Folks That Matter™
  • How value flows through the organisation

As Prod•gnosis emphasises, this isn’t a quick fix. The transformation to Prod•gnosis represents a significant evolution in how organisations think about and structure product development. The journey requires patience, persistence, and a willingness to examine and change foundational assumptions about how product development works—and how it might work significantly better.

Conclusion

The synthesis of Allen Ward’s operational value stream concept and Prod•gnosis offers a powerful framework for rethinking product development. By viewing product development as the creation of complete operational value streams and establishing organisational structures that support this perspective, organisations can potentially achieve the kind of rapid, profitable product development that Toyota has demonstrated.

As more organisations struggle with digital transformation and the ever-increasing importance of software in product development, these two complementary approaches may provide a valuable roadmap for fundamentally rethinking how products are developed and brought to market.


What are your thoughts on the operational value stream approach to product development? Have you seen examples of it in practice? I’d love for you to share your experiences in the comments below.

Further Reading

For those interested in exploring these concepts further, the following resources might provide some useful insights:

Ward, A. C. (2007). Lean product and process development. Cambridge, MA: Lean Enterprise Institute.

Sobek, D. K., & Ward, A. C. (2014). Lean product and process development (2nd ed.). Cambridge, MA: Lean Enterprise Institute.

Lean Enterprise Institute. (2021). Lean product and process development: Introduction. https://www.lean.org/wp-content/uploads/2021/01/lean-product-and-process-development-introduction.pdf

Marshall, B. (2012, August 4). Prod•gnosis in a nutshell. Think Different. https://flowchainsensei.wordpress.com/2012/08/04/prodgnosis-in-a-nutshell/

Marshall, B. (2013, February 12). Product development flow. Think Different. https://flowchainsensei.wordpress.com/2013/02/12/product-development-flow/

Kennedy, M. N. (2003). Product development for the lean enterprise: Why Toyota’s system is four times more productive and how you can implement it. Richmond, VA: Oaklea Press.

Reinertsen, D. G. (2009). The principles of product development flow: Second generation lean product development. Redondo Beach, CA: Celeritas Publishing.

Marshall, B. (2018). Hearts over diamonds: Serving business and society through organisational psychotherapy. Falling Blossoms.

The Ocean

An Alternative Metaphor to the Forest and Desert

Never one to take an idea as-is, here’s my extension to the Forest and Desert metaphor: the Ocean. On the ocean, landlubbers are all at sea. And it takes some time to find one’s sea legs. More prosaically, the Ocean suggests land-oriented metaphors miss the point: attendants attending to folks’ needs, rather than developers developing software, etc.

The Limitations of Land

The Forest and Desert metaphor, conceived by Beth Andres-Beck and her father Kent Beck, offers us a powerful way to understand the divide between different software development approaches. The Desert (working inside a joyless analytically-minded organisation) represents the harsh reality many teams face: scarce resources, plentiful bugs, uncultivated skills, and difficult communications with users. The Forest, meanwhile, depicts the lush environment of well-run teams using practices like Extreme Programming, where changes flow swiftly into production, protected by tests, code is nurtured, and there’s regular contact with The Customer.

Yet I wonder: do land-based metaphors, however illuminating, ultimately constrain our thinking? Actually, I’m dead certain they do.

Setting Sail for New Horizons

What if we ventured beyond the confines of land altogether? What if the most advanced software development approaches weren’t about making better forests but about learning to navigate an entirely different element—the Ocean?

The Ocean isn’t merely an extension of the Forest; it represents a paradigm shift where the very notion of “software development” begins to dissolve.

The Ocean Paradigm

Leaving Land Behind

In the Ocean paradigm, we’re no longer Forest Dwellers trying to convert Desert Dwellers. Instead, we’re sailors who’ve recognised that the most progressive teams have left the constraints of land entirely. The Ocean represents a radical alternative to any kind of software development—where software itself is downplayed or even disappears in favour of attending to folks’ needs directly. The true challenge isn’t converting Desert to Forest; it’s helping land-dwellers understand that the future lies offshore, beyond software altogether.

Landlubbers All at Sea

When Desert Dwellers visit a Forest, they may feel uncomfortable, but they still recognise the ground beneath their feet. When they encounter an Ocean team, however, they’re utterly disoriented—they’re “all at sea.” The language, practices, and mindsets seem not merely different but alien.

The Ocean team doesn’t talk about “developing software” but about “attending to folks’ needs.” They don’t discuss “requirements” but “who matters?” They don’t “deploy code” but “deliver value.”

Finding Your Sea Legs

Just as sailors need time to adjust to the constant motion of a ship, newcomers to Ocean teams need time to develop their “sea legs.” This adaptation isn’t merely about learning new practices; it’s about fundamentally changing one’s relationship to stability, certainty, and control.

In the Ocean, change isn’t a disruption to be managed but the medium in which we exist. Uncertainty isn’t a risk to be mitigated but a reality to be embraced.

Ocean Practices: Beyond Software Development

As Steve Jobs observed:

“The line of code that’s the fastest to write, that never breaks, that doesn’t need maintenance, is the line you never had to write.”

In the Ocean paradigm, this insight takes on profound significance—the best software is often no software at all.

In this worldview, software is no longer the product; it’s merely the medium, and oftentimes an unnecessary one. Ocean teams don’t focus on “building better software” but on “better meeting folks’ needs”. Software is merely the water through which value flows, and the less of it needed, the better.

When a team reaches true Ocean-thinking, they paradoxically care less about the software itself and more about the needs it meets. They’re ruthlessly people-focussed, willing to discard elegant code or sophisticated architecture if a simpler approach better serves the need—or to eliminate code entirely when a non-software solution would work better.

Attendants, Not Developers

In this paradigm, we’re not “developers” but “attendants”—our role isn’t to build things but to attend to needs. We’re not constructing a product but facilitating a service.

This shift in identity is profound. The attendant doesn’t ask, “How do I build this feature?” but “How do I attend to folks’ needs?” The first question assumes that software is the answer; the second remains open to all possibilities.

Fluid Architecture

Ocean architectures aren’t rigid structures but fluid arrangements that flow and adapt. They’re not designed once and built to last; they’re constantly evolving, components washing in and out as needs change.

In the Ocean, microservices aren’t an architectural style but a natural expression of fluid boundaries. Systems aren’t “decomposed” into services; they naturally arrange themselves around folks and their needs.

The Ocean’s Challenges

The Vastness

The Ocean is vast and can be overwhelming. Without the familiar landmarks of land, newcomers often feel lost. The freedom that comes with Ocean thinking can be paradoxically paralysing—with so many possibilities, where does one begin?

The Storms

The Ocean isn’t always calm. Market changes, emerging technologies, and evolving user needs can create perfect storms that test even the most seasoned crews. Unlike Forest teams, who can find shelter under the canopy, Ocean teams must learn to sail through storms, sometimes changing course entirely.

As John Shedd observed:

“A ship in harbor is safe — but that is not what ships are built for.”

i.e. a developer writing code might feel safe, but that’s not what developers are for.

The Depths

Beneath the surface lie depths that few explore. The technical implications of truly embracing the Ocean mindset go far beyond conventional practices. Concepts like joy in work, social dynamics, and collaborative knowledge work take on new meanings in this context.

Navigating Between Worlds

As someone who’s sailed these waters, I find myself in an interesting position. I can speak the language of Desert Dwellers and Forest Dwellers, but my heart is with the Ocean. I work to help teams not just to create better Forests but to prepare for their voyage to sea, where they might discover that the most elegant solution is often the absence of software itself—a recognition that the best line of code is frequently the one never written.

The journey from Desert to Forest is challenging but well-documented. The voyage from Forest to Ocean is less charted and requires not just new practices but new metaphors, new language, and new ways of thinking and being.

Conclusion: Beyond Metaphors

Perhaps the most profound insight from the Ocean metaphor is that we might choose to hold all metaphors lightly. The Desert, Forest, and Ocean are not realities but lenses through which we view reality. The most advanced teams know when to use each lens and when to set them all aside.

The true masters aren’t wedded to being Desert Dwellers, Forest Dwellers, or even Ocean Navigators. They’re simply pathfinders, using whatever metaphor best illuminates the way forward.

As for me, I’ll continue to help cultivate healthy Forests and prepare those who are ready for their Ocean voyage. After all, the tide is rising, and the future belongs to those who can navigate these new waters.

The Bazillion Things They’re Never Going to Teach You About Software Development

In my 50+ years in software development, I’ve come to realise that coding is merely the tip of the iceberg. When I first started this blog (2009-ish), I wanted to create a space where developers and their managers could explore the full breadth of challenges that make software development such a demanding endeavour. Looking back at the journey so far, I’m proud of how this blog has examined the many dimensions of software development – dimensions that extend far beyond simply writing and testing code.

Each post has been a stepping stone in understanding the intricate dance that is modern software development. I’ve explored how effective software development encompasses so much more than technical prowess. Each post touches on an aspect of software development that no one is ever going to teach you about.

For example, human factors have been a recurring theme. Team dynamics, communication challenges, and the psychological aspects of collaborative knowledge work all significantly impact our efforts. Managing expectations—both our own and those of all the other Folks That Matter™—requires skills that aren’t taught in computer science programs or company training courses. Nor even in on-the-job training.

I’ve tackled the evolving landscape of development approaches and how choosing the right approach for your team and project can substantially affect how well needs are met. From planning to deployment strategies, the routines surrounding code creation often determine success more than the code itself.

The business side of software development also presents its own set of challenges. Budget constraints, market pressures, and aligning folks’ needs with business objectives create tensions that developers face daily. Understanding the “why” behind features is as important as knowing how to implement them.

Blockers

But what truly prevents developers from becoming all they can be? Often, it’s the invisible barriers we don’t discuss enough. The narrow focus on technical skills at the expense of soft skills. The resistance to understanding business contexts (folks’ needs) that give our work meaning. The hesitation to step outside comfort zones to learn and apply new paradigms. The isolation that comes from working heads-down instead of building relationships across organisational boundaries. The fear of failure that prevents experimentation and growth. These limitations—both self-imposed and environmentally reinforced—are what truly hold back developers’ potential more than any technical challenge ever could.

This blog has grown beyond my initial vision thanks to your engagement. Each comment and message has helped shape our collective exploration into what makes software development such a challenging—and rewarding—field.

Because when you understand that coding is just one piece of the puzzle, you become not just a better developer, but a more effective contributor to the entire software development lifecycle.

What aspects of software development beyond coding and testing have you found most challenging? I’d love to hear your thoughts in the comments below.

Two Versions of No Testing

Introduction

Welcome to a writing experiment! Below you’ll find two versions of a post about software testing. Both present the same core argument but use different rhetorical styles. We’re curious about how these different approaches affect reader engagement and response.

What interests you more? Which style do you find more persuasive (even if you disagree with the content)? Which makes you think more deeply about the issues raised?

We invite you to share your thoughts in the comments. Consider:

  • Which version held your attention longer?
  • Which prompted you to think more critically about your own views?
  • Which would you be more likely to share with colleagues?
  • How did you respond, emotionally, to each version?

I’m at least as interested in how the different writing styles affect your engagement with the ideas as in your response to #NoTesting itself.

The Imperative Version

Stop Testing and Start Coding Properly

Acknowledge Your Incompetence

Let’s be real: if you need testing and testers, you’re just admitting you don’t know how to build stuff properly. Every test you write is a monument to your team’s own incompetence.

Face the False Comparisons

Think about it. When you’re really good at something, do you need someone checking your work? Does a master chef need someone tasting every dish before it goes out? Does a skilled surgeon need someone double-checking their sutures? No. They’ve mastered their craft.

Reject the Culture of Doubt

But in software development, we’ve normalised this culture of doubt. We’ve created entire roles and rituals – TDD, “QC Engineers,” “Test Automation Specialists” – dedicated to proving we can’t trust developers to do their jobs right. It’s institutionalised incompetence.

Stop Writing Tests for Tests

And don’t get me started on unit tests. Writing code to test your code? If you need to write tests to verify your code works, you don’t understand what you’re building well enough to build it correctly in the first place.

Throw Away Your Crutches

The truth is, testing is a crutch. It’s what mediocre developers, teams and businesses rely on because they can’t think through their solutions properly. Real developers understand their systems so thoroughly that they can anticipate and prevent defects from ever occurring in the first place.

Stop Wasting Time

Every hour spent writing tests is an hour admitting you’re not good enough at your job. Every tester hired is a living declaration that your company can’t be trusted to deliver quality work.

Recognise the Amateurs

Want to know if you’re dealing with amateurs? Look for their test coverage metrics. The higher the number, the more they’re compensating for their lack of real skill.

Accept the Truth

It’s time to call this out for what it is. Testing isn’t a “best practice” – it’s a “best practice” only for those who haven’t mastered their craft. Competent people don’t need this safety net. They deliver working products because they know what they’re doing.

The Interrogative Version

Are You Still Testing? That’s Rather Embarrassing, Isn’t It?

Why Do You Need Testing at All?

As James Bach provocatively suggested in his controversial 2018 blog post “Testing is Dead, Long Live Development,” isn’t the entire testing paradigm built on a foundation of mistrust? Bach argued that “every test case written is an admission of design failure.” Mightn’t he have a point?

Do Real Professionals Need Verification?

Sarah Thompson’s infamous Medium article “The Testing Trap” (2021) posed a fascinating question: “Why have we created an entire industry around assuming failure?” Didn’t she demonstrate rather conclusively that organisations with extensive testing regimes actually shipped fewer features than their counterparts?

Have We Created a Culture of Distrust?

The “No Testing Manifesto,” published anonymously on DevRant in 2020, raised a compelling point: haven’t we simply created a self-fulfilling prophecy? When you expect developers to make mistakes, don’t they invariably live down to those expectations?

Why Test Code with More Code?

David Chen’s controversial LinkedIn post “Unit Tests: The Emperor’s New Clothes” (2022) presented fascinating data: didn’t he show that companies spending more than 20% of their development time on unit tests actually had higher post-release defect rates? Mightn’t this suggest we’re solving the wrong problem?

Are Tests Just Props for the Mediocre?

As noted in “The Death of QA” (DevOps Quarterly, 2023), haven’t the most innovative tech companies been quietly scaling back their testing departments? Wasn’t there a telling correlation between reduced testing overhead and increased innovation speed?

How Much Time Are You Wasting?

The “Zero Test Movement” gaining traction in certain Silicon Valley startups claims to have demonstrated a 40% increase in feature delivery speed after abandoning traditional testing practices. Mightn’t this suggest we’ve been approaching quality entirely wrong?

When Will You Face Reality?

Remember what Peter Miller wrote in his piece “Testing: The Great Lie We Tell Ourselves” (2022): “Every hour spent writing tests is an hour not spent improving your core product.” Isn’t that the uncomfortable truth we’re all avoiding?

Is Software Development Really a Kind of Collaborative Knowledge Work?

Software development has long occupied a unique space in the world of human endeavour. Whilst its outputs are tangible—functioning applications, systems, and digital tools—the process itself is largely invisible to outsiders. This raises an interesting question: Is software development truly a form of collaborative knowledge work (CKW), or are we perhaps mischaracterising its nature?

The Origins of Collaborative Knowledge Work

The term “collaborative knowledge work” has its roots in Peter Drucker’s pioneering analysis of the post-industrial economy. In his seminal 1959 work “Landmarks of Tomorrow,” Drucker introduced the term “knowledge worker” to describe professionals who work primarily with information and theoretical knowledge. This marked a crucial shift from traditional manual labour to what he saw as an emerging class of workers whose primary capital was knowledge rather than manual skills or physical resources.

Drucker’s insight wasn’t merely descriptive—it was predictive. He foresaw that the majority of work in developed economies would eventually centre around the creation, manipulation, and application of knowledge. Throughout the 1960s and 1970s, he further developed this concept, arguing that knowledge had become the primary economic resource, displacing traditional factors of production like land, labour, and capital.

The collaborative aspect of knowledge work emerged as organisations began to grapple with the implications of Drucker’s observations. The increasing complexity of knowledge-based tasks meant that no single individual could possess all the necessary expertise. This led to the recognition that effective knowledge work required not just individual expertise, but the ability to combine and leverage diverse knowledge through collaboration.

By the 1990s, researchers and practitioners had begun explicitly examining the collaborative nature of knowledge work. The term “collaborative knowledge work” emerged from the intersection of:

  1. Drucker’s knowledge worker concept
  2. Research into computer-supported cooperative work (CSCW)
  3. Studies of organisational learning and knowledge management
  4. The rise of global, distributed teams enabled by technology

This evolution reflected a deeper understanding that knowledge work isn’t just individual cognitive labour—it’s inherently social and collaborative. Modern knowledge work involves complex networks of interaction, where value is created not just through individual expertise, but through the synthesis and combination of multiple perspectives and knowledge domains.

The emergence of software development as a discipline coincided with this theoretical evolution. As software systems grew more complex and interconnected, the field naturally embodied many of the principles Drucker and subsequent researchers identified as characteristic of knowledge work. The highly collaborative nature of modern software development, with its emphasis on shared understanding, collective problem-solving, teamwork, and continuous learning, makes it a particularly illuminating example of collaborative knowledge work in practice.

Understanding Collaborative Knowledge Work

Before diving into software development specifically, let’s establish what we mean by collaborative knowledge work. CKW typically involves:

  1. Complex problem-solving requiring specialised expertise
  2. High levels of interdependence between team members
  3. The creation and sharing of new knowledge
  4. Continuous learning and adaptation
  5. Work that is primarily cognitive rather than physical

The Software Development Landscape

When we examine modern software development practices, the parallels to CKW become striking. Consider a typical development team working on a complex application:

Knowledge Creation and Sharing

Developers constantly create new knowledge through:

  • Architecture decisions that shape system design
  • Novel solutions to technical challenges
  • Documentation that captures insights and rationale
  • Code reviews that spread understanding across the team
  • Technical specifications that crystallise shared understanding

Collaborative Nature

The collaborative aspects are evident in:

  • Pair programming sessions
  • Team-wide architecture discussions
  • Cross-functional planning meetings
  • Shared ownership of code bases
  • Collective code review processes

Continuous Learning

Software development demands perpetual learning:

  • Keeping up with new technologies and frameworks
  • Understanding evolving security threats
  • Learning from production incidents
  • Adapting to changing user needs
  • Improving development processes

Beyond Simple Collaboration

What makes software development particularly interesting as CKW is its layered nature. Developers collaborate not just with their immediate teammates, but with:

  • Future maintainers through clear code and documentation
  • The broader developer community through open source contributions
  • Users through feature development and bug fixes
  • Past developers through code archaeology and maintenance
  • Tools and frameworks through API usage and integration

The Knowledge Dimension

The knowledge aspects of software development are profound:

  1. Tacit Knowledge: Much of a developer’s expertise cannot be easily documented—it’s built through experience and practice.
  2. Explicit Knowledge: Code, documentation, and artefacts represent crystallised knowledge that can be shared and built upon.
  3. Meta-Knowledge: Understanding how to structure, maintain, and evolve complex systems requires high-level thinking about knowledge itself.

Addressing Counter-Arguments

Some might argue that software development is more akin to craft work or engineering than knowledge work. However, this view misses several key points:

  1. Whilst there are craft aspects to coding, modern software development involves far more than just writing code.
  2. The complexity of software systems requires constant knowledge creation and sharing that goes beyond traditional engineering disciplines.
  3. The rapid pace of technological change means that the knowledge component of software development is constantly evolving.

Why It Matters – The Pitfalls of Category Errors

Misclassifying the nature of software development work can lead to significant organisational dysfunction. When companies treat software development as purely technical work or simple task execution, several problems emerge:

  1. Metrics Misalignment: Measuring software development through simplistic metrics like lines of code or number of tickets closed fundamentally misunderstands the knowledge-intensive nature of the work. This can lead to perverse incentives and poor quality outcomes.
  2. Resource Allocation Errors: Treating development as purely technical work often results in insufficient allocation of time and resources for crucial knowledge-building activities like architecture discussions, code reviews, and documentation.
  3. Communication Breakdown: Failing to recognise the collaborative knowledge aspects can lead to communication structures that hinder rather than enable effective knowledge sharing and creation.
  4. Talent Management Issues: When organisations view software development primarily as task execution, they often struggle with:
    • Retention of experienced developers who feel undervalued
    • Career progression paths that don’t acknowledge the knowledge dimension
    • Training programmes that focus too heavily on technical skills while neglecting collaborative and knowledge-sharing capabilities
  5. Process Misalignment: Implementing processes designed for routine production work can actively harm software development efforts by:
    • Fragmenting knowledge work into artificial task boundaries
    • Reducing opportunities for collaborative problem-solving
    • Creating unnecessary documentation overhead that doesn’t contribute to shared understanding
  6. Innovation Barriers: Treating software development as purely technical execution can stifle innovation by:
    • Limiting cross-pollination of ideas
    • Reducing experimentation opportunities
    • Constraining the organic evolution of solutions
  7. Quality Impact: When the knowledge work aspect is overlooked, quality often suffers through:
    • Reduced emphasis on building shared understanding
    • Limited investment in architectural knowledge
    • Insufficient attention to knowledge transfer and maintenance
  8. Management Monstrosities: Miscategorising software development as something other than CKW means it will inevitably be managed inappropriately.

The consequences of these category errors can be severe and long-lasting, affecting not just the immediate software development process but the entire organisation’s ability to leverage technology effectively.

Conclusion

After careful analysis, it’s clear that software development isn’t just collaborative knowledge work—it’s perhaps one of the purest examples of CKW in the modern economy. The combination of:

  • Complex problem-solving
  • Team-based knowledge creation
  • Continuous learning requirements
  • High interdependence
  • Meta-knowledge management

makes software development a quintessential form of collaborative knowledge work. Far from being a mischaracterisation, recognising software development as CKW helps us better understand its nature and potentially improve how we approach it.

This recognition has important implications for:

  • How we structure development teams
  • The tools and processes we use
  • How we measure productivity
  • How we train and develop software professionals
  • The way we manage software projects

Understanding software development as collaborative knowledge work isn’t just an academic exercise—it’s a crucial insight that can help us build better software, more effectively, with happier and more productive teams.