Wankered

Understanding and addressing developer exhaustion in the software industry

In software development, there’s a lot of talk about technical debt, scalability challenges, and code quality. But there’s another debt that’s rarely acknowledged: the human cost. When we are consistently pushed beyond our limits, when the pressure never lets up, when the complexity never stops growing—we become wankered. Completely and utterly exhausted.

This isn’t just about being tired after a long day. This is about the bone-deep fatigue that comes from months or years of ridiculous practices, impossible deadlines, and the constant cognitive load of modern software development.

The Weight of Complexity

Mental Load Overflow

Modern software development isn’t just about writing code. We are system architects, database administrators, DevOps engineers, security specialists, teammates, user experience designers, and people—often all in the same day. The sheer cognitive overhead of keeping multiple complex systems in our minds simultaneously is exhausting.

Every API integration, every third-party service, every microservice adds to the mental model that we must maintain. Eventually, that mental model becomes too heavy to carry.

Context Switching Fatigue

Nothing burns us out faster than constant context switching. One moment we’re debugging a race condition in the payment service, the next we’re in a meeting about user interface changes, then we’re reviewing someone else’s pull request in a completely different part of the codebase.

Each switch requires mental energy to rebuild context, and that energy is finite. By the end of the day, we’re running on empty, struggling to focus on even simple tasks.

The Always-On Culture

Slack notifications at 9 PM. ‘Urgent’ emails on weekends. Production alerts that could technically wait until Monday but somehow never do. The boundary between work and life has dissolved, leaving us in a state of perpetual readiness that prevents true rest and recovery.

The Exhaustion Cycle

Sprint After Sprint

Agile development was supposed to make our work more sustainable, but too often it’s become an excuse for permanent emergency mode. Sprint planning becomes sprint cramming. Retrospectives identify problems that never get addressed because there’s always another sprint starting tomorrow.

The two-week rhythm that should provide structure instead becomes a hamster wheel, with each iteration bringing new pressure and new deadlines.

Technical Debt Burnout

Working with legacy systems day after day takes a psychological toll. When every simple change requires hours of archaeological work through undocumented code, when every bug fix introduces two new bugs, when the system fights back at every turn—the frustration compounds into exhaustion.

The Perfectionism Trap

Software development attracts people who care deeply about their craft. But in an environment where perfection is impossible and deadlines are non-negotiable, that conscientiousness becomes a burden. The gap between what we want to build and what we have time to build becomes a source of constant stress.

How Tired Brains Sabotage Productivity

The Neuroscience of Mental Fatigue

When we’re mentally exhausted, our brains don’t just feel tired—they actually function differently. The prefrontal cortex, responsible for executive functions like planning, decision-making, and working memory, becomes significantly impaired when we’re fatigued.

This isn’t a matter of willpower or motivation. Tired brains literally cannot process complex information as effectively. The neural pathways responsible for holding multiple concepts in working memory become less efficient. Pattern recognition—crucial for debugging and coding—deteriorates markedly.

Cognitive Load and Code Complexity

Software development requires managing enormous amounts of information simultaneously: variable states, function dependencies, user requirements, interpersonal relationships, system constraints, and potential edge cases. When our brains are operating at reduced capacity due to exhaustion, this cognitive juggling act becomes nearly impossible.

We make more logical errors when tired, miss obvious bugs, and struggle to see the bigger picture whilst handling implementation details. The intricate mental models required for complex software architecture simply cannot be maintained when our cognitive resources are depleted.

Decision Fatigue in Development

Every line of code involves decisions: variable names, function structure, error handling approaches, performance trade-offs. A fatigued brain defaults to the path of least resistance, often choosing quick fixes over robust solutions.

Research suggests that as mental fatigue increases, decision quality declines markedly. This is why code written during crunch periods often requires extensive refactoring later—our tired brains simply couldn’t evaluate all the implications of each choice.
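
To make that concrete, here is a small illustrative sketch (the endpoint and function names are invented for the example, not drawn from any real codebase) contrasting the quick fix a depleted brain reaches for with the more robust handling a fresher one might take the time to write:

// Illustrative only: a hypothetical payment-status lookup.
async function getPaymentStatusTired(paymentId) {
    // The path of least resistance: wrap everything, swallow whatever goes wrong,
    // and hand back a default that hides the cause from the caller.
    try {
        const response = await fetch('/api/payments/' + paymentId);
        const body = await response.json();
        return body.status; // may quietly be undefined if the response wasn't what we assumed
    } catch (e) {
        return 'unknown';
    }
}

async function getPaymentStatusRested(paymentId) {
    // The same task with the failure modes considered and surfaced to the caller.
    const response = await fetch('/api/payments/' + paymentId);
    if (response.status === 404) {
        throw new Error('Payment ' + paymentId + ' not found');
    }
    if (!response.ok) {
        throw new Error('Payment service error: HTTP ' + response.status);
    }
    const body = await response.json();
    if (typeof body.status !== 'string') {
        throw new Error('Malformed payment record for ' + paymentId);
    }
    return body.status;
}

Neither version is beyond the same developer’s ability; the difference is whether enough cognitive budget remains to think through the failure modes at all.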

The Organisational Impact

Productivity Paradox

When we’re exhausted, we’re not just unhappy—we’re less effective. Decision fatigue leads to poor architectural choices. Mental exhaustion increases bugs and reduces code quality. The pressure to deliver faster often results in delivering slower, as technical shortcuts create more work down the line.

Knowledge Flight Risk

When experienced members of our teams burn out and leave, they take irreplaceable institutional knowledge with them. The cost of replacing a senior developer who knows our systems intimately is measured not just in recruitment and onboarding time, but in the months or years of context that walks out the door.

Innovation Drought

Exhausted teams don’t innovate. We survive. When all our mental energy goes towards keeping existing systems running, there’s nothing left for creative problem-solving, quality improvement, or advancing the way the work works.

Sustainable Practices

Realistic Planning

Account for the hidden work: debugging, documentation, code review, deployment issues. Stop treating best-case scenarios as project timelines.

Protect Deep Work

We need uninterrupted blocks of time to tackle complex problems. Open offices and constant communication tools are the enemy of thoughtful software development. Create spaces and times where deep work is possible. (And we’ll get precious little help with that from developers).

Embrace Incrementalism

Not everything needs to be perfect in version one. Not every feature needs to ship this quarter. Sometimes the most sustainable approach is to build 80% of what’s wanted well, rather than 100% of it poorly.

Technical Health Time

Just as athletes need recovery time, codebases need maintenance time. Build technical debt reduction into our planning. Make refactoring a first-class citizen alongside feature development.

Individual Strategies

Boundaries Are Not Optional

Learn to say no. Not to being helpful, not to solving problems, but to the assumption that every problem needs to be solved immediately by any one of us.

Energy Management

Recognise that mental energy is finite. Plan the most challenging work for when we’re mentally fresh. Use routine tasks as recovery time between periods of intense focus.

Continuous Learning vs. Learning Overwhelm

Stay curious, but be selective. We don’t need to learn every new framework or follow every technology trend. Choose learning opportunities that align with career goals and interests, not just industry hype.

Physical Foundation

Software development is intellectual work performed by physical beings. Sleep, exercise, and nutrition aren’t luxuries—they’re professional requirements. Our ability to think clearly depends on taking care of our bodies.

Recognising the Signs

Developer exhaustion doesn’t always look like dramatic burnout. Often it’s subtler:

  • Finding it harder to concentrate on complex problems
  • Feeling overwhelmed by tasks that used to be routine
  • Losing enthusiasm for learning new technologies
  • Increased irritability during code reviews or meetings
  • Physical symptoms: headaches, sleep problems, tension
  • Procrastinating on work that requires deep thinking
  • Feeling disconnected from the end users and purpose of our work

Moving Forward

The goal isn’t to eliminate tiredness from software development—complex cognitive work is inherently demanding. The goal is to make that work sustainable over the long term. (Good luck with that, BTW)

This means building organisations that value our wellbeing not as a nice-to-have, but as a prerequisite for building quality software. It means recognising that the most productive developer is often the one who knows when to stop working. Which in turn invites us to confer autonomy on developers.

Software development will always be challenging. The problems we solve are complex, the technologies evolve rapidly, and the stakes continue to rise. But that challenge can energise us, not exhaust us.

When we’re wankered—truly, deeply tired—we’re not serving our users, our teams, or ourselves well. The most sustainable thing we can do is acknowledge our limits and work within them.

Because the best code isn’t written by the developer who works the longest hours. It’s written by the developer who brings their full attention and energy to the problems that matter most.


If you’re feeling wankered, you’re not alone. This industry has a long way to go in creating sustainable working conditions, but change starts with honest conversations about what we’re experiencing.

Further Reading

Baumeister, R. F., & Tierney, J. (2011). Willpower: Rediscovering the greatest human strength. Penguin Books.

DeMarco, T., & Lister, T. (2013). Peopleware: Productive projects and teams (3rd ed.). Addison-Wesley Professional.

Fowler, M. (2019). Refactoring: Improving the design of existing code (2nd ed.). Addison-Wesley Professional.

Hunt, A., & Thomas, D. (2019). The pragmatic programmer: Your journey to mastery (20th anniversary ed.). Addison-Wesley Professional.

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Maslach, C., & Leiter, M. P. (2022). The burnout challenge: Managing people’s relationships with their jobs. Harvard University Press.

McConnell, S. (2006). Software estimation: Demystifying the black art. Microsoft Press.

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81-97.

Newport, C. (2016). Deep work: Rules for focused success in a distracted world. Grand Central Publishing.

Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257-285.

Winget, L. (2006). It’s called work for a reason: Your success is your own damn fault. Gotham Books.

A New Way of Looking at Software Development

From the transcript of Dr Casey Morgan’s controversial presentation at CodeCon 2025

The auditorium buzzed with anticipation as Dr Casey Morgan stepped up to the presentation platform. Around them, 500 of the world’s top developers had just finished the morning coffee break, many still discussing their current projects using their familiar AST toolchains—some clicking through visual node editors, others using drag-and-drop tree builders to show off recent work.

‘Thank you for joining me today,’ Casey began, gesturing to dismiss the tree structures that had been displaying on the main screen. ‘I’m here to propose something… unconventional. A fundamentally different way to think about code representation that I believe could offer some unique advantages.’

Casey paused, scanning the faces of developers who had grown up building programmes by directly assembling syntax trees—clicking to add nodes, dragging to restructure branches, using visual editors, commands and APIs.

‘I call it “textual programming”.’

A wave of puzzled murmurs rippled through the audience. In the front row, Marcus Chen, lead architect at Distributed Dynamics, frowned slightly.

The Unusual Proposal

Casey’s concept was unlike anything the programming community had encountered: instead of building programmes by manipulating AST structures through visual node editors and drag-and-drop interfaces, programmes could be represented as linear sequences of human-readable symbols.

‘Imagine,’ Casey said, projecting a strange sequence onto the main display:

function calculateFibonacci(n) {
    if (n <= 1) return n;
    return calculateFibonacci(n-1) + calculateFibonacci(n-2);
}

‘This linear representation would encode the same semantic meaning as our AST structures, but as a sequential stream of characters that developers would… type directly.’

The audience stared at the bizarre notation with growing amusement.

The Immediate Concerns

Sarah Kim, Senior AST Engineer at MindMeld Corp: ‘Dr Morgan, I’m struggling to understand the practical implementation. How would developers ensure structural integrity? When I use a visual node editor, I literally cannot create a malformed tree—the interface simply won’t allow invalid connections. But with this… character stream… what prevents someone from typing complete nonsense?’

Casey nodded. ‘That’s certainly a challenge. The system would need to constantly re-parse these character sequences and provide error feedback when the text doesn’t represent a valid tree structure.’

The audience shifted uncomfortably.

Marcus Chen: ‘Wait, you’re suggesting a system where the code could be in an invalid state? Where developers could accidentally break their programme just by typing the wrong character? That seems like a massive degradation from our current reliability.’

Casey: ‘I understand that sounds concerning, but consider this: what if the ability to work in temporarily invalid states actually enables more fluid thinking? Sometimes you need to break something before you can rebuild it better. Current tree editors force you to maintain validity at every step, which might constrain exploration. Interestingly, there were early experiments with syntax-directed programming environments in the 1980s that enforced similar structural constraints, and environments like Mjølner for the BETA language that provided more structure-aware development tools, but they never achieved the fluidity that our modern AST tools provide. Perhaps the pendulum swung too far towards structural rigidity, and text could offer a middle ground.’

Dr James Wright, Director of Innovation Institute: ‘But Casey, you’re mischaracterising our current tools. Modern AST editors do support invalid intermediate states—they make them visible and actionable in ways text never could. When I’m restructuring a complex tree, the IDE shows me exactly which nodes are problematic, suggests valid completions, and even maintains partial compilation contexts. I can experiment freely whilst getting real-time feedback about structural issues. Your text approach would lose all of that sophisticated error guidance and replace it with… what? Cryptic parser error messages? We’ve already solved the flexibility problem without sacrificing the safety and intelligence of our tools.’

Sarah Kim: ‘And think about what else you’d be throwing away! Our semantic-aware merge algorithms that automatically resolve conflicts at the meaning level, real-time type inference that shows you the implications of every change, automated dependency tracking, intelligent refactoring that understands program semantics—all of that would be impossible with linear character sequences. You’d be asking developers to manually track imports, manually resolve merge conflicts, and manually verify type safety. It’s like proposing we go back to manual memory management when we have garbage collection.’

Unknown developer: ‘Not to mention accessibility. Our structure-aware screen readers work beautifully with AST nodes, providing rich semantic information to visually impaired developers. Text files would force them back to listening to character-by-character descriptions of syntax symbols. And what about internationalisation? AST nodes work universally, but your text files would tie us to specific character encodings and syntactic conventions.’

Marcus: ‘The security implications alone are staggering. Text files could contain hidden Unicode characters, be corrupted by encoding issues, or have malicious content inserted between visible characters. Our AST verification systems prevent all of that. And the environmental cost—think about all the redundant parsing and recompilation. Text would waste enormous amounts of computing resources that our direct tree manipulation avoids entirely.’

Dr Wright: ‘I’m concerned about the cognitive overhead. When I’m building a complex algorithm, I can see the entire tree structure, drag nodes around, visualise the flow. How would anyone comprehend programme structure from a linear sequence of characters?’

Casey: ‘That’s where it gets interesting—linear text might reveal different patterns than tree visualisation. You might notice repetitive structures, common sequences, or algorithmic patterns that are harder to see when nodes are spatially distributed. The constraint of linearity could force a different kind of structural thinking.’

Casey remained calm, though the room’s scepticism was palpable. ‘The idea is that developers would develop familiarity with common textual patterns. They’d learn to “read” the structure from the character sequences.’

The Growing Scepticism

As the session continued, the questions became more pointed:

Rapid Prototyping: ‘Casey, you mentioned quick sketching, but I can prototype by dragging a few function nodes together and see exactly what I’m building as I construct it. Why would typing individual characters be faster than visual construction?’

Version Control: ‘Instead of tracking AST transformations, we could diff character sequences directly. Imagine seeing exactly which symbols changed between versions—a completely new form of change visualisation.’

Universal Accessibility: ‘Text could be manipulated with the most basic tools imaginable. No specialised AST tooling required—potentially opening programming to entirely new populations who never learned tree manipulation interfaces, command-line utilities, or visual node editors.’

Cognitive Revolution: ‘Linear representation might unlock different types of thinking. Whilst AST commands encourage procedural construction, text could promote holistic algorithmic visualisation—potentially revealing new problem-solving approaches.’

Sara Kumar, Independent Developer: ‘Casey, this is mind-blowing! But I’m struggling to visualise the workflow. How would developers navigate these linear sequences? Our current AST tooling is so diverse and powerful—whether through tree views, command pipelines, or node graphs—how would you achieve similar precision with text?’

Casey’s eyes lit up. ‘That’s where it gets really interesting. Navigation could be character-based, word-based, or even pattern-based. Imagine search systems that find textual patterns across codebases—no need for complex tree queries or specialised AST search interfaces!’

The Mounting Objections

The questions grew more challenging as the session progressed.

Dr Wright: ‘Let’s talk about collaboration. When my team works together, we can see each other manipulating the same tree in real-time, pointing to specific nodes, discussing structure visually. How would that work with linear text?’

Marcus: ‘And error prevention—our IDEs guide us through valid tree construction. They suggest appropriate node types, validate connections, prevent impossible structures. Text systems would need to replicate all of that functionality whilst being fundamentally less intuitive.’

Sarah Kim: ‘Plus, there’s the execution efficiency issue. When I modify a node in my tree, the running programme updates instantly—real-time incremental compilation means our executables are always synchronised with the current AST state. With text, you’d need to reparse and recompile every time you make a change. That seems incredibly inefficient.’

Dr Wright: ‘And consider something as basic as cut and paste. When I copy an AST fragment, I’m copying a complete, semantically valid tree structure with all its type information and metadata. The IDE ensures I can only paste it in locations where it makes sense. With text, you’d be copying… character sequences? With no understanding of structure or validity? You could accidentally paste a function definition in the middle of an expression.’

Unknown developer: ‘But Casey, consider the sheer inefficiency. When I create that fibonacci function, I click “add function node”, type “calculateFibonacci”, set the parameter “n”, drag in a conditional, and set the values. With your text system, developers would have to manually type “function”, all the braces, “if”, “return”, parentheses, semicolons—why type all that structural syntax when the interface can handle it automatically?’

Casey: ‘Well, the redundancy does seem excessive when you put it that way…’

Sarah Kim: ‘The debugging implications are staggering. When something goes wrong, I can visually trace through my tree, see the data flow, identify problem nodes. You’re proposing we debug by… reading character sequences?’

Unknown voice from the back: ‘Dr Morgan, this feels like proposing assembly language when we have high-level visual programming tools. What’s the actual benefit?’

The Fundamental Questions

As the hour progressed, the audience’s concerns crystallised around core issues:

Safety: Text-based programming would introduce countless opportunities for errors that were literally impossible with guided tree construction.

Productivity: Every task that was currently visual and intuitive would become abstract and error-prone.

Learning Curve: New developers would need to memorise syntax rules instead of learning through visual exploration.

Tool Complexity: Text editors would need to recreate all the intelligence of current AST tools whilst being fundamentally less capable.

Maintenance: Reading and understanding existing code would become dramatically more difficult without visual tree representation.

Sara Kumar: ‘Casey, I have to ask—have you actually tried building a complex system this way? It sounds like you’d spend more time debugging syntax errors than solving actual problems.’

Casey smiled weakly. ‘The learning curve would certainly be steep initially.’

The Uncomfortable Reality

Towards the end of the session, the questions became more direct.

Dr Maria Santos, Education Director at Code Academy: ‘Casey, we teach programming through visual tree building because it’s intuitive—students can see programme structure immediately. You’re proposing we replace that with… memorising character sequences? How would that possibly be better for learning?’

Casey: ‘I wonder if visual-first education might actually be limiting in some ways. When students start with trees, they think in terms of discrete components. Linear text might encourage them to think about flow, narrative, the sequential logic of computation. Different mental models could lead to different insights.’

Several audience members shook their heads in disbelief.

The Uncomfortable Questions

Towards the end of the session, the questions became more philosophical.

Marcus: ‘Casey, I need to understand your thesis here. You’ve shown us a system that would make programming more error-prone, harder to visualise, more difficult to debug, and require extensive memorisation of arbitrary syntax rules. You’re asking us to give up immediate visual feedback for… what exactly?’

Sarah Kim: ‘Every advantage you’ve mentioned—rapid prototyping, version control, collaboration—we already have superior solutions for. Our visual systems are faster, safer, and more intuitive. I genuinely don’t understand the appeal of this approach.’

Dr Wright: ‘And the security implications worry me. Our current tree validators ensure code integrity. Text files could be easily corrupted or maliciously modified. How would text-based systems prevent tampering?’

Unknown developer: ‘Dr Morgan, with respect, this sounds like a needlessly complex solution to problems we’ve already solved. Why would anyone choose to make programming harder?’

The Final Challenge

As the session neared its end, Marcus Chen stood up with a bemused expression.

‘Casey, I want to understand something. You’ve proposed replacing our visual, guided, error-preventing development environment with a system based on memorising syntax rules and typing linear character sequences. A system where malformed programmes are possible, where structure is invisible, where collaboration becomes awkward text sharing.

‘I’m trying to find the upside here, but every supposed benefit seems to be something we already do better with visual tree manipulation. The downsides, however, are enormous: syntax errors, reduced productivity, harder debugging, steeper learning curves, and cognitive overhead.

‘So my question is simple: other than academic curiosity, why would any rational developer choose this approach?’

Casey looked out at the audience—hundreds of developers who could shape logic with drag-and-drop simplicity, who collaborated through shared visual workspaces, who had never known the frustration of syntax errors or the cognitive load of maintaining mental models of invisible structure.

‘Perhaps,’ they said quietly, ‘there are insights that only come from constraint. Maybe working with a more limited medium forces different kinds of thinking. Or maybe…’

They paused, seeing the politely sceptical faces.

‘Maybe you’re right. Maybe this is just an interesting academic curiosity with no practical value.’

Epilogue

Dr Morgan’s presentation ended to scant and unconvinced murmurs. Whilst their research into ‘textual programming’ generated some academic discussion amongst theoretical computer scientists, the broader development community found the proposal risible.

A few independent researchers built experimental text editors and basic parsers, mostly to satisfy their curiosity about this unusual approach. Most found the experience frustrating and unproductive—exactly as the CodeCon audience had predicted.

The general consensus was that Dr Morgan had demonstrated an interesting thought experiment about alternative representations, but nothing that could compete with the efficiency, safety, and intuitiveness of direct tree manipulation.

Whether textual programming represented a misguided approach or simply an academic exercise remained unclear. What was certain was that the development community saw no compelling reason to abandon their sophisticated, visual, error-preventing tools for the apparent chaos of linear character sequences.

The revolution, it seemed, would have to wait for more compelling advantages.


Dr Casey Morgan continues their research into alternative programming paradigms at the Institute for Computational Archaeology. Their upcoming paper, ‘Linear Text as Code Representation: A Feasibility Study’, is expected to conclude that whilst technically possible, textual programming offers no significant advantages over current tree-based development methodologies.

Further Reading

Baxter, I. D., Yahin, A., Moura, L., Sant’Anna, M., & Bier, L. (1998). Clone detection using abstract syntax trees. In Proceedings of the International Conference on Software Maintenance (pp. 368-377). IEEE.

Fluri, B., Würsch, M., Pinzger, M., & Gall, H. C. (2007). Change distilling: Tree differencing for fine-grained source code change extraction. IEEE Transactions on Software Engineering, 33(11), 725-743. https://doi.org/10.1109/TSE.2007.70731

Kay, A. (1993). The early history of Smalltalk. ACM SIGPLAN Notices, 28(3), 69-95. https://doi.org/10.1145/155360.155364

Klint, P., van der Storm, T., & Vinju, J. (2009). RASCAL: A domain specific language for source code analysis and manipulation. In Proceedings of the 9th IEEE International Working Conference on Source Code Analysis and Manipulation (pp. 168-177). IEEE.

Madsen, O. L., Møller-Pedersen, B., & Nygaard, K. (1993). Object-oriented programming in the BETA programming language. Addison-Wesley.

Teitelbaum, T., & Reps, T. (1981). The Cornell program synthesizer: A syntax-directed programming environment. Communications of the ACM, 24(9), 563-573. https://doi.org/10.1145/358746.358755

From Dawn Till Dusk

Reflections on a 50+ Year Career in Software

The Dawn: Programming as Pioneering (1970s)

When I first touched a computer in the early 1970s, programming wasn’t just a job—it was exploration of uncharted territory. We worked with punch cards and paper tape, carefully checking our code before submitting it for processing. A single run might take hours or even overnight, and a misplaced character meant starting over. Storage was 5MByte disk packs, magnetic tapes, more punch cards, and VRC (visible record cards with magnetic stripes on the reverse).

The machines were massive, expensive, and rare. Those of us who could communicate with these behemoths were viewed almost as wizards, speaking arcane languages like FORTRAN, COBOL, Assembler, and early versions of BASIC. Computing time was precious, and we spent more time planning our code on paper than actually typing it.

The tools were primitive by today’s standards, but there was something magical about being among the first generation to speak directly to machines. We were creating something entirely new—teaching inanimate objects to think, in a way. Every problem solved felt like a genuine discovery, every program a small miracle.

The Dusk: The AI Inflection Point (2020s)

In recent years, I’ve witnessed a most profound shift. Machine learning and AI tools have begun to automate aspects of programming we once thought required human creativity and problem-solving. Large language models can generate functional code from natural language descriptions, debug existing code, and explain complex systems.

The pace of change has been breathtaking. Just five years ago, we laughed at the limitations of code-generation tools. Remember Ambase? Or The Last One? Today, junior programmers routinely complete in minutes tasks that would previously have taken days of specialised work.

As I look forward, I can’t help but wonder if we’re witnessing the twilight of programming as we’ve known it. The abstraction level continues to rise—from machine code to assembly to high-level languages to frameworks to AI assistants to …? Each step removed programmers further from the machine while making software creation more accessible.

The traditional career path seems to be narrowing. Entry-level programming tasks are increasingly automated, while senior roles require deeper system design and architectural thinking. And, God forbid, people skills. The middle is being hollowed out.

Yet I remain optimistic. Throughout my career, development has constantly reinvented itself. What we call “programming” today bears little resemblance to what I did in the 1970s. The fundamental skill—translating human needs into machine instructions—remains valuable, even as the mechanisms evolve.

If I could share advice with those entering the field today: focus on attending to folks’ needs, not on coding, analysis, design; seek out change rather than just coping passively with it; understand systems holistically; develop deep people knowledge; and remember that technology serves humans, not the other way around.

Whatever comes next, I’m grateful to have witnessed this extraordinary journey—from room-sized computers with kilobytes of memory to AI systems that can code alongside us and for us. It’s been a wild ride participating in one of humanity’s most transformative revolutions.

From Punch Cards to Interactive Sessions: A Programmer’s Tale – Part 1

In the world of modern software development, with its sleek IDEs, instant feedback, and powerful debugging tools, it’s easy to forget how far we’ve come. As I sit here, surrounded by the fruits of decades of technological progress, my mind often wanders back to the early days of my programming career. It was a time of punch cards, paper tape, batch processing, and mainframes that filled entire rooms – a far cry from the pocket-sized supercomputers we carry around today.

For those of you who came of age in the era of personal computers and the internet, the landscape I’m about to describe might seem almost alien. Yet, it was in this environment that many of us oldies cut our teeth as programmers, developing skills and mindsets that would serve us well throughout our careers.

Join me on a journey back to the 1970s, where we’ll explore the day-to-day realities of programming in an age when computer time was a precious commodity and patience was not just a virtue, but a professional necessity. From the meticulous world of COBOL and punch cards to my clandestine adventures with BASIC on an ICL mainframe, and then a Commodore Pet and other microcomputers, this is a tale of perseverance, innovation, and the sheer joy of making these early electronic brains bend to our will.

So, grab a cup of tea, settle in, and let’s take a stroll down memory lane – a lane paved with punch cards, printer paper, and the unmistakable hum of mainframe computers and mahoosive air conditioning units (also, Halon fire suppression systems).

The Punch Card Era: COBOL and Batch Processing

The Rhythm of Batch Submissions

My programming odyssey began in the world of punch cards and COBOL. Each program was a stack of cards, meticulously punched and carefully handled. A single dropped deck could mean hours of re-sorting.

The Long Wait: Three to Four Day Turnaround

Patience wasn’t just a virtue; it was a requirement. After submitting our COBOL programs, we’d wait three to four days for results. This glacial pace meant each edit cycle was crucial. A single misplaced comma could cost you the better part of a week.

Debugging: A Test of Memory and Foresight

With such long turnaround times, debugging was an exercise in foresight. We couldn’t rely on quick iterations. Instead, we had to anticipate potential issues and include extensive error checking and output statements in every submission.

The Art of Desk Checking

To avoid wasting precious submission cycles, we became experts at desk checking – meticulously going through our code line by line, playing computer in our heads. It was tedious but necessary given the high cost of machine time and slow turnaround.

Sneaking into QMC: Learning BASIC on the ICL Mainframe

A Taste of Interactive Computing

My introduction to BASIC came when I started sneaking into Queen Mary College to use their ICL mainframe. It was a revelation – interactive computing that responded in seconds rather than days.

The Joy of Immediate Feedback

After the long waits of batch COBOL, BASIC felt miraculous. Type a line, run it, and see the results immediately. This rapid feedback loop transformed how I thought about programming and problem-solving.

Exploring in Real-Time

BASIC on the ICL mainframe opened up new possibilities for experimentation. I could try out ideas on the fly, tweaking and adjusting my code in real-time. It was like moving from writing letters to having a conversation.

The Thrill of Unauthorized Access

There was an added excitement to these BASIC sessions – the thrill of unofficial access. Each visit to QMC felt like a covert operation, driven by an insatiable curiosity to learn and experiment with this more responsive form of computing.

Reflections on the Transition

From Methodical Planning to Rapid Iteration

The shift from batch COBOL to interactive BASIC wasn’t just a change in languages; it was a fundamental shift in approach. COBOL required meticulous planning and foresight, while BASIC allowed for a more iterative, experimental style of coding.

The Evolution of Problem-Solving

This transition changed how I approached problems. With COBOL, every problem needed to be thoroughly understood and mapped out before coding began. BASIC allowed for a far more exploratory approach, where solutions could evolve through trial and error – what my late friend P Grant Rule used to call a “random walk”.

Appreciating Both Worlds

While the immediacy of BASIC was intoxicating, the discipline learned from COBOL batch processing was invaluable. It taught me the importance of careful planning and the value of writing clean, well-documented code from the start.

Other Early Languages

The ICL mainframe also provided SNOBOL, ALGOL 60 and 68, FORTRAN and others. So of course I had to dabble in those, too.

The Lasting Impact

This early experience with both batch and interactive programming laid a foundation that would serve me well in the years to come. It fostered an appreciation for both careful, structured approaches and rapid, iterative development – skills that remain relevant across all programming paradigms.

The contrast between the regimented world of punch cards and COBOL and the more dynamic environment of BASIC on the ICL mainframe encapsulates a pivotal moment in computing history. It was a time of transition from computing as a scarce, carefully rationed resource to a more accessible, interactive tool.

For those who’ve only known the immediate feedback of modern development environments, it might be hard to imagine the patience and precision required in those early days. Yet, these experiences shaped a generation of programmers, instilling a deep understanding of both the constraints and the potential of computing systems.

As we’ve progressed through various languages and paradigms since then, from assembler to high-level languages, from procedural to object-oriented programming, these early experiences continue to inform our approach to problem-solving and system design.

Next Up

If you’ve found this post interesting, let me know and I’ll be happy to continue into PDP-11 Macro Assembler on a variety of PDP-11s under RT-11, RSTS/E and RSX, Wang Basic, VAX VMS Assembler, Pascal, Modula-2, Transputers, Forth, Smalltalk-80, and other languages of yesteryear. I’m keen to reminisce about the daily work of programmers back then, too.

Fuggedabaht Training: The Future of Learning in Tech

In the realm of tech education and learning, few statements are as provocative and thought-provoking as Oscar Wilde’s assertion:

“Education is an admirable thing, but it is well to remember from time to time that nothing that is worth knowing can be taught.”

This paradoxical wisdom serves as the perfect launching point for an exploration of learning in the tech industry. In a world where formal training has long been the go-to method for skill development, Wilde’s words challenge us to reconsider our approach fundamentally.

In the dizzying world of technology, where today’s innovation is tomorrow’s legacy system, how do we truly learn? The tech industry has long relied on training as its educational backbone, but is this approach even fit for purpose? Let’s embark on a journey to unravel this question and explore the future of learning in tech.

The Training Trap: Why It’s Not Enough

Picture this: You’ve just completed an intensive week-long course on the latest programming language. You’re buzzing with newfound knowledge, ready to conquer the coding world. Fast forward three weeks, and you’re staring at your screen, struggling to remember the basics. Sound familiar?

This scenario illustrates what Richard Feynman, the renowned physicist, meant when he said:

“I learned very early the difference between knowing the name of something and knowing something.”

Training often gives us the illusion of learning. We walk away with certificates and buzzwords, but when it comes to actually applying this knowledge, we find ourselves fumbling in the dark.

The Forgetting Curve: Our Brain’s Sneaky Saboteur

Enter Hermann Ebbinghaus and his infamous “forgetting curve”. This isn’t just some dusty psychological theory; it’s a real phenomenon that haunts every training session and workshop.

[Image: the Ebbinghaus forgetting curve]

As the curve shows, without active recall and application, we forget about 70% of what we’ve learned within a day, and up to 75% within a week. In the context of tech training, this means that expensive, time-consuming courses might be yielding diminishing returns faster than you can say “artificial intelligence”.

Real World vs. Training Room: A Tale of Two Realities

Training environments are like swimming pools with no deep end. They’re safe, controlled, and utterly unlike the ocean of real-world tech problems. This disparity leaves many students floundering when they face their first real challenge.

Moreover, in an industry where change is the only constant, static training curricula are often outdated before they’re even implemented. As Alvin Toffler presciently noted:

“The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.”

The Bootcamp Boom: A Silver Bullet or Fool’s Gold?

In recent years, coding bootcamps have exploded onto the tech education scene, promising to transform novices into job-ready developers in a matter of weeks or months. But do they truly bridge the gap between traditional training and real-world demands?

The Promise of Bootcamps

Bootcamps offer an intensive, immersive learning experience that focuses on practical skills. They aim to provide:

  1. Rapid skill acquisition
  2. Project-based learning
  3. Industry-aligned curriculum
  4. Career support and networking opportunities

For many career changers and aspiring developers, bootcamps represent a tantalizing shortcut into the tech industry.

The Reality Check

While bootcamps have undoubtedly helped many individuals launch tech careers, they’re not without their criticisms:

  1. Skill Depth: The accelerated pace often means sacrificing depth for breadth. As one bootcamp graduate put it:

    “I learned to code, but I didn’t learn to think like a developer.” [and what does it even mean to “think like a developer”, anyway?]

  2. Market Saturation: The proliferation of bootcamps has led to a flood of entry-level developers, making the job market increasingly competitive.
  3. Varying Quality: Not all bootcamps are created equal. The lack of standardisation means quality can vary wildly between programs.
  4. The Long-Term Question: While bootcamps may help you land your first job, their long-term impact on career progression is still unclear.

Bootcamps: A Part of the Solution, Not the Whole Answer

Bootcamps represent an interesting hybrid between traditional training and more innovative learning approaches. At their best, they incorporate elements of experiential learning and peer collaboration. However, they still operate within a structured, time-bound format that may not suit everyone’s approach to learning or career goals.

As tech leader David Yang notes:

“Bootcamps can kickstart your journey, but true mastery in tech requires a lifetime of learning.”

In the end, we might choose to view bootcamps as one possible tool in a larger learning toolkit, rather than a one-size-fits-all solution to tech education.

Reimagining Learning: The Tech Education Revolution

So, if traditional training isn’t the answer, what is? Let’s explore some alternatives that are already showing promise:

  1. Experiential Learning: Remember building your first website or debugging your first major error? That’s experiential learning in action. As Confucius wisely said, “I hear and I forget. I see and I remember. I do and I understand.”
  2. Continuous Learning Culture: Imagine a workplace where learning is as natural as breathing. Google’s now defunct “20% time” policy, which allowed employees to spend one day a week on side projects, was a prime example of this philosophy in action.
  3. Peer-to-Peer Knowledge Sharing: Some of the best learning happens organically, through conversations with colleagues. Platforms like Stack Overflow have harnessed this power on a global scale.
  4. Curiosity-Driven Exploration: What if we treated curiosity as a key performance indicator? Companies like 3M, which encourages employees to spend 15% of their time on self-directed projects, are leading the way.

Caution: Whilst experiential learning has its merits, it fails abjectly to counter groupthink, learning of the wrong things, and relatively ineffective shared assumptions and beliefs. Other approaches, e.g. Organisational Psychotherapy, can address the latter.

The Path Forward: Embracing the Learning Revolution

As we stand at the crossroads of traditional training and innovative learning approaches, it’s clear that a paradigm shift is not just beneficial—it’s essential. The future of learning in tech isn’t about more training; it’s about creating environments that foster continuous, experiential, and collaborative learning, whilst simultaneously growing the ability to think critically, to think in terms of wider systems (systems thinking), and to constantly surface and reflect together on shared assumptions and beliefs.

So, the next time you’re planning a training session, pause and ask yourself: Is this the best way to foster real learning? What about more engaging, effective approaches we could take?

In the words of William Butler Yeats, “Education is not the filling of a pail, but the lighting of a fire.” Isn’t it time we stopped trying to fill pails and started lighting fires in the tech industry?

What are your thoughts? How well has training served your needs, and how has your learning journey in tech evolved beyond traditional training? Please share your experiences in the comments below!

Making Tomorrow’s Big Balls of Mud Today

What is a Big Ball of Mud?

In software development, the term “Big Ball of Mud” refers to a system or codebase that has become so tangled, convoluted, and disorganised over time that it becomes increasingly difficult to maintain, modify, or understand. It’s a metaphor for a software product that started with good intentions but gradually deteriorated into an unstructured mess due to a lack of proper planning, design, and adherence to best practices.

Consequences

The consequences of a Big Ball of Mud can be severe. It hinders productivity, increases technical debt, screws with predictability and schedules, and makes it challenging to introduce new features or fix bugs. Developers often find themselves spending more time trying to understand the existing code than actually writing new code. This can lead to frustration, decreased morale, and a higher risk of introducing further issues.

The Rise of AI-Centric Coding

A paradigm shift is looming on the horizon – a transition towards AI writing code, and writing it primarily for artificial intelligence (AI) readability and maintainability. While human-readable code has long been the desirable approach, the remarkable advancements in AI technology necessitate a re-evaluation of our coding practices, and of how we use AI to write code, if we are to harness the full potential of these sophisticated tools.

As AI systems become increasingly integrated into software development workflows, the need for code that caters to AIs’ unique strengths becomes paramount. This shift will give rise to coding styles specifically tailored for AI readability and maintainability, encompassing the following characteristics:

Abstraction and Modularisation Paramount

AI systems thrive on highly modularised and abstracted code, where individual components are clearly separated and encapsulated. This coding style will emphasise smaller, self-contained units of code with well-defined interfaces, promoting better organisation and encapsulation, aligning with the strengths of AI systems.

Formalised and Explicit Syntax

In contrast to the conventions and implicit understandings often relied upon by human programmers, AI systems will benefit from a more formalised and explicit syntax. This could involve additional annotations or metadata that make the semantics of the code unambiguous and readily interpretable by AI systems.
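
As a purely speculative sketch (the “@ai-” annotation names below are invented for illustration; they are not part of any existing tool or standard), such explicit, machine-readable metadata might look something like this:

// Speculative illustration: hypothetical machine-readable annotations,
// spelling out semantics a human would normally infer from context.

/**
 * @ai-purpose    Convert a price from one currency to another.
 * @ai-param      amount {number, finite, >= 0}  Price in the source currency.
 * @ai-param      rate   {number, finite, > 0}   Units of target currency per source unit.
 * @ai-returns    {number, finite, >= 0}         Price in the target currency.
 * @ai-invariant  result === amount * rate
 * @ai-sideeffects none
 */
function convertPrice(amount, rate) {
    return amount * rate;
}

The point isn’t this particular notation; it’s that the intent, constraints and invariants are stated explicitly rather than left to be inferred from context.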

Pattern Recognition Optimisation

AI systems excel at recognising patterns, and the coding style will be optimised for this strength. Consistent naming conventions, structural similarities, and other patterns that can be easily recognised by AI systems will become more prevalent, enabling efficient pattern recognition and analysis.

Reduced Redundancy (DRY)

AI systems are better equipped to handle and maintain code with minimal redundancy, leading to a coding style that emphasises code reuse, shared libraries, and other techniques to reduce duplication. This approach will not only cater to AI systems’ strengths but also promote code maintainability and efficiency.

AI-Tailored Documentation

Traditional human-readable documentation and comments may become obsolete in an AI-centric coding paradigm. Instead, the emphasis will shift towards creating self-documenting code that can be seamlessly interpreted and maintained by AI systems. This could involve incorporating structured annotations, metadata, and other machine-readable elements directly into the codebase.

The documentation process itself could be automated, with AI algorithms capable of parsing the code structure, analysing the annotations, and generating comprehensive documentation tailored specifically for AI comprehension. This documentation would be optimised for pattern recognition, logical inference, and other capabilities that AI systems excel at, ensuring that it remains up-to-date and consistent with the evolving codebase.

AI-Generated Code for Machine Consumption

Furthermore, the advancement of AI technology raises the intriguing possibility of AI systems themselves generating code in a style optimised for machine consumption, rather than human readability. This AI-generated code could forgo traditional conventions and practices aimed at enhancing readability for human developers, instead favouring structures and patterns that are more readily interpretable and maintainable by AI systems themselves.

Such AI-generated code might be highly compact, with minimal redundancy and a heavy reliance on abstraction and modularisation. It could incorporate complex mathematical models, advanced algorithms, and unconventional coding techniques that leverage the strengths of AI systems while potentially sacrificing human comprehensibility.

As AI systems become increasingly integrated into the software development lifecycle, they could potentially maintain and evolve this AI-generated code autonomously, with minimal human intervention. This paradigm shift could lead to a scenario where the primary consumers and maintainers of code are AI systems themselves, rather than human developers.

Factors Contributing to Big Balls of Mud

While embracing AI-centric coding practices offers numerous advantages, we might choose to be mindful of the potential pitfalls that could lead to the creation of ‘big balls of mud’ – tangled, convoluted, and disorganised AI-generated codebases that become increasingly difficult to maintain and modify.

Today’s Factors

In the current software development landscape, where human readability and maintainability are still the primary focus, several factors contribute to the formation of big balls of mud:

  1. Lack of Architectural Foresight: The absence of a well-defined software architecture from the outset can quickly lead to a patchwork of disparate components, hindering maintainability and coherence.
  2. Prioritising Speed over Quality: The pursuit of rapid development and tight deadlines may result in sacrificing code quality, maintainability, and adherence to best practices, accumulating technical debt over time.
  3. Siloed Development Teams: Lack of coordination and communication between teams working on the same codebase can lead to inconsistencies, duplicated efforts, and a lack of cohesion.
  4. Lack of Documentation and Knowledge Sharing: Inadequate documentation and poor knowledge-sharing practices can make it challenging for new team members to understand and maintain the codebase, exacerbating the tangled nature over time.

Future Factors with AI-Driven Development

As we transition towards AI-driven software development, new factors may contribute to the metastasizing of big balls of mud, if not appropriately addressed:

  1. Failing to instruct the AI to generate AI-friendly code, or to consider the needs of AI vis-à-vis codebase readability and maintainability. Prompt engineers in the code-generation space, take note!
  2. Lack of AI Training and Optimisation: Without proper training and optimisation of AI models for code generation and maintenance, the resulting codebase may lack coherence, structure, and adherence to best practices.
  3. Inadequate Human Oversight and Understanding: An over-reliance on AI without sufficient human oversight and understanding can lead to opaque, difficult-to-maintain code that deviates from architectural principles and design patterns.
  4. Inconsistent AI Models and Tooling: Using multiple AI models and tools for code generation and maintenance without proper integration and consistency can lead to fragmented and incompatible code snippets, exacerbating the tangled nature of the codebase.
  5. Prioritising Speed over Quality and Maintainability: Even with AI-assisted development, the pursuit of rapid development and meeting tight deadlines at the expense of code quality, maintainability, and adherence to best practices can lead to long-term technical debt.
  6. Lack of Documentation and Knowledge Sharing: Inadequate documentation and poor knowledge-sharing practices can hinder the effective use and maintenance of AI-generated code, making it challenging to understand the context, design decisions, and rationale behind the code.

By addressing these factors proactively, software development teams and organisations can harness the power of AI while mitigating the risk of creating tomorrow’s big balls of mud, ensuring that codebases remain maintainable, scalable, and aligned with in-house best practices.

Conclusion

The future of coding lies in embracing the capabilities of AI systems and adapting our practices to harness their full potential. By prioritising AI readability and maintainability, we can unlock new avenues for efficient and optimised code generation, enhanced collaboration between human developers and AI systems, and ultimately, more robust and scalable software solutions.

While this transition challenges traditional assumptions and beliefs and invites a major paradigm shift, it is an exciting prospect that will revolutionise the software development industry. As we navigate this paradigm shift, it is essential to strike a balance between leveraging the strengths of AI systems and maintaining a level of human oversight and understanding, ensuring that our code remains accessible, maintainable, and aligned with the evolving needs of the host business.

 

Who Cares How We Code?

The Premise

As developers, we’re a smart bunch. We know our stuff and can generally be trusted to choose the best approach to getting the job done, right? After all, the goal is to get that program up and running in production as quickly as possible. What could possibly go wrong if we cut a few corners here and there? A bit of spaghetti code never hurt anyone. And technical debt is a conscious and intentional choice, yes?

The Bitter Truth

Sadly, this cavalier attitude towards development practices is a recipe for disaster further down the line. While it may seem like a shortcut to production Heaven, it’s more akin to paving the way to Maintenance Hell – and Future Costs City. Let’s explore why we might choose to actually care about how we code.

Compromising Schedule Predictability

Messy codebases compromise a team’s ability to predict how long something is going to take. The more the mess, the more unreliable the schedule.

The Future Payback Trap

We can compare writing sloppy, unmaintainable code to racking up maxed-out credit cards. It’s taking on inevitable future payback down the line, and just like financial debt, it accrues “interest” in the form of extra development costs that compound over time. That once-scrappy codebase becomes an ungovernable mess that’s exponentially harder to change, optimise, or extend. Before we know it, we’re spending more time untangling our own spaghetti nightmares than making meaningful progress.

The Collaboration Conundrum

In most cases, a codebase is a team effort across its lifetime. If you don’t maintain a minimum level of quality, good luck onboarding new team members or even having your future self make sense of the tangle a few months down the road. Sloppy code breeds knowledge silos and cripples effective collaboration.

The Debugging Debacle

Well-structured, self-documenting code that follows good architectural principles makes it infinitely easier to debug issues and safely update the software over time. In contrast, a patched-together “codic dervish” is virtually impossible to decipher or modify without potentially disastrous unintended consequences.

The Performance Pitfall

While your hacky script may seem to work for that small prototype or MVP, codebases that cut corners on fundamental coding practices and design patterns simply won’t be able to scale gracefully as usage and complexity grow over time. Code quality is paramount for managing performance under load.

The Futility of Quality Assurance

When we don’t make code quality a priority from the get-go, good luck getting meaningful code reviews or implementing a robust quality assurance approach. Code reviews become an exercise in futility, and QA turns into a fruitless game of DevOps whack-a-mole, constantly putting out fires in an inherently unstable, unpredictable product.

The Craftsmanship Principle

At the end of the day, consistently writing clean, maintainable code is one of the hallmarks of competence, as opposed to a mere hack. By treating our craft with care and prioritising technical excellence, we’re investing in the long-term success of our products, our teams, and our careers. But who cares about the long term?

The Creative Developer: Coding is Just Our Medium

How many software developers, when asked what they do for a living, reply “writing software”? Just about 100%, I’d guess. The very title of “software developer” implies we spend our days pounding out code, line after line of instructions for computers.

But is that truly an accurate picture? I would argue that the analogy of “writing” software promotes some problematic assumptions. It focuses purely on the technical aspect of coding, ignoring all the other important facets of bringing software to life. It perpetuates stereotypes of programmers as nerdy code monkeys, heads down in front of a keyboard all day. And it fails to capture the deeply creative process that software development entails at its best.

In reality, we developers don’t just “write” software – we attend to folks’ needs, crafting systems, experiences, solutions and above all, interpersonal connections. We collaborate, gather requirements, make trade-off decisions. We envision how people will interact with the products we craft. Code is simply our medium for bringing strategy and creativity to life.

Software development has as much in common with engineering, architecture or even storytelling as it does with coding. There is an artistry and imagination behind truly great tech-based products that goes far beyond syntax. The attendants of the future will be at least as fluent in humanities as mathematics or computer science.

So the next time someone asks what you do, don’t reflexively say you “write” software. Share how you attend to users’ needs, strategise solutions, and creatively work with teammates. Let’s put to rest the tired stereotype that developers are code-writing scribes! What we do entails far more multi-dimensional and meaningful attending to needs, products and people.

Girls Who Don’t Code

Girls and women are ideally placed to become real developers (by my definition*) and yet they want to CODE?

*My definition:

A real solutions developer is not so much someone who possesses technical expertise as someone who has the ability to connect with people and truly understand their needs. This requires a high level of emotional intelligence and empathy, as well as excellent communication and interpersonal skills. A real solutions developer builds relationships with clients, collaborates with team members, and creates solutions that meet the unique needs of each individual and group. By putting people first and prioritising human connections, a real solutions developer is able to deliver truly transformative solutions that make a difference in people’s lives.

See also: #NoSoftware

You’ve Got It All Backwards About Coding

Coding (as in programming) is, essentially, a form of structured note-taking. While it is true that computer programming enables machines to execute complex tasks, we may choose to recognise that it also serves as a powerful tool for humans to express their thoughts and ideas in a systematic manner. By employing a well-defined syntax and set of rules (albeit somewhat arcane), programming languages facilitate the clear and concise recording and communication of ideas, making it easier for individuals to plan, reason, and comprehend.

The act of coding allows programmers to break down complex human needs into smaller, manageable components. This structured approach not only makes it easier to understand and solve problems but also aids in the sharing of knowledge among peers. As a result, programming languages are not so much tools for instructing computers but also a means for human collaboration, fostering creativity and innovation.

Moreover, having a computer execute the code acts as a check on the utility of the programmer’s notes. This execution serves as a validation of the thought process, ensuring that the concepts and logic so encoded are sound and functional. By identifying errors or inefficiencies in the code, programmers are encouraged to refine their ideas, consequently improving the quality of their thoughts. Thus, code is not so much a set of instructions for machines, but a valuable tool for human expression, communication, and growth.
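As a small, hypothetical illustration of code-as-notes (the discount rules and names below are invented for the example), a Python doctest lets the recorded notes check themselves when executed:

    def bulk_discount(quantity: int) -> float:
        """Return the discount rate as noted down with the customer.

        The examples double as executable notes: running them confirms that
        the recorded understanding still holds.

        >>> bulk_discount(5)
        0.0
        >>> bulk_discount(50)
        0.1
        >>> bulk_discount(500)
        0.2
        """
        if quantity >= 100:
            return 0.2
        if quantity >= 10:
            return 0.1
        return 0.0

    if __name__ == "__main__":
        import doctest
        doctest.testmod()  # executing the notes validates the thinking behind them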

I’ve not called myself a software developer for at least thirty years. That’s not to say I’ve stopped coding. Far from it. But the end in mind has changed. From “developing software” to “attending to folks’ needs”. Seems to me that latter frame offers far more potential for satisfaction – both for me and for those I serve – than coding ever did. See also: #NoSoftware and the Antimatter Principle.

Coding

After all these years, I still love coding (as in writing software).

It’s just that it’s tainted by the certainty that there’s so many other more effective ways of adding value and meeting folks’ needs.

Spending time coding feels so… self-indulgent.

Scope of Ignorance

Most of the developers and development teams I used to work with when I was a software development consultant had a relatively narrow view of the skills and knowledge necessary to be “competent developers”. Here’s an illustrative graphic:

[Image: the relatively narrow scope of skills and knowledge typical of these teams]

Generally, to make progress on improving things, and to earn the moniker of “software engineers”, a wider scope of skills and knowledge was necessary. Not only did these development teams lack this wider scope, they were both ignorant of the many additional areas of knowledge and resistant to learning about them. The common response was along the lines of: “What are all these strange topics? And NO WAY do we need to know about them!”

[Image: the wider scope of skills and knowledge areas in question]

Aside: Now I’m an Organisational Psychotherapist, their ignorance is no issue – and no stress – for me. They can learn or not learn in their own time. Progress is on them (and their higher-ups).

– Bob

Excolat in Pace

There’s a common idea which has been doing the rounds, ever since development (coding) first became a thing. We might sum it up as:

“Developers just want to develop in peace.”

As someone who spent more than a decade in the development trenches (and still does development today, occasionally), I can instantly relate to this issue. Indeed, focus is the crux of the matter. Any kind of interruption or distraction whilst reading or writing code can suddenly evaporate the evolving mental model of the inner workings of that code, a model built up painstakingly, with deep concentration, over twenty minutes or more. So three or four interruptions or distractions, however trivial, can wipe out an hour of otherwise productive effort. And that’s before we get to the question of frustration, the impact of frustration-induced stress on the individual, and the stress-related impairment of cognitive function more generally.

On the other hand, having developers separated from the folks that matter introduces other productivity-sapping dysfunctions, such as misunderstanding folks’ needs, building the wrong things, and reducing the joy of getting to see how the developers’ efforts make a difference to others.

Conundrum

So, how to ensure developers have the peace they need to focus intently on their coding efforts, whilst also ensuring they have sufficient interactions with the folks that matter – sufficient to ensure that needs are understood and the right solutions get built?

In the past, specialist intermediaries a.k.a. Business Analysts and Project Managers have served to address this conundrum. And solutions (including the role of specialists, and the workplace environment) have been imposed on developers without much consultation. Rarely have developers, or the folks that matter, been involved in finding a way forward together.

Personally, and in the context of self-managing teams in particular, I’m all for the teams and their customers (both internal and external) getting together and thrashing out a way forward. And then having regular check-ins to improve those ways of working together.

As an example, BDD (Behaviour-Driven Development) is a current set of practices that offers one such way forward: customers and suppliers sitting down together regularly (as often as several times a day, for maybe twenty minutes at a time) and working through a User Story, Scenario, or Use Case.
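By way of a rough, hypothetical sketch of what such a session might leave behind (plain Python in a pytest style rather than a dedicated BDD tool; the refund rule and all the names here are invented for illustration):

    # A scenario agreed in a short customer/developer session, recorded as a
    # Given/When/Then style test. The thirty-day refund rule and these names
    # are invented purely for illustration.
    from dataclasses import dataclass

    @dataclass
    class Purchase:
        days_ago: int

    def request_refund(purchase: Purchase) -> str:
        # Minimal implementation of the agreed rule: refunds within 30 days are approved.
        return "approved" if purchase.days_ago <= 30 else "declined"

    def test_refund_within_thirty_days_is_approved():
        # Given a purchase made 10 days ago
        purchase = Purchase(days_ago=10)
        # When the customer requests a refund
        outcome = request_refund(purchase)
        # Then the refund is approved automatically
        assert outcome == "approved"

    def test_refund_after_thirty_days_is_declined():
        # Given a purchase made 45 days ago
        purchase = Purchase(days_ago=45)
        # When the customer requests a refund
        outcome = request_refund(purchase)
        # Then the refund is declined
        assert outcome == "declined"

Running such tests (for example with pytest) helps keep the agreed scenarios and the code honest with each other between conversations.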

And let’s not forget that the other folks involved, aside from the developers, also have their day jobs – jobs which require them to focus and spend time on things other than working with the developers.

How do you, your teams, and their folks that matter, propose to tackle this conundrum? How are you handling it at the moment?

– Bob

Postscript

Another option that comes to mind is mob programming a.k.a. mobbing, particularly with the involvement of folks having in-depth domain knowledge (i.e. customers and users).

The Future of Coding Environments

How would Scotty or Geordie go about writing code for the Starship Enterprise? Would they write code at all? Would they just interact with the Computer via speech or holodeck, or would a keyboard of some sort still have a place? 

In any case, my interests have always stretched beyond matters of organisational effectiveness, beyond matters of human and humane relationships, and beyond matters of how the work of software and product development might better work.

One of my other abiding interests has been the nature of programming. Indeed I spent more than two years, decades ago, on conceiving, designing and implementing a proof of concept for the kind of development environment I’d like to use myself, when writing code. At the time, the work was codenamed “Simplicity”.

My core feature set / wish list includes: 

  • Editing source code directly in the AST, rather than editing source code in text files (see the sketch after this list)
  • Direct and incremental compilation of source code as it’s being entered
  • Multiple coders editing in the same AST concurrently
  • Live editing of the AST “in production” (with appropriate safeguards built-in)
  • One homogeneous AST for each entire (live production) system
  • Source code control / version control features built right in (and automated away from distracting the coders)
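A very rough sketch of the first two ideas above, using Python’s standard ast and compile machinery (this is not the original “Simplicity” work; the price_with_tax function is invented, and the snippet merely gestures at editing a program via its AST and re-compiling it on the fly):

    import ast

    # Hold a tiny "program" as an AST rather than as text in a file.
    tree = ast.parse("def price_with_tax(price):\n    return price * 1.20\n")

    # "Edit" the program directly in the AST: change the tax multiplier.
    for node in ast.walk(tree):
        if isinstance(node, ast.Constant) and node.value == 1.20:
            node.value = 1.25

    ast.fix_missing_locations(tree)

    # (Re)compile and execute the edited AST; no text files involved.
    namespace = {}
    exec(compile(tree, filename="<ast>", mode="exec"), namespace)
    print(namespace["price_with_tax"](100))  # -> 125.0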

OK, so this may not be the kind of development environment Scotty or Geordie would recognise. But it’s a world away from all the crap we have to put up with today.

Blockers

So why don’t we see more movement towards the emergence of some of these features in our development environments today? In a word: conservatism. Developers en masse seem disinclined – or unable – to look anew at their tools, and dream.

“The future is a foreign country; they do things differently there.”

– Bob

Further Reading

The Mjølner Environment ~ Görel Hedin, Boris Magnusson