The A-Z of Code Craft – O is for Outside-In


When I teach modular software design, I proffer four qualities of well-designed modules.

Well-designed modules…

  1. Have one reason to change
  2. Hide their internal workings
  3. Have easily swappable dependencies
  4. Have interfaces designed from their user’s point of view

That fourth one opens a smorgasbord of successful software design techniques – and not just module design – dating back to the beginnings of software engineering.

When considering the design of software modules – at any level of granularity from methods and functions all the way up to entire systems – we’ve learned that an effective approach is to ask not “What does this module need to do?”, but “What does that user need to tell it to do?”

Use cases are one example of approaching design from the outside; from needs and goals of users, rather than the features and behaviours of systems. Test-Driven Development is another example where design begins with a user outcome (and that user could be another module, of course).

It’s not magic. When we start design by considering how modules and systems will be used (and we could look at modules as mini-systems in their own right – it’s turtles all the way down), we are usually led to designs that are useful.

In both use case-driven design, and TDD, the internal design of modules is driven directly by that external point of view. We begin by defining the desired user outcome (the “what”). We don’t begin by considering the internal design details (the “how”). The “how” is a consequence of the “what”, and design flows in that direction – from the outside in. (For example, working backwards from failing tests to implementation design.)

The reverse approach, where we design the pieces of the jigsaw and then try to put the pieces together at the end (“inside-out” design) has proven to have considerable drawbacks:

  1. The wrong implementation code
  2. Jigsaw pieces that don’t fit
  3. Test code that bakes in the internal design

When we define the shape of the jigsaw pieces first, from the user’s point of view, their implementations are guaranteed to fit.

This was the original intention of Mock Objects. They can serve as placeholders for internal components that don’t exist yet. So when we’re writing a test for checking out a shopping basket, and we know that something will need to send a shipping note to the warehouse but we don’t want to think about how that works yet, we can “mock” a warehouse interface as a dependency of the module we’re testing. That mock, and our expectations about how it should be used by the checkout module, define a contract from that external point of view.

When we get around to implementing the warehouse module, its interface is already explicitly defined from the user’s point of view.
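
As a minimal sketch of this idea (the names `Checkout`, `send_shipping_note` and the basket shape are invented for illustration, not from the original post), a mock can stand in for the warehouse before any warehouse module exists:

```python
from unittest.mock import Mock

# A hypothetical checkout module. The warehouse interface is discovered
# from the checkout's point of view, not designed up front.
class Checkout:
    def __init__(self, warehouse):
        self.warehouse = warehouse

    def check_out(self, basket):
        # basket is a list of (item, price) pairs
        total = sum(price for _, price in basket)
        # The checkout decides what it needs to tell the warehouse;
        # that call shape *is* the warehouse's interface contract.
        self.warehouse.send_shipping_note([item for item, _ in basket])
        return total

# The test mocks a warehouse that doesn't exist yet. Our expectation of
# how it's used defines its interface from the user's point of view.
warehouse = Mock()
total = Checkout(warehouse).check_out([("book", 10.0), ("mug", 5.0)])
assert total == 15.0
warehouse.send_shipping_note.assert_called_once_with(["book", "mug"])
```

When a real warehouse module is eventually written, it simply has to honour the contract that this expectation already pinned down.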

Outside-In design could be more accurately described as “usage-driven design”. It is working backwards from the user’s needs.


If you’re serious about building your team’s capability to rapidly, reliably and sustainably evolve software to meet rapidly changing business needs, visit codemanship.co.uk for details of high-quality hands-on training and mentoring for software developers.

The A-Z of Code Craft – K is for K.I.S.S.


If the value created by code was a superhero, complexity would be its Kryptonite. The more complex our code is, the longer it takes to deliver, the harder it is to test, the more likely it is to have bugs, the harder it is to understand, and the more it costs to change.

For all these reasons, we should strive to keep our code as simple as possible (and no simpler). The design mantra we recite is “Keep It Simple, Stupid!” (KISS).

We might think that simpler code will be easier to write, but it turns out that, quite often, simpler is harder.

It takes more thought. It takes more discipline. And it requires us both to let go of preconceived ideas about design, and to let go of our egos. A lot of complexity in code is accidental – we just didn’t think of a simpler way. But a lot is also deliberately created to impress, in the mistaken belief that more sophisticated code means a more sophisticated coder.

And, let’s be honest now, a lot of code just isn’t needed to provide the user outcomes we want to achieve.

In the equation of “value created – cost”, every additional line of code, every additional branch, every additional abstraction, every additional dependency erodes the profit margin of the work we do.

The aim of the game of software development is to create maximum value at minimum cost, and we therefore need to be both always seeking more value, and always seeking the simplest route to unlocking it. What’s the least we can do that will work?

So, the best code crafters continuously monitor the complexity of the software, continuously seek feedback on code quality, continuously refactor to remove accidental complexity, continuously question whether complexity is really needed, and – most important of all – continuously keep one eye on the prize.


If you’re serious about building your team’s capability to rapidly, reliably and sustainably evolve software to meet rapidly changing business needs, my Code Craft and Test-Driven Development live remote training workshops are HALF PRICE until March 31st 2025.

The A-Z of Code Craft – I is for Iterative


A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work.

John Gall, Systemantics: How Systems Work And Especially How They Fail (1975)

Software developers have known since pretty much the start that getting a solution of any appreciable complexity right in a single pass is nigh-on impossible.

That should come as no surprise. The chances of one line of code being spot-on might be reasonably good. But 100 lines? 100,000 lines? 1 million lines? The odds are so stacked against us that they’re effectively zero.

We like to think of software as machines, but the complexity of modern software systems is more comparable to biology. We’ve never built machines with that many moving parts.

Nature has a tried and tested way of solving problems of this level of complexity, though: EVOLUTION.

As Gall notes, we don’t start with the whole all-singing, all-dancing version. We start simple – as simple as possible – and then we iterate, adding more to the design and feeding back lessons learned from testing to see if the software’s fit for purpose. Importantly, every iteration of the software works (and if it doesn’t, git reset --hard). Complexity emerges one small, simple step at a time.

When we approach it like this, the emphasis in software development shifts profoundly, from delivering code or delivering features, to learning how to achieve users’ goals and solve problems. This is why frequent small releases of working software, designed in close collaboration with our users, are so very important.

It’s also why the most effective teams are always keeping one eye on the prize, continuously – there’s that word again! – revisiting the goals and asking “Did we solve the problem?” If the answer’s “No”, and it usually is, we go round again, feeding back what worked and what needs to change into the next small release. The faster we iterate, the sooner we solve the problem.

Rapid iteration of solutions is no small ask, though. If we want to put working software in the hands of users, say, once a day, then that software needs to be tested and integrated at least once a day (and probably many times a day). Once we pull on that thread, a whole set of disciplines emerges that some of us call “code craft”.

And so here we are!



The A-Z of Code Craft – D is for D.R.Y.


“Don’t Repeat Yourself” is a widely misunderstood, often misapplied, and consequently much-maligned principle in the design of software.

While it’s true that repetition in code can hurt us, by multiplying the cost of change, it’s by no means the worst thing we can do. Indeed, sometimes repetition can help us if it makes code easier to understand. (If you refactor code to remove duplication, stop to ask if that’s made it harder to follow. If it has, put the duplication back!)

But that’s not what D.R.Y. is really about. Think of it this way: what’s the opposite of duplication? REUSE.

When we see multiple repetitions of a similar thing – be it copied-and-pasted code, or a repeated concept that appears in multiple places (I remember one application that had 3 Customer tables in the database, each created by different people for different features) – that’s a hint about what our design needs to be.

When we refactor to consolidate, we discover the need for reusable abstractions like parameterised functions or shared classes. Duplication points us towards potential modularity.
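
That consolidation step might look like this in practice (the discount-pricing example below is invented for illustration, not from the post):

```python
# Before: the same discount calculation copied and pasted three times
# (an invented example of duplicated code).
def book_price(price):
    return price - price * 0.10

def dvd_price(price):
    return price - price * 0.20

def game_price(price):
    return price - price * 0.15

# After: refactoring the duplication reveals the reusable abstraction.
# The discount rate is the only thing that varies, so it becomes a
# parameter of a single shared function.
def discounted_price(price, rate):
    return price - price * rate

# The consolidated function reproduces all three originals.
assert discounted_price(100, 0.10) == book_price(100)
assert discounted_price(100, 0.20) == dvd_price(100)
assert discounted_price(100, 0.15) == game_price(100)
```

The abstraction wasn’t speculated in advance; the three copies were the evidence that it was needed.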

This is an evidence-based approach to design. We don’t speculate that a function might be reused, we see where it will be reused; we see the need for it in the current code.

Duplication in code can act as bread crumbs leading us to a better design and to genuinely useful – because they’re being used – reusable components. Removing duplication is where some of our most popular libraries and frameworks came from.

As for taking it too far, it’s certainly true that jumping in too quickly can produce over-abstracted code, and a much higher risk of choosing the wrong abstractions. The more examples we see, the more likely an abstraction is both to be the right one, and to actually pay for itself in the future.

But let the duplication build up, and the refactoring’s going to take longer. In the zero-sum game of software development, things that take longer are less likely to happen, so we need to strike a balance.

The “Rule of Three” is a rough and ready guide for how many examples we might want to see before we refactor. Sometimes more, sometimes fewer, but on average, around three.

Scale is also a factor here. Reuse creates dependencies. If those cross team boundaries, it really needs to be worth it.

Don’t forget, either, that repetition applies not just to our code, but to our process for creating it. Automating repeated tests (regression tests) is a good example of how refactoring duplication of effort in our process can streamline delivery.

Be mindful, though, that just as over-abstraction is a risk in refactoring duplicated code, over-zealous automation is a risk in refactoring duplicated effort. I’ve worked with teams who have so many scripts and custom tools that it takes weeks or even months for new joiners to get up to speed, and some of those tools saved them less time and money than they took to create and maintain.



The Rule Of Three Kata


A widely-known – and even more widely misunderstood – principle in software design is Don’t Repeat Yourself (“D.R.Y.”).

Duplication in code can be a problem, because it can multiply the cost of making changes to the repeated parts. But it’s by no means the worst thing we can do in code. I’ll take code that’s easy to understand over code that has zero duplication any day. (And I’ll take code that works over code that’s easy to understand, too.)

So, although duplication is listed in many sources as a code smell, its importance is perhaps sometimes overstated in the maintainability stakes. But D.R.Y. serves a purpose in the actual design process itself.

Think about it: what’s the opposite of duplication? Reuse. When we see repeated examples of code, they can act as a signpost towards some kind of modular, reusable replacement. Repeated blocks of code could become a parameterised function or method. Copied and pasted groups of functions or methods could become a shared module or class.

By paying attention to duplication and refactoring to consolidate it, modular abstractions emerge in our code; shared functions/methods, shared modules/classes, polymorphism, and so on.
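
For the second kind of consolidation mentioned above – copied groups of functions becoming a shared class – a sketch might look like this (the rendering example is invented, not from the post):

```python
# Before: near-identical pairs of functions, copied and pasted per format.
def render_csv_header(cols):
    return ",".join(cols)

def render_csv_row(row):
    return ",".join(map(str, row))

def render_tsv_header(cols):
    return "\t".join(cols)

def render_tsv_row(row):
    return "\t".join(map(str, row))

# After: the copied group becomes a shared class. The separator -- the
# only thing that varied between the copies -- becomes a constructor
# parameter.
class Renderer:
    def __init__(self, sep):
        self.sep = sep

    def header(self, cols):
        return self.sep.join(cols)

    def row(self, row):
        return self.sep.join(map(str, row))

csv = Renderer(",")
tsv = Renderer("\t")
assert csv.header(["a", "b"]) == render_csv_header(["a", "b"])
assert tsv.row([1, 2]) == render_tsv_row([1, 2])
```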

It’s argued by some, like Kent Beck, that pulling on that string of duplicated code to allow a modular design to emerge is a more evidence-based, “scientific” approach to design. We introduce abstractions, not because we think we might need them (another code smell called “Speculative Generality”), but because we see multiple examples where we do need them in the working code as it presently is, and not as we imagine it might be or should be.

The skill in following duplication to a modular design is in seeing the patterns. And, here, it serves us to see more examples so we’re more likely to choose the right generalisation, encapsulated in the right abstraction.

But leave it too long – let the repetition go on and on – and we face another problem, which is that any refactoring we do to consolidate it will take longer and longer. In the zero-sum game of software development, where we have limited time and resources, things that take longer (and/or cost more) are less likely to happen.

So we want to wait until we see enough examples, but not wait too long that we might not ever get around to refactoring it. This balance is captured in The Rule Of Three. On average (i.e., not always), we wait to see three examples of repetition before we refactor. Could be more, could be fewer, but on average, three.

The other thing about The Rule of Three is that, when we see something repeated once, the odds of it being repeated again (and again) are quite small. When we see code (or a concept – a much longer blog post!) repeated three or more times, chances are higher that it will be repeated again. In the coding kata this exercise is based on, you should see things really speeding up after you’ve refactored out the duplication in the solution code.

And, of course, code isn’t the only thing we repeat in software development. The process itself can be full of examples of duplicated effort. For example, I might manually deploy my web application every day. The stuff I’m doing at the command line – stopping servers, deleting old folders, copying new files across, running database updates, restarting servers, etc – I could do with a batch script so it becomes an automated single-click process. If the time and effort involved in automating deployments is significantly outweighed by the time saved doing deployments every day, that’s a profitable venture.
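
A sketch of what automating those manual steps might look like, assuming a Python script rather than a batch file – every path, service name and helper script here is a hypothetical placeholder:

```python
import shutil
import subprocess
from pathlib import Path

# Hypothetical locations -- substitute your own build output and target.
BUILD_DIR = Path("build")
RELEASE_DIR = Path("/var/www/myapp")

def deploy():
    """One-command version of the manual deployment routine."""
    subprocess.run(["systemctl", "stop", "myapp"], check=True)    # stop servers
    if RELEASE_DIR.exists():
        shutil.rmtree(RELEASE_DIR)                                # delete old folders
    shutil.copytree(BUILD_DIR, RELEASE_DIR)                       # copy new files across
    subprocess.run(["./migrate_db.sh"], check=True)               # run database updates
    subprocess.run(["systemctl", "start", "myapp"], check=True)   # restart servers
```

Once captured like this, the daily deployment shrinks from a sequence of error-prone manual steps to a single command.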

In the same way that duplicated code can signpost a better modular design, duplicated effort can point us more scientifically to a better process. And in this sense, it’s useful to be aware of where that duplicated effort’s occurring. Time and Motion studies kind of thing.

Anyway, here’s a coding kata that exercises your Rule Of Three senses. It’s based on a well-known kata that’s good for practicing spotting and removing duplication in solution code, but we’re going to expand on that.

The problem you’re going to write code to solve is the Roman Numerals kata. Ordinarily, developers tackle this exercise by writing their tests first (TDD). But in our version, we’re going to go test-after. Like in the bad old days. But still working in micro-feedback loops. So, write a little bit of code – change or add one thing – then test it.

For example, write a function that converts 1 into “I”, then test that. Then change the function to turn 2 into “II” and then perform both tests. Then change the function to turn 3 into “III”, and then test all three cases. And so on. So, baby code-test steps. And, of course, if you see a pattern of repetition in your solution code, consider refactoring once you’ve seen three complete examples of that pattern. (Pro tip: don’t jump in too soon, the pattern’s larger than you think!)
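
The first few baby steps might look something like this sketch (deliberately not the full kata solution – just the point the text above reaches after three examples):

```python
# A sketch of the first three code-test baby steps of the kata.
def to_roman(n):
    # After the third example (1, 2, 3) a pattern emerges: repeat "I"
    # n times. (As the pro tip warns: the real pattern is larger than
    # you think, so resist refactoring further until you see it!)
    return "I" * n

# The manual check performed after each tiny change:
print(to_roman(1))  # I
print(to_roman(2))  # II
print(to_roman(3))  # III
```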

The Rule Of Three kata has… well… three rules:

  • When you’ve performed the same test – e.g., 1 = “I” – at least three times, automate that test in a main() function or method so you can run it with a single command in a terminal, and easily inspect the result in your console window (test name, pass/fail, expected result, actual result). DO NOT USE A TESTING FRAMEWORK. The goal here is to discover a framework by consolidating duplication.
  • When you’ve written at least three automated tests, refactor repeated test code into a shared set of abstractions (e.g., shared functions) to remove that duplication, and then carry on
  • If you do this exercise alongside other developers or pairs (highly recommended), when three of you have your own set of shared testing abstractions, compare and consolidate them into a single shared testing library that you all use. Then carry on. You may find, as the number of automated tests grows further, that more evolution of the testing framework will help you to Don’t Repeat Yourself. So we may be looking at some kind of trunk-based concurrent development here 🙂
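
As a hedged illustration of rule one and rule two combined, here is the kind of shared testing abstraction that might emerge after consolidating three hand-rolled tests – deliberately not a framework, just a `check()` function (the `to_roman` stub is an invented, incomplete solution):

```python
# An early, incomplete solution stub, invented for illustration.
def to_roman(n):
    return "I" * n

# The shared abstraction that emerged from three copied-and-pasted
# hand-rolled tests: name, pass/fail, expected and actual results.
def check(name, actual, expected):
    passed = actual == expected
    print(f"{'PASS' if passed else 'FAIL'}: {name} "
          f"(expected {expected!r}, got {actual!r})")
    return passed

def main():
    results = [
        check("1 = I", to_roman(1), "I"),
        check("2 = II", to_roman(2), "II"),
        check("3 = III", to_roman(3), "III"),
    ]
    print(f"{sum(results)}/{len(results)} tests passed")

main()
```

Running it gives a single-command, inspectable test run in the console – the seed of a discovered, rather than imported, testing framework.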

Rookie Mistake: Minimising “Interruptions”


We all know that feeling when we’re deep in concentration, lost in our mind palaces, and some absolute sod dares to talk to us and completely breaks our flow. The natural response is to try to minimise those kinds of interruptions. But that would be solving the WRONG PROBLEM.

The idea of deep work, or “flow”, is pervasive in software development. The received wisdom is that programming requires a Zen-like state of concentration where the problem is completely contained in our heads and there is only the problem. We are not thinking about anything else. And this state takes time to achieve, a bit like how a computer program takes time to load into main memory.


Anything that stops the program – breaks our state of concentration – forces us to load it all over again, which loses us time – typical estimates put it at 20-30 minutes of time “wasted” with each interruption.

It’s understandable therefore that developers – and the people who pay us – seek to minimise those interruptions to maximise deep work or “flow”. But this is to fundamentally misunderstand what software development is. To the layperson or the inexperienced, software development is “coding”, and we maximise the value created by software developers by maximising “coding”.

But when we actually observe teams, we may notice that the effect of maximising “coding” is typically the opposite. One line of code can literally be worth millions of times more than another. We maximise the value of what we create not by maximising code, but by maximising the value of code.

And how do we maximise the value of the code we write? By hiding in our cubicles, sticking on our headphones and putting a big “Do Not Disturb” sign up? Or do we maximise the value of our code by TALKING TO PEOPLE? Talking to customers. Talking to users. Talking to the ops team. Talking to each other.

When we set out to minimise “interruptions”, we risk minimising the COMMUNICATION that helps steer us down the right path. We’re minimising value.

Then there’s the other purpose of “interruptions”, which is to keep us all in sync – to keep us on the same page. In software development, having everyone on the same page is especially important, because when programmers working on the same code base aren’t in sync, that can break a bunch of stuff.

As common as our hatred for interruptions is, that can be greatly outweighed by our loathing of that person on the team who merges 1,001 changes that weren’t discussed with anybody else. Far from making the team more productive, that can often lead to major rework down the road, costing us more time than the so-called “interruptions” that could have kept us in sync.

When it comes to maximising value, communication is key. When it comes to avoiding train wrecks, communication is key.

The challenge for many teams is how to have their cake and eat it. How do we achieve a state of flow, and communicate continuously? It’s a paradox, surely?

Well, no. Think back to my analogy of loading a computer program into main memory. If the program takes up 10 GB of memory, it could take quite some time to load, so stopping the program every few minutes would indeed be a major headache. But if that program only took up 10 MB of memory, or 10 KB, then reloading it on today’s machines wouldn’t take much time at all. We’d barely notice.


When it comes to this state of flow, a big factor is COGNITIVE LOAD – how much do we need to fit in our heads at one time, how many plates do we need to keep spinning?

Developers who work in very short feedback cycles – code a bit, test it, code a bit more, test it – often find that interruptions aren’t a problem. They make one or two small changes at a time, which require them to hold a lot less in their heads.

This is especially true when they combine those baby steps with some kind of high-level road map – a list of tests, or a high-level design, or a UI storyboard, for example. This enables them to keep their place in a bigger picture without having to hold a detailed map in their heads. They’re not doing the high wire act that many developers are doing when they make a whole bunch of changes before seeing the code work again.

By dramatically reducing cognitive load as they work, they’re able to communicate much more frequently, and stay much more closely in sync with the rest of the team.

This is one of the biggest payoffs for technical disciplines that maximise opportunities for communication, by working in very short feedback cycles that bring us back to tested, working software many times an hour.

Easier said than done, of course. This isn’t a way of working you can just switch on. It’s a whole skillset that needs to be learned, a set of habits that need to be instilled, and a dev culture that needs to be cultivated. That costs significant time and money, which is why most organisations choose to stick with the proverbial “Do Not Disturb” sign.