The A-Z of Code Craft – O is for Outside-In


When I teach modular software design, I proffer four qualities of well-designed modules.

Well-designed modules…

  1. Have one reason to change
  2. Hide their internal workings
  3. Have easily swappable dependencies
  4. Have interfaces designed from their user’s point of view

That fourth one opens a smorgasbord of successful software design techniques – and not just module design – dating back to the beginnings of software engineering.

When considering the design of software modules – at any level of granularity, from methods and functions all the way up to entire systems – we’ve learned that an effective approach is to ask not “What does this module need to do?”, but “What does that user need to tell it to do?”

Use cases are one example of approaching design from the outside: from the needs and goals of users, rather than the features and behaviours of systems. Test-Driven Development is another example where design begins with a user outcome (and that user could be another module, of course).

It’s not magic. When we start design by considering how modules and systems will be used (and we could look at modules as mini-systems in their own right – it’s turtles all the way down), we are usually led to designs that are useful.

In both use case-driven design, and TDD, the internal design of modules is driven directly by that external point of view. We begin by defining the desired user outcome (the “what”). We don’t begin by considering the internal design details (the “how”). The “how” is a consequence of the “what”, and design flows in that direction – from the outside in. (For example, working backwards from failing tests to implementation design.)
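A minimal sketch of that flow in Python (the function and its test data are hypothetical, purely for illustration): the “what” is stated first as a test, and the interface of the function is shaped by that usage rather than by implementation concerns.

```python
# Outside-in: the desired outcome is stated first, as a test, and the
# interface of total_price is a consequence of that usage.
# (All names here are hypothetical.)

def total_price(items, discount=0.0):
    """Sum the item prices, applying an optional fractional discount."""
    subtotal = sum(price for _, price in items)
    return round(subtotal * (1 - discount), 2)

# The "what" came first; the implementation above flowed from it:
basket = [("book", 10.0), ("pen", 2.5)]
assert total_price(basket) == 12.5
assert total_price(basket, discount=0.2) == 10.0
```

The test dictated that the function takes a basket and an optional discount; nothing about the internal arithmetic leaked into the interface.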

The reverse approach, where we design the pieces of the jigsaw and then try to put the pieces together at the end (“inside-out” design) has proven to have considerable drawbacks:

  1. The wrong implementation code
  2. Jigsaw pieces that don’t fit
  3. Test code that bakes in the internal design

When we define the shape of the jigsaw pieces first, from the user’s point of view, their implementations are guaranteed to fit.

This was the original intention of Mock Objects. They can serve as placeholders for internal components that don’t exist yet. Say we’re writing a test for checking out a shopping basket, and we know that something will need to send a shipping note to the warehouse, but we don’t want to think about how that works yet. We can “mock” a warehouse interface as a dependency of the module we’re testing. That mock, and our expectations about how it should be used by the checkout module, define a contract from that external point of view.
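Here’s one way that checkout example might look in Python, using the standard library’s mock support (the checkout and warehouse names are hypothetical):

```python
from unittest.mock import Mock

# Hypothetical checkout module. We don't know yet how the warehouse
# works internally, so its interface is defined here from the
# checkout's point of view.
def check_out(basket, warehouse):
    order_total = sum(basket.values())
    # The contract: checkout tells the warehouse what to ship.
    warehouse.send_shipping_note(list(basket.keys()))
    return order_total

# The mock stands in for the not-yet-written warehouse module.
warehouse = Mock()
total = check_out({"book": 10.0, "pen": 2.5}, warehouse)

assert total == 12.5
warehouse.send_shipping_note.assert_called_once_with(["book", "pen"])
```

When the real warehouse module gets written, its interface – a `send_shipping_note` method taking a list of items – has already been defined by its user.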

When we get around to implementing the warehouse module, its interface is already explicitly defined from the user’s point of view.

Outside-In design could be more accurately described as “usage-driven design”. It is working backwards from the user’s needs.


If you’re serious about building your team’s capability to rapidly, reliably and sustainably evolve software to meet rapidly changing business needs, visit codemanship.co.uk for details of high-quality hands-on training and mentoring for software developers.

The Obligatory Post About A Profession of Software Development

When I moan about the immature state of our now 80-year-old profession, some folks will nod along, some will argue that *all* professions have incompetent practitioners (medicine is often cited as an example), and some will say that there’s no widely-agreed-upon body of knowledge on which to build a mature profession.

I’m not sold on the notion that, say, the medical profession is just as bad. For sure, there are bad doctors. But it’s a question of degrees. Are 90% of them incompetent? I’d be surprised to meet a doctor who’d never heard of sterilising instruments.

But I meet the equivalent software developers all the time, who have somehow missed out on some pretty fundamental stuff. I consider continuous testing to be “foundational”, for example: the equivalent of sterilising our instruments. We really shouldn’t be operating on production code without it. And yet it remains stubbornly a minority pursuit, despite all the evidence that it tends to produce better outcomes for our patients 🙂

I’m also not sold on the argument that there’s no consensus on what works and what doesn’t in software development. When it comes to the foundational stuff, the jury is very much *not* out. While you can always find people who will disagree that, e.g., iterative and incremental delivery or user-centred design are good things generally, the fact is that they’ve enjoyed a majority consensus for decades.

What we would recognise as modern software development has been established since the late 1980s. Not much has been added to that body of knowledge since (though there’s been plenty of “churnovation” – variations on those themes – and, frankly, our computers just got a *lot* faster, enabling us to turn the dials up way past 11).

It’s some of these core foundations that teams learn on my training courses (check out the Codemanship YouTube channel for oodles of free tutorials, BTW).

So I believe that we are somewhat exceptional in being a profession of perpetual beginners, for a variety of reasons, and I also believe that there is a consensus, backed by a significant body of data (e.g., DORA), on what – in general terms, at least – tends to produce the best outcomes for our customers.

And I can’t help feeling that our profession could organise itself better to ensure that fewer developers can work for years and years without being exposed to these fundamentals.

The A-Z of Code Craft – N is for Non-Blocking


By bike, it takes about 15 minutes to get from my house to Wimbledon Village in South West London. In a sports car that’s 10 times as fast as a bicycle – let’s call it a “10x” mode of transport – it still takes about 15 minutes to get from my house to Wimbledon Village.

When we travel on London’s roads, the journey time’s mostly determined not by the performance of our vehicle, but by how much time we spend waiting. Waiting at traffic lights. Waiting at junctions. Waiting at pedestrian crossings. Waiting to join roundabouts. It’s mostly waiting.

During rush hour, the average journey speed in London is just 9 miles/hour, whether you’re in a Porsche 911 or on a bicycle. This is not a limit of your vehicle, this is a limit of the system your vehicle has to work within.

We see a similar effect with software developers. Take any “10x” developer and put them in a 1x system, and you’ll get 1x performance out of them every time. (Yes, even if they use an “A.I.” coding assistant!) Fitting a jet engine to your car isn’t going to get you to Wimbledon Village any sooner.

If you really want to get more value sooner out of a dev team, don’t focus on the performance of the developers, focus on reducing the time they spend waiting – the time they spend blocked from creating value by the system they’re working within.

Sadly, blocking behaviours are rife in our industry. Pull Request code reviews are a good example, where a developer’s changes sit on the shelf waiting to be approved before they can make it into the end product.

There are many other examples of blocking behaviour, such as waiting for customer input, waiting for the UX designer to provide wireframes, waiting for a QA team to test the software, and so on. The average team spends most of their time not moving forward, but sitting at proverbial traffic lights.

In concurrent programming, we have a concept of “non-blocking” processes that can continue without waiting for another process to finish. Maximising the non-blocking parts can hugely improve the performance of the system as a whole.
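A tiny Python sketch of that concurrency idea, using asyncio (the task names and timings are made up): three tasks that each spend time waiting finish in roughly the time of one wait, not three, because none of them blocks the others.

```python
import asyncio
import time

# Each task "waits" (e.g. for I/O, or a review) for 0.1 seconds.
async def task(name):
    await asyncio.sleep(0.1)  # non-blocking wait: other tasks proceed
    return name

async def main():
    start = time.perf_counter()
    # Run all three concurrently instead of one after another.
    results = await asyncio.gather(task("a"), task("b"), task("c"))
    elapsed = time.perf_counter() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
assert results == ["a", "b", "c"]
assert elapsed < 0.25  # ~0.1s total, not 0.3s: the waits overlapped
```

The same principle applies to teams: overlapping the waiting, rather than queuing it up, is where the speed comes from.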

There are so many common blockers that it’s beyond the scope of this little essay to discuss them all, but I can offer some general advice on unblocking your development teams:

  • Trust and empower teams to make more of the decisions
  • Encapsulate the knowledge and skills needed to deliver user/business outcomes within teams
  • Limit the amount of work in progress. It can be hard to see the bottlenecks and blockers when the team has a bunch of half-finished features in play.
  • View all handovers and sign-offs with hostile suspicion. They are the traffic lights of your dev process.
  • If something’s hurting (e.g., merging feature branches), do it more often. If possible, do it continuously. This is especially true about communication.
  • Java Jane doesn’t have to wait for the DBA to add a column if she can do it herself. T-shaped developers get blocked far less often.
  • More traffic = slower traffic. Team size has a similar, very well-known effect.
  • Dependencies are not your friend. If adding a new feature involves every team, you’re going to be doing a lot of waiting.
  • Remember that traffic lights exist for a reason. The speed demons in our teams are as likely to cause accidents as the speed demons on our roads. When unblocking processes, remember to make sure safety isn’t compromised. Not testing code before a release might seem faster…
  • And don’t forget – it’s the speed of the system we’re optimising. Focus more attention on that, not on individual developers.

If you’re serious about building your team’s capability to rapidly, reliably and sustainably evolve software to meet rapidly changing business needs, my Code Craft and Test-Driven Development live remote training workshops are HALF PRICE until March 31st 2025.

The A-Z of Code Craft – M is for Modularity


The ultimate goal of code craft is to be able to rapidly and sustainably evolve working software to meet rapidly changing business needs.

The key to this is short delivery lead times, and the key to that is making sure the software’s shippable at any time.

The key to software being shippable at any time is continuous testing, and the key to continuous testing is tests that run very fast.

If it takes 8 hours to sufficiently test the software, we’re at least 8 hours away from it being shippable (in practice – because you can introduce a lot of bugs in 8 hours – a lot longer).

If it takes 80 seconds, then a potential release is much, much closer to hand (and the bug count’s likely to be much, much lower – it’s a win-win).

So fast automated tests are the key to agility. And the key to fast automated tests is good separation of concerns in the architecture, otherwise known as modularity. (See “E is for Encapsulation”.)

In an effectively modular design, different aspects of the system can be changed, reused and – most importantly – tested without needing to change, reuse or test other aspects. So we can test the calculation of the mortgage interest rate without having to involve, say, a database or a UI in that test.
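As a sketch of what that buys us (the rate calculation here is hypothetical): when the core logic is a pure function, the test needs no database and no UI, so it runs in microseconds.

```python
# Core logic as a pure function: no database, no UI involved, so the
# test runs in microseconds. (The calculation itself is made up.)
def monthly_interest(balance, annual_rate):
    return round(balance * annual_rate / 12, 2)

# Persistence and presentation live behind separate interfaces; the
# calculation can be tested without ever touching them.
assert monthly_interest(100_000, 0.06) == 500.0
```

Thousands of tests like this can run in the time a single end-to-end test takes to start a browser.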

Most programming languages and tech stacks have their own mechanisms for encapsulating code at multiple scales, from individual functions or classes all the way up to distributed services and systems of systems. But the same principles apply at every level.

Well-designed modules:

  • Have one reason to change
  • Hide their internal workings
  • Are easily swappable
  • Have interfaces designed for the client’s needs (not “What does this module do?”, but “What does the client need to tell it to do?”)

In particular, when it comes to achieving test suites where the vast majority run very fast, we need to cleanly separate our application’s logic from external concerns like accessing files or databases, calling web services, and so on. (See “H is for Hexagonal”.)

I’ll repeat it one more time, for the folks at the back: the key to agility is modularity.



The A-Z of Code Craft – L is for Liskov


One of the big benefits of modular software design is that it enables us to change systems by swapping module implementations instead of rewriting existing modules. But that requires us to take care to ensure new implementations satisfy the original contract for using that module’s services, or we’ll break client modules.

The Liskov Substitution Principle (the L in SOLID, named after computer scientist Barbara Liskov) states that an instance of any type can be substituted with an instance of any of its subtypes.

The LSP is often thought of as an object-oriented design principle, but, in practice, it applies to any mechanism of substitution in software design, including subclasses, interfaces, function pointers, web services, and so on.
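A minimal Python illustration (the shape types here are hypothetical): client code written against the supertype works unchanged with any subtype that honours the contract.

```python
# Any implementation honouring the Shape contract can stand in for
# another. (Hypothetical types, purely for illustration.)
class Shape:
    def area(self):
        raise NotImplementedError

class Rectangle(Shape):
    def __init__(self, w, h):
        self.w, self.h = w, h
    def area(self):
        return self.w * self.h

class Square(Rectangle):
    def __init__(self, side):
        super().__init__(side, side)

def total_area(shapes):
    # The client works with any Shape subtype, never needing to know which.
    return sum(s.area() for s in shapes)

assert total_area([Rectangle(2, 3), Square(4)]) == 22
```

The same substitution test applies whether the “subtype” is a subclass, an interface implementation, or a new version of a web service.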

When we, for example, substitute a different implementation of an API that breaks the original contract, we contravene Gorman’s First Law of Software Development:

“Thou shalt not break shit that was working”

LSP doesn’t just work across type relationships. It also works across versions. I see teams spending a lot of time fixing code that was broken by new releases of dependencies. And I mean a lot of time.

An extension of the LSP could state something like “A version of a component can be substituted with any newer version”. Another term for this is “backwards-compatibility”.

More often than not, teams are thoughtless about backwards-compatibility, routinely breaking contracts without realising they’re doing it.

A technique that’s gaining in popularity is contract testing. This involves creating two different set-ups for the same set of automated tests; one that stubs and mocks external dependencies, and one that uses the real end points. If the stubbed and mocked tests are all passing, but the tests using the end points suddenly start failing, that suggests something’s changed at the other end.
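Here’s a bare-bones sketch of that idea in Python (the rates API and its stub are hypothetical): the same suite of checks runs against a stub in the fast build, and against the real endpoint in a separate contract build.

```python
# One set of tests, two set-ups: a stub for the fast build, and (in a
# separate contract build) the real endpoint. Hypothetical rates API.

class StubRatesApi:
    def current_rate(self, product):
        return 0.06  # canned response matching the agreed contract

def rate_test_suite(api):
    """The contract, expressed as executable checks."""
    rate = api.current_rate("mortgage")
    assert isinstance(rate, float)
    assert 0.0 < rate < 1.0

# Fast build: run the contract against the stub.
rate_test_suite(StubRatesApi())

# The contract build would run the *same* suite against a real client:
# rate_test_suite(RealRatesApi(base_url=...))
```

If the stubbed run passes but the real-endpoint run fails, the contract has been broken at the other end.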

The biggest pay-off comes when the API team can run the client team’s contract tests themselves before releasing, giving them a heads-up that they’ve broken Gorman’s First Law.



The A-Z of Code Craft – K is for K.I.S.S.


If the value created by code was a superhero, complexity would be its Kryptonite. The more complex our code is, the longer it takes to deliver, the harder it is to test, the more likely it is to have bugs, the harder it is to understand, and the more it costs to change.

For all these reasons, we should strive to keep our code as simple as possible (and no simpler). The design mantra we recite is “Keep It Simple, Stupid!” (KISS).

We might think that simpler code will be easier to write, but it turns out that, quite often, simpler is harder.

It takes more thought. It takes more discipline. And it requires us both to let go of preconceived ideas about design, and to let go of our egos. A lot of complexity in code is accidental – we just didn’t think of a simpler way. But a lot is also deliberately created to impress, in the mistaken belief that more sophisticated code means a more sophisticated coder.

And, let’s be honest now, a lot of code just isn’t needed to provide the user outcomes we want to achieve.

In the equation of “value created – cost”, every additional line of code, every additional branch, every additional abstraction, every additional dependency erodes the profit margin of the work we do.

The aim of the game of software development is to create maximum value at minimum cost, and we therefore need to be both always seeking more value, and always seeking the simplest route to unlocking it. What’s the least we can do that will work?

So, the best code crafters continuously monitor the complexity of the software, continuously seek feedback on code quality, continuously refactor to remove accidental complexity, continuously question whether complexity is really needed, and – most important of all – continuously keep one eye on the prize.



The A-Z of Code Craft – I is for Iterative


“A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work.”

John Gall, Systemantics: How Systems Work And Especially How They Fail (1975)

Software developers have known since pretty much the start that getting a solution of any appreciable complexity right in a single pass is nigh-on impossible.

That should come as no surprise. The chances of one line of code being spot-on might be reasonably good. But 100 lines? 100,000 lines? 1 million lines? The odds are so stacked against us that they’re effectively zero.

We like to think of software as machines, but the complexity of modern software systems is more comparable to biology. We’ve never built machines with that many moving parts.

Nature has a tried and tested way of solving problems of this level of complexity, though: EVOLUTION.

As Gall notes, we don’t start with the whole all-singing, all-dancing version. We start simple – as simple as possible – and then we iterate, adding more to the design and feeding back lessons learned from testing to see if the software’s fit for purpose. Importantly, every iteration of the software works (and if it doesn’t, git reset --hard). Complexity emerges one small, simple step at a time.

When we approach it like this, the emphasis in software development shifts profoundly, from delivering code or delivering features, to learning how to achieve users’ goals and solve problems. This is why frequent small releases of working software, designed in close collaboration with our users, are so very important.

It’s also why the most effective teams are always keeping one eye on the prize, continuously – there’s that word again! – revisiting the goals and asking “Did we solve the problem?” If the answer’s “No”, and it usually is, we go round again, feeding back what worked and what needs to change into the next small release. The faster we iterate, the sooner we solve the problem.

Rapid iteration of solutions is no small ask, though. If we want to put working software in the hands of users, say, once a day, then that software needs to be tested and integrated at least once a day (and probably many times a day). Once we pull on that thread, a whole set of disciplines emerges that some of us call “code craft”.

And so here we are!



The A-Z of Code Craft – G is for GUI


There’s a patch of grass near my house that doesn’t belong to any of us living on the street. Nobody mows it. We don’t even think of it as a “lawn”, and we don’t care for it. But we often complain about it. It causes problems.

In software, too, there can be patches that don’t receive the same attention as the rest of the source code. We might not even think of them as “code”. Historically, graphical user interfaces have suffered from this lack of care.

It’s not uncommon to find “front end” code lacking in fast automated tests, for example. Unsurprisingly, we often find that front-end code is broken as a result.

And this is usually because front-end GUI code can be hard to test in isolation. This is almost always because of a lack of separation of concerns in that part of the architecture. (See “E is for Encapsulation”.)

Developers will even tell us that “unit” testing GUI code isn’t possible. But this is almost always not the case (although some front-end frameworks don’t make it easy, it has to be said).

When we review that code, we’ll usually find that display logic (how is the mortgage interest rate formatted?), system information (what is the mortgage interest rate?), user interaction logic (what does it mean when I click the button labelled “Calculate Interest”?), and core “business” logic (how is the mortgage interest rate calculated?) are mixed in together in the same UI module.

This makes it nigh-on impossible to test display rendering, system state, user interactions and core logic separately. Basically, to test that the interest rate’s calculated correctly, somebody/something has to click the buttons and check the outputs that are being displayed.

A front-end architecture that more cleanly separates these concerns makes it much easier to write fast tests for the majority of that code. The idea of a “view model”, in particular, can enable us to capture the logic of the user’s journey without mixing that up with the details of how that experience is actually represented on the screen.

We can unit test logically what happens when the calculateInterest() function of the MortgageApplicationView is invoked. We don’t have to load the web page and we don’t have to click the “Calculate Interest” button to check how the system responds. Rendering for, say, the browser is another step beyond that; an implementation detail we want to hold at arm’s length. If we’re smart we can also unit test how the MortgageApplicationView is rendered as HTML in a separate test.

Some devs might say “But our front-end framework has MVC/MVP/view models”. Great! But if we rely on those to capture our logical user experience, we’re tying it closely to that framework. React, Vue.js, Flutter and other UI frameworks are implementation details. We don’t want to mix our logic with implementation details. UI logic should be POxOs (Plain Old Java Objects, Plain Old Python Objects, etc.) so that we have full control over them.
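A plain-old-object view model along those lines might look like this in Python (the class, the rate format, and the injected calculator are all hypothetical): no framework, no rendering, just the logical UI.

```python
# A plain-old-Python view model: the logical user experience, with no
# framework and no rendering involved. (Names are hypothetical.)
class MortgageApplicationView:
    def __init__(self, calculator):
        self.calculator = calculator   # core logic, injected
        self.displayed_rate = ""

    def calculate_interest(self, balance):
        # What logically happens when the user clicks "Calculate Interest".
        rate = self.calculator(balance)
        self.displayed_rate = f"{rate:.2f}%"  # display logic, still testable

# Unit-testable without loading a page or clicking a button:
view = MortgageApplicationView(calculator=lambda balance: 5.25)
view.calculate_interest(100_000)
assert view.displayed_rate == "5.25%"
```

Rendering that `displayed_rate` into HTML, or into a widget, is a separate concern with its own separate test.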



The A-Z of Code Craft – E is for Encapsulation


Imagine you’re a pastry chef working in a professional kitchen. For some reason, the utensils you use to make pastries and cakes aren’t kept on your workstation. The rolling pins are kept on the meat workstation. The cookie cutters are kept on the fish station. The pastry brushes are kept on the saucier station. To do your job, you spend much of your time going to other chefs’ stations, and your workflow has to change every time they reorganise their stations.

A more efficient kitchen design would store the rolling pins, cookie cutters and pastry brushes on the pastry station, giving you the tools you need to do your job, and freeing you up from needing to know the details of other chefs’ workstations.

The technical term for this in software design is “encapsulation”.

In software design, data and dependencies are the “utensils” used by modules to fulfil their responsibilities. When a module needs to know something about another module, we call that “coupling”. When the knowledge a module requires to do its job is internalised inside that module, we call that “cohesion”. THINGS THAT CHANGE TOGETHER, BELONG TOGETHER.

A good modular design is said to be “cohesive and loosely-coupled”, making it easier to change one part of the system without having to change other parts. Changing how loan repayments are calculated doesn’t affect how, say, interest rates are calculated. They are SEPARATE CONCERNS.

Separation of concerns has a profound impact on our ability to change, test and reuse software. If coupling between modules is high, changes can ripple out along the couplings, causing the smallest changes to have wide-reaching effects. If the module we want to reuse is tightly coupled to other modules, we’ll need them, too – buying the whole Mercedes just to use the radio! If loan repayments and interest rates are calculated in the same module, we can’t test repayments without involving interest rates. If the module that calculates interest rates is a web service, our repayments tests are going to be slow.
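One way to picture that separation in Python (the repayment formula is the standard amortisation calculation; everything else here is hypothetical): the rate arrives as a plain parameter, so testing repayments involves no rates module, no web service and no database.

```python
# Repayments don't need to know how interest rates are obtained; the
# rate arrives as a plain value, so the test needs no web service.
def monthly_repayment(principal, annual_rate, months):
    """Standard amortisation formula for a fixed-rate loan."""
    r = annual_rate / 12
    return round(principal * r / (1 - (1 + r) ** -months), 2)

# Tested in isolation: a plain number stands in for the rates service.
assert monthly_repayment(100_000, 0.06, 360) == 599.55
```

The two calculations remain SEPARATE CONCERNS: a change to how rates are fetched can’t break this test, and vice versa.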

Encapsulation and separation of concerns applies at every scale in software design, from individual functions to systems of systems. At larger scales, the cost of coupling rises by orders of magnitude. Tightly-coupled classes are a pain. Tightly-coupled web services kill businesses every day.

Finally, consider also how encapsulation might be applied to TEAMS. What impact does it have when, say, the user experience designer on a product’s placed on a separate team?



The A-Z of Code Craft – D is for D.R.Y.


“Don’t Repeat Yourself” is a widely misunderstood, often misapplied, and consequently much-maligned principle in the design of software.

While it’s true that repetition in code can hurt us, by multiplying the cost of change, it’s by no means the worst thing we can do. Indeed, sometimes repetition can help us if it makes code easier to understand. (If you refactor code to remove duplication, stop to ask if that’s made it harder to follow. If it has, put the duplication back!)

But that’s not what D.R.Y. is really about. Think of it this way: what’s the opposite of duplication? REUSE.

When we see multiple repetitions of a similar thing – be it copied-and-pasted code, or a repeated concept that appears in multiple places (I remember one application that had 3 Customer tables in the database, each created by different people for different features) – that’s a hint about what our design needs to be.

When we refactor to consolidate, we discover the need for reusable abstractions like parameterised functions or shared classes. Duplication points us towards potential modularity.
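As a toy illustration in Python (the discount scenarios are invented): three copy-pasted calculations, once consolidated, become three uses of one parameterised function.

```python
# After refactoring three copy-pasted discount calculations, the
# duplication pointed to one parameterised function. (Hypothetical.)
def discounted(price, rate):
    return round(price * (1 - rate), 2)

# The three former duplicates become three uses of one abstraction:
assert discounted(100.0, 0.10) == 90.0   # loyalty discount
assert discounted(100.0, 0.25) == 75.0   # seasonal sale
assert discounted(100.0, 0.05) == 95.0   # staff discount
```

The abstraction is demonstrably useful, because it was extracted from real uses rather than speculated in advance.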

This is an evidence-based approach to design. We don’t speculate that a function might be reused, we see where it will be reused; we see the need for it in the current code.

Duplication in code can act as bread crumbs leading us to a better design and to genuinely useful – because they’re being used – reusable components. Removing duplication is where some of our most popular libraries and frameworks came from.

As for taking it too far, it’s certainly true that jumping on duplication too quickly can produce over-abstracted code, and a much higher risk of choosing the wrong abstractions. The more examples we see, the more likely an abstraction is both to be the right one and to actually pay for itself in the future.

But let the duplication build up, and the refactoring’s going to take longer. In the zero-sum game of software development, things that take longer are less likely to happen, so we need to strike a balance.

The “Rule of Three” is a rough and ready guide for how many examples we might want to see before we refactor. Sometimes more, sometimes fewer, but on average, around three.

Scale is also a factor here. Reuse creates dependencies. If those cross team boundaries, it really needs to be worth it.

Don’t forget, either, that repetition applies not just to our code, but also to our process for creating it. Automating repeated tests (regression tests) is a good example of how refactoring duplication of effort in our process can streamline delivery.

Be mindful, though, that just as over-abstraction is a risk in refactoring duplicated code, over-zealous automation is a risk in refactoring duplicated effort. I’ve worked with teams who have so many scripts and custom tools that it takes weeks or even months for new joiners to get up to speed, and some of those tools saved them less time and money than they took to create and maintain.

