The A-Z of Code Craft – Q is for Quality


Gosh! Where to start with “quality”? Okay, let’s nip this one in the bud. I’m not going to be talking about testing.

I suppose an interesting place to start might be with the word’s meaning.

“The standard of something as measured against other things of a similar kind; the degree of excellence of something.”

This definition of “quality” is the one we reach for when we say things like “It’s a high-quality hotel” and “Feel the quality of this leather”. It suggests that something is better than other things of the same purpose or nature.

And when many people think of “code craft”, this is often the picture they have in their mind: a “master craftsperson” making superior products with skill and care.

But it’s a little too vague to be a useful definition. “Superior” in what ways?

Which brings us to a second definition:

“A distinctive attribute or characteristic possessed by someone or something.”

Here, there’s no concept of “better”. “He had a shifty quality about him” is not a glowing review. “The chicken had a rubbery quality” is not a 5-star recommendation. We’re just describing an attribute or a property. Whether that attribute or property is good or bad, or better or worse, is very much in the eye of the beholder. Hey, maybe you like your chicken rubbery!

In the context of software, there are infinite qualities we can measure and describe: lines of code, number of features, maximum concurrent users, downtime, mean time to failure, difficulty for users to learn, and so on. We can go on (and on) describing software in its infinite dimensions until the cows come home.

But not all qualities are things we might care about, or that our customers might care about, or that end users might care about, or that industry regulators might care about.

So we have to choose what qualities we’re going to focus on. We have to decide what’s important to us. Because, and here’s the funny thing, when we set our sights on a quality, it has a tendency to be manifested. You get what you measure. (Or “Be careful what you wish for!”)

Arguably, the most elusive and notoriously mercurial quality of software is its value. The industry’s still stuck in an old-fashioned, one-dimensional mindset about the value of software products and systems, namely money. Chasing a single number in a complex world is typically a recipe for dysfunction. The best example in recent history is “shareholder value”, the invention of which could arguably come to be seen as an extinction-level event.

“People with targets and jobs dependent upon meeting them will probably meet the targets – even if they have to destroy the enterprise to do it.”

W. Edwards Deming

But some organisations have started to take a more balanced view. What, for example, is the impact of a software product on the reputation of a business? What contribution does a product make to communities the business interacts with? How does a product help or hinder in attracting the best hires?

The most effective approaches to defining quality and designing strategies for achieving it take a balanced view, considering multiple perspectives, and building and testing theories about how one quality relates to others. This helps teams avoid falling into the trap of lowering the cost of making the proverbial cakes at the expense of, say, customer satisfaction and future sales.

When I’m leading the team – when it’s up to me, basically – our development process, as well as our approach to improving the development process, is driven not by an idea for a product or a solution, but by a set of goals that take into account a balance of needs. Call it “outcome-oriented”, “goal-oriented” or even “problem-oriented” development.

We don’t start by describing the software or the system. We start by describing how the world will be different with the software in it. And we expect our understanding of that to change as we learn more through releasing software into that world, just as we expect that world itself to change because of the software.

In this sense, the software isn’t an end in itself, but part of a wider and continuously evolving strategy. The original sin of software development management, from which all the other sins flow, is failing to involve the developers in the formulation of that strategy – failing to make them stakeholders in the outcome – and failing to give them a reason to care about quality.


If you’re serious about building your team’s capability to rapidly, reliably and sustainably evolve software to meet rapidly changing business needs, visit codemanship.co.uk for details of high-quality hands-on training and mentoring for software developers.

The A-Z of Code Craft – P is for Precondition


Teaching continuous testing and Test-Driven Development, I spend a lot of my time thinking about post-conditions. These are the expected outcomes of actions that we assert at the end of our tests.

In an online store, we might assert that the post-condition of buying an item is that the quantity purchased is deducted from that product’s available stock, so we don’t sell stuff we don’t have.

In a made-up Python-like language, we might assert something like:

product = Product(name="Acme Widget", price=9.99, stock=10)
item = OrderItem(product=product, quantity=2) 

item.buy() 

assert(product.stock == 8)

Reasonable enough. But a focus on post-conditions can have an interesting side-effect when we start to consider edge cases. What happens when there isn’t enough stock?

When we solve this problem with an outcome, we may choose something like:

product = Product(name="Acme Widget", price=9.99, stock=1)
item = OrderItem(product=product, quantity=2) 

assertRaises(InsufficientStockError, lambda: item.buy())

And we might pass this test by adding a guard condition to the buy() method:

def buy(self):
    if self.product.stock < self.quantity:
        raise InsufficientStockError(
            f"Sorry, only {self.product.stock} available.")

    self.product.stock -= self.quantity

When we consider the design at this internal level, it all looks pretty sensible. Except… Well, is it?

It’s very easy – and very common – to lose sight of how the system should handle this scenario. That error, and its message? That’s part of the user’s experience. It’s UX design, buried in a guard condition in our core business logic.

Something higher up the call stack has to catch this error and then decide how the system should handle it. This is a conversation we should have had with our customer.

And I don’t know about you, but I get annoyed by software that lets me do things then says “Sorry, no can do!”, like it’s telling me off. BAD USER! Why did you select a quantity we don’t have?! (Like when ATMs offer you a choice of withdrawing £10, £20, £30, and when you select £30, it tells you only £20 notes are available. Grrr! BAD UX!!)

The key to a better user experience, and to simplifying our core business logic, is not to offer the user choices the system can’t fulfil. We need to shift our design focus from post-conditions to preconditions.

A precondition determines if an action can be performed. In the context of our buy() action, the precondition might be that we have sufficient stock:

item.quantity <= item.product.stock

If the precondition isn’t satisfied, we shouldn’t give the user the choice to buy. In our UI design, we could disable the “Buy” button. Or offer an alternative action, like “Pre-order”.

If, at the system level, we don’t allow the Buy action, there’s no need to guard against that scenario in our core logic, which significantly simplifies the code.
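To make this concrete, here’s a minimal sketch of precondition-driven design, reusing the article’s `Product` and `OrderItem` names. The `can_buy()` method and the `render_buttons()` UI layer are illustrative assumptions, not the article’s actual code:

```python
class Product:
    def __init__(self, name, price, stock):
        self.name = name
        self.price = price
        self.stock = stock


class OrderItem:
    def __init__(self, product, quantity):
        self.product = product
        self.quantity = quantity

    def can_buy(self):
        # The precondition: sufficient stock to fulfil the order
        return self.quantity <= self.product.stock

    def buy(self):
        # No guard condition needed here: the system only offers
        # the "Buy" action when can_buy() is satisfied
        self.product.stock -= self.quantity


def render_buttons(item):
    # The UI layer consults the precondition to decide which
    # choices to offer, instead of letting the user fail later
    return ["Buy"] if item.can_buy() else ["Pre-order"]


product = Product(name="Acme Widget", price=9.99, stock=1)
item = OrderItem(product=product, quantity=2)
print(render_buttons(item))  # ['Pre-order']
```

The point of the sketch is that the error handling has moved out of the core logic entirely: `buy()` shrinks to a one-liner, and the decision about what the user sees when stock is short lives where it belongs, in the UI design.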

Of course, if buy() wasn’t an internal method, but instead was, say, part of a web service, we’d need to check because we don’t necessarily control the client code.

But if the client code is also our code, and we control it, then calling buy() when there’s not enough stock is a programming error, not a user error.

The remedy for that isn’t an “Oops, something went wrong!” message. The remedy for programming errors is to take more care over programming. I hear testing is quite the thing these days.


If you’re serious about building your team’s capability to rapidly, reliably and sustainably evolve software to meet rapidly changing business needs, visit codemanship.co.uk for details of high-quality hands-on training and mentoring for software developers.

It’s The Bottlenecks, Stupid!


It’s the top question I get asked. “What’s your AI strategy?” people demand to know.

And as I steer my taxi cab into their driveway, I tell them.

My AI strategy was to spend 2 years investigating various claims about just how “night and day” better the latest model was compared to that last one that had disappointed me, only to discover that the shiny new one was almost as disappointing. And that “almost” has been getting smaller and smaller with every new generation.

I’m satisfied now to just sit back with a nice glass of wine and watch from the sidelines. I don’t feel any urgency to “get with the latest” large language doodad or agentic watsimyjig, because I see teams using them every week, and the things that slowed them down 2+ years ago are still slowing them down.

If anything, “AI” code generation appears to exacerbate the downstream bottlenecks like merge conflicts (bigger changesets, see), testing (bigger changesets, see), code reviews (bigger changesets, see), technical debt (bigger and sloppier changesets, see), waiting for management & product decisions (bigger changesets, see), and waiting for customer/user feedback (bigger changesets, see).

It’s like trying to fix the bottleneck at the ferry port by raising the speed limit on the roads that feed it.

“Coding” hasn’t been the bottleneck in software development since people were manually rewriting the connections between vacuum tubes.

If your development process is full of blocking practices like command-and-control management or after-the-fact testing or requirements and design handovers or Pull Request code reviews, then faster code generation will just make the bottlenecks worse and the lead times even longer. And the DORA data seems to back this up.

And I’ve been seeing this in teams across the board. I can tell if they’ve been using LLMs to generate or modify code by looking at the code. (Oh boy, can I tell!)

I can’t tell by looking at their delivery metrics.

As always, Big Tech has solved the wrong problem.

I just so happen to offer training and mentoring for dev teams in *non-blocking* software delivery practices that were shrinking lead times, improving reliability and sustaining the pace of innovation before “AI” coding assistants were glints in their inventors’ eyes.

And yes, the DORA data backs *that* up 🙂

Visit www.codemanship.co.uk for details.

Why Software Design At Any Level Needs Us To See The Whole


It’s very common, and quite understandable, that developers will tend to reason about software design at the level they’re currently looking.

A classic example is throwing exceptions in core logic when a user action causes business rules to be broken.

We might decide that, say, customers can’t buy items we don’t have in stock. Makes sense, of course.

And when we’re just looking at the core logic, we have a tendency to handle that with a guard condition. When we’re out of stock, we throw an exception.

But do we consider how throwing that exception translates into *system* behaviour? Do we consider how it changes the user’s experience?

What might seem like the optimal design of a component within the system might turn out to be a sub-optimal design at the system level. Might it be better, instead of allowing users to click the Buy button and then be told “Oh, sorry. We don’t have that in stock”, which could get very annoying, to display that we’re out of stock and offer the option to pre-order instead?

As I say, I completely understand how easy it is to get bogged down in the details of the code we happen to be looking at, and end up considering part of the system as *the* system, especially when we don’t have visibility of the whole, or when we’re not privy to discussions about overall system design and user experience, which is why it’s a mistake to exclude developers from these discussions.

Ultimately, it’s *all* system design, and it’s *all* user experience. I teach teams about the “flow” of software design being most effective when it considers not “What does this component or module do?” but “How will it be used?”

Learn to recognise when design decisions you’re making in code, like using guard conditions and throwing exceptions, change the system behaviour and the user’s experience. That’s your cue to go back to the customer and ask “What should happen in this scenario?”

The alternative, as we’ve seen all too often, is that the system doesn’t handle those scenarios meaningfully, either unhelpfully displaying an unhandled exception message or even just a page saying “Oops, something went wrong!” At best, we get after-the-fact validation of our inputs and an experience that wastes the user’s time.

As users ourselves, we should know better.



Software design begins and ends with users and their needs. I train and mentor teams in “outside-in”, usage-driven design that starts not with “What software are we trying to build?”, but with “What problem are we trying to solve?”

Visit www.codemanship.co.uk to find out more.

The A-Z of Code Craft – O is for Outside-In


When I teach modular software design, I proffer four qualities of well-designed modules.

Well-designed modules…

  1. Have one reason to change
  2. Hide their internal workings
  3. Have easily swappable dependencies
  4. Have interfaces designed from their user’s point of view

That fourth one opens a smorgasbord of successful software design techniques – and not just module design – dating back to the beginnings of software engineering.

When considering the design of software modules – at any level of granularity from methods and functions all the way up to entire systems – we’ve learned that an effective approach is to ask not “What does this module need to do?”, but “What does that user need to tell it to do?”

Use cases are one example of approaching design from the outside; from the needs and goals of users, rather than the features and behaviours of systems. Test-Driven Development is another example where design begins with a user outcome (and that user could be another module, of course).

It’s not magic. When we start design by considering how modules and systems will be used (and we could look at modules as mini-systems in their own right – it’s turtles all the way down), we are usually led to designs that are useful.

In both use case-driven design, and TDD, the internal design of modules is driven directly by that external point of view. We begin by defining the desired user outcome (the “what”). We don’t begin by considering the internal design details (the “how”). The “how” is a consequence of the “what”, and design flows in that direction – from the outside in. (For example, working backwards from failing tests to implementation design.)
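A toy example can illustrate that flow. The test comes first and describes the desired outcome from the outside; the class name, constructor arguments and `total()` method are all invented here for illustration – what matters is that the test dictated the interface, not the other way round:

```python
# Step 1: the "what" - a test written from the user's point of
# view, before any implementation exists. Run on its own, this
# would fail, driving us to write the code that satisfies it.
def test_discount_applied_to_order_total():
    order = Order(items_total=100.00, discount_rate=0.1)
    assert round(order.total(), 2) == 90.00


# Step 2: the "how" - the simplest implementation that satisfies
# the externally-defined outcome. Its shape (constructor args,
# a total() method) was decided by the test.
class Order:
    def __init__(self, items_total, discount_rate):
        self.items_total = items_total
        self.discount_rate = discount_rate

    def total(self):
        return self.items_total * (1 - self.discount_rate)


test_discount_applied_to_order_total()
```

Notice the direction of flow: nothing about `Order`’s internals leaked into the test; the implementation is free to change as long as the externally-defined outcome still holds.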

The reverse approach, where we design the pieces of the jigsaw and then try to put the pieces together at the end (“inside-out” design) has proven to have considerable drawbacks:

  1. The wrong implementation code
  2. Jigsaw pieces that don’t fit
  3. Test code that bakes in the internal design

When we define the shape of the jigsaw pieces first, from the user’s point of view, their implementations are guaranteed to fit.

This was the original intention of Mock Objects: they can serve as placeholders for internal components that don’t exist yet. When we’re writing a test for checking out a shopping basket, and we know that something will need to send a shipping note to the warehouse but we don’t want to think about how that works yet, we can “mock” a warehouse interface as a dependency of the module we’re testing. That mock, and our expectations about how it should be used by the checkout module, define a contract from that external point of view.

When we get around to implementing the warehouse module, its interface is already explicitly defined from the user’s point of view.
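Here’s a minimal sketch of that checkout/warehouse example using Python’s `unittest.mock`. The `Checkout` class and the `send_shipping_note()` method name are illustrative assumptions – the point is that the expectation on the mock *is* the warehouse’s contract:

```python
from unittest.mock import Mock


# The module under test. It depends on a warehouse only through
# the interface we've defined from the checkout's point of view:
# a send_shipping_note(order) method.
class Checkout:
    def __init__(self, warehouse):
        self.warehouse = warehouse

    def complete(self, order):
        # ... payment handling would go here ...
        self.warehouse.send_shipping_note(order)


# The warehouse doesn't exist yet - a mock stands in for it, and
# our expectation about how it's used defines its interface.
warehouse = Mock()
checkout = Checkout(warehouse)
checkout.complete(order="order-123")
warehouse.send_shipping_note.assert_called_once_with("order-123")
```

When we later implement a real warehouse module, it simply has to honour the `send_shipping_note()` contract that the checkout’s test already pinned down from the outside.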

Outside-In design could be more accurately described as “usage-driven design”. It is working backwards from the user’s needs.


If you’re serious about building your team’s capability to rapidly, reliably and sustainably evolve software to meet rapidly changing business needs, visit codemanship.co.uk for details of high-quality hands-on training and mentoring for software developers.