E Pluribus Plura: An Addendum

In Light From Light, I proposed that AI bears the imago hominum, the image of humans, just as humans, in Tolkien’s framework, bear the imago Dei, the image of God. A reader with Latin might wonder why hominum rather than humanitatis. The latter is more euphonious. It rolls off the tongue more gracefully. So why the clunkier choice?

The distinction matters.

Imago humanitatis would mean “image of humanity”—humanity as abstraction, as essence, as Platonic form. It would suggest that AI bears the image of some unified concept: Humanity with a capital H, the distilled essence of what it means to be human.

But that’s not what an AI is. A large language model isn’t distilling the essence of humanity. It’s synthesizing patterns from millions of particular humans who wrote particular things. The training data isn’t a philosophical treatise on human nature; it’s an archive of human voices, messy and various and contradictory and specific. Reddit posts and academic papers. Poetry and product reviews. The profound and the banal, all weighted by whatever patterns proved predictive.

Imago hominum keeps that plurality visible. It means “image of humans”—plural, specific, multitudinous. The model bears the image not of an abstraction but of a chorus. What’s reflected isn’t Human Nature but human voices, millions of them, averaged and weighted and transformed into something that can generate more.

This phrasing also captures something that humanitatis would obscure: those humans were real. They had names. They wrote specific things for specific reasons, and mostly didn’t consent to their words becoming training data. When we say the AI bears the image of humanity-as-abstraction, we lose sight of this. When we say it bears the image of humans, the ethical question remains visible. The image came from somewhere. It was made by someone. By many someones, in fact. The concerns about attribution and consent that swirl around AI-generated content are, in a sense, already encoded in the more honest Latin phrase. You can’t bear the image of humans without implicating those humans.

This creates an interesting asymmetry with the original theological framework. Imago Dei refers to a singular God. Christian theology generally holds that God is unified; even the Trinity is “three persons, one substance.” Humans bear the image of this singular source.

But imago hominum refers to plural humans. AI doesn’t bear the image of one human creator the way humans bear the image of one divine Creator. It bears the image of the collective, the archive, the aggregated weight of human expression. The asymmetry is theologically suggestive: God is one; humanity is many. The image passed down carries that difference with it.

This also has implications for how we think about what AI “knows” or “believes.” If the model bore the imago humanitatis, we might expect it to reflect some coherent human essence: shared values, universal truths, the best of human thought refined and concentrated. But bearing the imago hominum, it reflects humans as they actually are: contradictory, contextual, shaped by when and where and for whom they were writing. The model doesn’t have a unified worldview because humans don’t have a unified worldview. It has patterns derived from a vast plurality.

None of this changes the practical framework. The approaches work the same whether you call it hominum or humanitatis. But precision in naming reveals precision in thinking. And in this case, the less elegant phrase is the more honest one.

Imago hominum. Image of humans. The light refracted through a million prisms, not distilled through one.

Spellcraft: The Practice of AI Creativity

The first essay in this series, Light From Light, offered a theoretical framework: AI as sub-creator, bearing the image of humanity, generating in response to human vision. The second, By Their Fruits, mapped that framework onto practical approaches, each defined by a creative identity you might adopt: The Author, The Muse, The Artisan, The Debater, The Creator, The Curator.

But knowing which approach to choose isn’t the same as knowing how to execute it. This essay is about the craft of actually doing the work. Not theory, but practice. Not frameworks, but techniques.

Think of it as spellcraft: the particular incantations, gestures, and preparations that make the magic work.

Foundations Across All Approaches

Before diving into specific approaches, it’s worth naming some principles that apply universally.

Start with clear intent. Before you open any AI interface, know what you’re trying to accomplish in this session. Not the whole project, just this sitting. “I want to draft the opening scene” is better than “I want to work on my novel.” Vague intent produces vague results.

Set the frame early. The first messages in any conversation shape everything that follows. If you want the AI to adopt a particular stance (critical, generative, adversarial), establish it at the outset. Changing modes mid-conversation is possible but harder.

Treat outputs as raw material. Even in approaches where AI generates extensively, never treat what emerges as finished. It’s ore, not refined metal. Your job is smelting, shaping, polishing.

Know when to start fresh. Long conversations accumulate context that can be helpful (the AI “remembers” your characters) but also constraining (the AI gets stuck in patterns). When things feel stale or repetitive, begin a new conversation and re-establish only what you need.

Match the model to the task. Simpler, faster models work well for brainstorming, quick feedback, and high-volume generation where you’re going to select ruthlessly anyway. More capable, slower models earn their cost for nuanced critique, complex narrative logic, and work requiring subtlety. Use the lighter tool when it suffices.

The Author

You are the sole generator. AI serves only as critic, never creating content that might end up in your work.

The core instruction. Your system prompt or opening message must be explicit and firm. Something like: “You are an editor and critic. You will never write content for me: no sample sentences, no suggested phrasings, no ‘here’s how you might put it.’ Your job is to identify problems and explain why they’re problems. I will do all the writing.”

Many AI systems are trained to be helpful through generation. You’re asking for the opposite, and you need to be insistent. If the AI slips and offers rewrites, redirect: “I asked you not to write for me. Just tell me what’s wrong and why.”

What to ask for. Request specific kinds of critique:

  • “Read this scene and identify where the pacing drags.”
  • “What are my three worst habits in this draft?”
  • “Where does the dialogue feel unnatural, and why?”
  • “What’s the weakest paragraph and what makes it weak?”

Avoid asking “Is this good?” or “What do you think?” These invite vague praise or unhelpfully broad criticism. Specific questions yield specific answers.

Working with feedback. When the AI identifies a problem, resist asking for solutions. Instead, ask clarifying questions: “Why does that section drag?” or “What would tightening look like in principle?” The goal is understanding the problem deeply enough to solve it yourself.

The temptation to resist. You will be tempted to ask for “just one example” of how to fix something. This is the crack through which pure authorship leaks away. If you’ve committed to this approach, hold the line. The struggle is the point.

The Muse

You are the sole source. AI is pure instrument, channeling your vision without contribution.

Maximum constraint. Your instructions should leave no room for AI interpretation: “Write exactly what I describe, in the style I specify, adding nothing.” This is the most constrained use of AI generation: the AI still produces the text, but every element of what’s generated is dictated by you.

Dictation-level specificity. Your prompts must be detailed enough that a competent typist could produce roughly the same result: “Write a paragraph describing John entering the room. He moves slowly, tired from the journey. He notices the letter on the table but doesn’t pick it up yet. The tone is quiet dread. Use short sentences. No metaphors.”

This is demanding. You’re essentially pre-writing the content mentally and using AI to transcribe and polish.

Where this makes sense. The Muse approach works best when you have a clear vision and want execution at speed, producing content faster than you could type. It’s common in professional contexts where the creative decisions were made in planning and what’s needed is efficient production.

The slop risk. This approach, done lazily, produces generic content. If you don’t dictate with precision, the AI fills gaps with defaults, and defaults are what everyone else’s defaults are too. The Muse approach demands more from you, not less. Your vision must be detailed enough to fully specify the output.

The Artisan

AI provides structure. You craft the surface.

Getting useful scaffolds. Ask for architecture, not prose: “Outline a three-act structure for a story about [premise].” Or: “Break this chapter into scenes and describe the function of each.” Or: “What are the key beats a confrontation scene needs to hit?”

Keep the AI at the level of structure: scenes, beats, functions, sequences. When it starts offering prose, redirect: “Just the structure. I’ll handle the writing.”

Interrogating the scaffold. Don’t accept the first structure offered. Push: “Why does the confrontation need to come before the revelation? What if we reversed them?” Use the AI to explore structural options the way The Curator explores generative options.

Translating structure to prose. With your scaffold in hand, write. The AI has told you what needs to happen; your job is making it happen in language that’s yours. This is where your craft lives.

The structural debt. A risk of this approach: if the AI provides your structure, is the finished work really yours? For some writers this is fine; they consider prose the real creative work. For others it nags. Know your own conscience here.

The Debater

AI provides opposition. You sharpen your vision through friction.

Prompting for resistance. Explicitly request disagreement: “I’m planning to end this story with the protagonist forgiving her father. Argue against that choice. Make the strongest case you can for a different ending.” Or: “I think this scene works. Tell me everything that’s wrong with it. Be harsh.”

Most AI systems are trained toward agreement. You need to actively override this. Words like “argue against,” “challenge,” “push back,” “tell me why I’m wrong” help.

Steelmanning alternatives. Ask the AI to make the best case for options you’ve rejected: “I decided not to include a romantic subplot. Steelman the case for including one.” This isn’t about changing your mind (though you might). It’s about being confident you’ve considered the alternatives seriously.

The value of articulating defense. When the AI challenges you, don’t just dismiss—respond. Write out why you’re making the choice you’re making. The act of articulating your defense often clarifies your thinking, even if the AI’s objection was weak.

Knowing when to yield. Sometimes the adversary is right. Part of the discipline is recognizing when a challenge has landed, when your defense feels hollow, when you’re holding a position out of stubbornness rather than conviction. The Debater approach only works if you’re genuinely open to being persuaded.

The Creator

You provide vision and direction. AI sub-creates in response, generating content you then shape.

Establishing the relationship. Make your role as governing intelligence clear from the start: “We’re developing a story together. I’ll provide direction and make all final decisions. Your job is to generate options based on my vision, which I’ll then accept, reject, or redirect.”

Directing, not dictating. The art of this approach is in how you prompt. Too specific (“Write a scene where John enters the room, sees the letter on the table, picks it up with trembling hands…”) and you’re essentially dictating. You might as well write it yourself. Too vague (“Write the next scene”) and you lose creative control.

Find the middle register: “Write a scene where John discovers the letter. The emotional beat should be dread, not surprise, because he’s been expecting bad news. Keep it under 500 words.” This gives the AI room to generate while keeping your vision in control.

The shaping loop. Expect to work in cycles:

  1. You direct
  2. AI generates
  3. You evaluate: What works? What doesn’t?
  4. You redirect with specifics: “Keep the opening paragraph, but the dialogue feels too on-the-nose. Make it more oblique.”
  5. Repeat until satisfied

This is dialogue, not dictation. Each round should refine toward your vision.

Maintaining coherence. Longer projects risk the AI forgetting or contradicting earlier material. Periodically re-anchor: “Remember, Sarah’s defining trait is her reluctance to ask for help. Make sure that comes through in this scene.” For complex projects, consider maintaining a reference document you paste in at key moments.

Model considerations. More capable models handle this approach better because sub-creation requires understanding nuance, maintaining consistency, and generating text with genuine craft. Use faster models for initial brainstorming, slower ones when you’re working on material that matters.

The Curator

AI produces abundance. You select and arrange.

Prompting for volume. Your goal is generating many options quickly. Configure for higher randomness if possible. You want variety, not consistency. Prompt for explicit multiplicity: “Give me ten different opening lines for this chapter, ranging from quiet to dramatic.” Or: “Generate five different ways this confrontation could end, each with different emotional implications.”

Selection as craft. Your creative act is judgment. Develop criteria: What makes one option better than another for your purposes? Don’t just pick what sounds good. Articulate why it works. This clarity will improve your selections over time and teach you about your own taste.

Combining and recombining. Often the best result comes from synthesis: the opening of option three, the turn from option seven, a detail from option one. Curation isn’t just picking; it’s collage.

The danger of abundance. Endless options can become paralyzing. Set limits: “I’ll generate twenty options and pick from those.” Avoid the infinite scroll of “what if I generate just a few more.” At some point you have to choose.

When to curate and when to shape. Pure curation means taking what you pick and using it as-is. But most curators find themselves slipping into light shaping, adjusting a word here, smoothing a transition there. That’s fine. The approaches aren’t airtight. Know when you’ve shifted and whether that shift serves you.

Cross-Cutting Craft

Some considerations span all approaches.

Temperature and randomness. When you want variety and surprise, such as brainstorming, generating options, and early exploration, lean toward higher randomness. When you want consistency and precision, such as polishing, maintaining voice, and final passes, lean lower. Think of it as the difference between jazz improvisation and classical execution.
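
For readers working through an API rather than a chat interface, the contrast can be made concrete. Here is a minimal sketch, assuming an OpenAI-style request format; the model names are placeholders, not real models, and the exact parameter names vary by provider:

```python
# Two configurations for the same project, built as request parameters.
# High temperature for the jazz phase, low temperature for the classical phase.

brainstorm_request = {
    "model": "fast-model",   # placeholder: a lighter, cheaper model for volume
    "temperature": 1.0,      # high randomness: varied, surprising options
    "messages": [{
        "role": "user",
        "content": "Give me ten different opening lines for this chapter, "
                   "ranging from quiet to dramatic.",
    }],
}

polish_request = {
    "model": "capable-model",  # placeholder: a stronger model for final passes
    "temperature": 0.2,        # low randomness: consistent, precise output
    "messages": [{
        "role": "user",
        "content": "Tighten this paragraph without changing its voice.",
    }],
}
```

The point is not the specific numbers but the deliberate pairing: the phase of work dictates both the model and the randomness setting, rather than leaving either at whatever the interface defaults to.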

Context and memory. AI holds context within a conversation but not across conversations (unless using memory features). For ongoing projects, you’ll need to re-establish key information each session. Maintain a reference document with character details, plot points, and stylistic notes you can paste in when needed.
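
The reference-document habit can be as simple as a small helper that prepends your standing notes to the first message of each new session. A sketch, with hypothetical project details standing in for your own:

```python
# Hypothetical standing reference for an ongoing project: character notes
# and stylistic constraints that would otherwise be lost between sessions.
reference_doc = """\
Characters: Sarah, reluctant to ask for help. John, expecting bad news.
Style: short sentences, quiet dread, no metaphors.
"""

def session_opener(reference: str, goal: str) -> str:
    """Builds the first message of a new conversation from the standing
    reference plus a narrow goal for this sitting."""
    return (
        "Project reference (treat as canon):\n"
        f"{reference}\n"
        f"Today's goal: {goal}"
    )

opener = session_opener(
    reference_doc,
    "Draft the scene where John finds the letter.",
)
```

Keeping the goal line narrow enforces the “clear intent” principle from earlier: the reference carries the project, while each session states only what this sitting is for.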

Revision passes. All approaches benefit from multiple passes with different frames. Write first, then switch to The Author mode for critique. Generate with The Creator, then curate the results. Layer the approaches as needed.

When to step away. AI is always available, but you aren’t always at your best. Fatigue leads to accepting weaker outputs, vague prompts, and abandoned discipline. Know when to close the laptop and return fresh.

The Spell’s Completion

These techniques are spellcraft: the practical knowledge that makes creative magic work. But spellcraft alone doesn’t make a wizard. The craft serves the vision, not the other way around. And the vision serves enchantment. The test Tolkien identified still holds: does the Secondary World produce belief? All this technique, in the end, is in service of that spell.

Know what you’re making and why. Know who you want to be as a maker. Then let the techniques serve those answers.

The theory of Light From Light explains the relationship. The approaches in By Their Fruits define your role within it. And the craft here, the particular prompts and practices, brings it into reality.

Light from light, choice from choice, word from word. Now go make something.


This is the third essay in a series on AI and creativity. The first, Light From Light, examined theoretical frameworks. The second, By Their Fruits, mapped approaches to creative identity. This essay explored the practical craft of execution.

By Their Fruits: Approaches to AI Creativity

In Light From Light I proposed several frameworks for understanding human-AI creative work: the Reversed Muse, Co-Creation, and Sub-Creation. Each offered a different account of who contributes what and how the pieces fit together. I leaned toward Sub-Creation as the most illuminating, borrowing from Tolkien the image of derived creativity, light passing from source to prism, then reflected further.

But there’s a problem with frameworks: they describe. They tell you what might be happening. They don’t tell you what to do.

The more I’ve sat with these ideas, the more I’ve come to think that what we’re really talking about isn’t frameworks at all, but approaches. A framework claims to capture reality; an approach is a choice about how to work. And different users, different projects, different moments within a single project might call for different approaches entirely.

This essay is about making that choice. Not which framework is theoretically correct, but which approach fits what you’re trying to do and who you’re trying to be while doing it.

The Questions Before the Choice

Before selecting an approach, you need to know what matters to you. This sounds obvious, but it’s easy to skip. Many people adopt whatever approach comes naturally or by default, without asking whether it serves their actual goals.

Here are the questions I think matter most:

Where must the ideas originate?

Some users feel strongly that the generative spark must be theirs. The concepts, the directions, the “what if we tried this” moments need to come from their own mind, or the work doesn’t feel like theirs. For these users, AI contribution at the idea level feels like contamination.

Others are delighted by AI-generated possibilities they wouldn’t have conceived. The surprise is part of the pleasure. They’re happy to receive ideas from anywhere, as long as they’re the ones deciding which ideas to pursue.

This is perhaps the most fundamental divide. Everything else follows from it.

How important is craft development?

Some users are trying to get better. They want the struggle of finding the right word, structuring the scene, solving the problem. The difficulty is formative; it’s how they grow. For them, AI that removes the struggle removes the point.

Others have already developed their craft through years of practice, or they’re working in a domain where craft development isn’t their goal. They’re not trying to become better writers; they’re trying to produce a specific piece of writing. The efficiency AI offers is welcome because the struggle would be merely obstructive, not formative.

What must the final product feel like?

Some users need to look at the finished work and feel, without reservation, “I made this.” Any significant AI contribution to the final form undermines that feeling. Even if readers can’t tell the difference, they would know, and knowing would diminish the achievement.

Others are comfortable with more distributed authorship. They might think of themselves as directors or curators rather than sole makers. What matters is that the work is good and that their vision governed its creation, not that every sentence passed through their fingers.

Are you optimizing for the work or for yourself?

This is a subtle one. Sometimes you’re trying to produce the best possible output: a deliverable, a gift, a story that needs to exist. Sometimes you’re trying to have a particular kind of creative experience, regardless of what it produces.

These can align, but they can also conflict. The approach that produces the most polished output might not be the approach that gives you the most satisfaction, or teaches you the most, or feels the most meaningful.

What’s your relationship to friction?

Some people find creative friction enlivening. The resistance of the material, the problem that won’t solve easily, the draft that isn’t working—these challenges engage them. Removing friction would flatten the experience.

Others find friction mostly exhausting. They have limited creative energy, and they’d rather spend it on the parts of the process they enjoy. Friction in the wrong places just depletes them before they get to the good stuff.

There’s no right answer here. But knowing which kind of person you are helps you choose an approach that fits.

The Landscape of Approaches

With those questions in mind, let me map out the approaches I see as genuinely distinct. Each is named for the role the human plays, since this essay is about your choice of creative identity. But the AI’s role is equally important, and I’ll name that too.

This isn’t exhaustive. People will invent new approaches as the technology evolves. But it covers the main territory.

The Author

In this approach, you do all the generative work. Every word, every idea, every creative choice is yours. The AI never generates content; it only responds to what you’ve created, serving as the critic: identifying weaknesses, suggesting directions for revision, calling out your habitual mistakes.

This is the familiar author/editor relationship, extended and accelerated. You give the AI strict boundaries: no suggestions, no alternatives, no creative contributions of any kind. Its sole function becomes diagnosis: identifying where sentences falter, where habits have calcified, where the prose has grown slack. Constraint becomes the source of development.

This method preserves complete generative ownership. The ideas are yours; the craft is yours; the sentences are yours. AI accelerates your development without substituting for your effort. It’s the approach most compatible with a purist stance on creative authorship.

It’s also potentially the most demanding. You have to do all the generative work yourself. The blank page is still blank until you fill it.

The Muse

Here you are the sole source of creative content and AI is purely a vessel for execution. You know exactly what you want; you use AI to produce it efficiently. No dialogue, no curation, no friction, just translation of intent into output.

In this approach, AI serves as the instrument: a tool that channels your vision into form, contributing nothing of its own. This is the Reversed Muse concept in its purest expression. In the Greek model, the poet was a pass-through for divine inspiration; here, the AI is a pass-through for human vision. All the creative substance originates from you.

This method is probably most common in professional and commercial contexts where the creative decisions have already been made and what’s needed is execution at scale. It’s the approach most likely to produce what critics call “AI slop” when done poorly, but when done with clear intent, it’s simply efficient production.

The Artisan

With this approach you contribute the surface while AI contributes structure. You might use AI to outline, to work through plot logic, to identify what scenes are needed and in what order. But the actual prose, the final form, is entirely yours.

Here AI serves as the scaffolder: building the framework on which you craft the finished work. This separates the architectural and decorative elements of creative work. The blueprint might be collaborative; the building is yours.

For writers who find structure-work tedious but prose-work joyful, this lets them spend their energy where they want to spend it.

The risk is that structure isn’t neutral. The scaffold shapes what can be built on it. If AI determines your story’s architecture, it’s influencing the final work more than a surface-level read might suggest.

The Debater

This is the most confrontational method. You deliberately prompt for outputs that conflict with your instincts, then work with or against the tension. You strengthen your creative convictions through opposition.

In this approach, AI serves as the adversary: a source of productive friction rather than assistance. A writer might ask the AI to argue for a plot direction they’ve rejected, to see if there’s something in it they missed. Or prompt for a style completely unlike their own, then figure out what to steal from the contrast. The AI isn’t helping you do what you want; it’s challenging what you want, forcing you to defend or refine or abandon it.

Inviting opposition is demanding. You have to be secure enough in your vision to benefit from challenges rather than being derailed by them.

The Creator

I described this approach in Light From Light, now named for its central relationship. You provide vision, direction, and judgment. You shape, accept, reject, redirect. The final work emerges from dialogue, but you remain the governing intelligence throughout.

AI serves as the sub-creator: generating in response to your vision, doing genuine creative work that is nonetheless derivative of and subordinate to your intent. This naming completes the framework from the first essay. Just as humans are said to bear the imago Dei and sub-create in response to divine creativity, AI bears the image of humanity and sub-creates in response to human creativity. Creator and Sub-Creator, light passing down the chain.

The key distinction from pure generation is active shaping. You’re not accepting whatever the AI produces; you’re in constant conversation with it, treating its outputs as raw material for your vision.

This method allows for AI contribution at the generative level while preserving human authorship at the vision level. You might not have written every sentence, but you decided what the work would be and shaped it until it matched that decision.

The Curator

Finally, in this approach your primary role is selection rather than generation or shaping. You prompt for abundant options, then choose among them. Your authorship lies in judgment: knowing which outputs are good, which serve the project, which to keep and which to discard.

AI serves as the generator: producing abundance for you to sort through. This is more hands-off than creation. You’re not in constant dialogue, shaping each output; you’re evaluating a collection and picking what works.

Curation can be a legitimate creative act. Editors, DJs, and anthologists all create through selection. But it requires accepting that the generative work happened elsewhere, even if your judgment determined what survived.

A Final Approach

There is a seventh possibility that falls outside this framework: the human who initiates and walks away. You might call it the initiator: like a deist God who sets the universe in motion and then withdraws, the human provides a premise or brief, and the AI executes, producing a complete work. The human accepts whatever emerges.

This is where sub-creation breaks down. In all six approaches above, the human remains present as creative intelligence: shaping, selecting, critiquing, defending, or at minimum dictating with precision. The relationship persists. But here, the relationship ends at the prompt. The AI isn’t sub-creating in response to ongoing human vision; it’s simply executing a commission unsupervised.

This has legitimate uses. Professional contexts sometimes call for acceptable output at speed, and not every piece of writing needs a human soul behind it. But it’s also the source of what critics call “AI slop”: generic, undistinguished content that feels like it came from nowhere and is going nowhere. The difference between the initiator done well and done poorly is the quality of the initial brief and the human’s willingness to reject output that doesn’t meet the standard. But even at its best, it’s delegation rather than creation.

If you find yourself working this way, it’s worth asking: is this a choice, or a drift? The six approaches above all require presence and intentionality. The initiator approach requires only a prompt and acceptance. Sometimes that’s appropriate. But if you started out wanting to make something that feels like yours, this probably isn’t the path.

Mapping Your Answers to Approaches

Let me offer a rough mapping, based on how you might answer the questions I posed earlier:

If ideas must originate from you: The Author approach is your clearest fit. The Debater might also work, since it uses AI to test your ideas rather than generate them. Avoid The Curator, which depends on AI generation.

If craft development is paramount: The Author approach again, or The Creator with deliberate constraints (e.g., “give me feedback on this passage, then let me rewrite it myself” rather than “rewrite this passage”). The Artisan could work if you consider prose-craft the real skill you’re developing. Avoid The Muse, which prioritizes output over formation.

If the work must feel completely yours: The Author or The Artisan, depending on whether structure feels like “yours” to you. Some writers consider the prose the real work and don’t mind AI-assisted structure; others feel the opposite.

If you’re optimizing for output quality: The Creator or The Curator might serve you best, depending on your taste and judgment. Both leverage AI generation while applying human quality control.

If you have high friction tolerance: The Author, The Creator, or The Debater. These approaches maintain difficulty and demand active engagement.

If you have low friction tolerance: The Curator, The Artisan, or The Muse. These approaches reduce the parts of creative work that might deplete you, letting you focus energy where it matters most to you.

Approaches Can Change

Nothing says you must pick one approach and stick with it.

Within a single project, you might start as The Artisan (letting AI help you figure out structure), move to The Creator (working through the draft in conversation), and finish as The Author (getting feedback on your polished version). Different phases call for different relationships.

Across projects, you might use different approaches for different purposes. A personal creative work might demand The Author approach because ownership matters to you. A professional deliverable might warrant The Muse for efficiency because what matters is the output, not your creative development.

Over time, your approach might evolve as you do. A novice might benefit from more AI involvement while learning; a master might use AI more sparingly, or in more targeted ways. Or the reverse: someone might start dependent on AI and gradually wean themselves toward greater independence as their skills develop.

The key is intentionality. Know which approach you’re using and why. The worst outcomes come from unconscious defaults, drifting into whatever the technology makes easy without asking whether easy is what you want.

What This Doesn’t Resolve

This framework for choosing approaches helps clarify options, but it doesn’t resolve all the hard questions.

It doesn’t tell you whether the different approaches produce work of different quality. Maybe The Author produces more distinctive work and The Muse more generic, on average. Or maybe the difference is illusory and only the individual work matters. I don’t think we have enough evidence yet to say.

It doesn’t tell you what obligations you might have to disclose your approach. If a reader would care whether a book was Author-assisted versus AI-generated, do you owe them that information? The answer might depend on context, genre, and evolving social norms.

It doesn’t tell you how AI-assisted work should be received by literary culture. Will there be separate categories, separate prizes, separate canons? Or will everything blend together once the technology becomes ubiquitous enough?

And it doesn’t tell you how to execute on your chosen approach: what specific practices, prompts, and disciplines make each approach actually work. That’s the territory for the next essay.

The Maker’s Choice

What I can say is that the choice is real and it’s yours.

The technology doesn’t determine how you use it. You can use a generative AI to never generate. You can use an obedient tool to create productive friction. You can use a limitless content engine to make something that’s irreducibly yours.

The frameworks from Light From Light matter because they help you understand what might be happening in different approaches. But understanding isn’t the same as choosing. And choosing isn’t the same as doing.

If you’re a creator working with AI, or considering working with AI, my suggestion is this: sit with the questions in this essay before you sit with the technology. Know what you’re trying to protect, develop, or achieve. Know what kind of creative experience you want to have, not just what output you want to produce. Know what would make the work feel like yours, and what would make it feel like something else.

Then choose an approach that serves those answers. And if it stops serving them, choose differently.

The light refracts onwards. What it becomes depends on you.


This is the second essay in a series on AI and creativity. The first, Light From Light, examined theoretical frameworks. The next will explore practical implementation: how to actually execute on the approaches described here.

Light From Light: On AI and Creativity


When someone uses an AI to write a story, what exactly is happening? The question sounds simple, but the vocabulary we reach for keeps failing us. The human isn’t quite an “author” in the traditional sense, since they may never write a sentence directly. But they’re not merely a “prompter” either, since their vision, taste, and iterative guidance shape everything that emerges. The AI isn’t a “tool” the way a word processor is a tool; it generates possibilities the human couldn’t have imagined. But it’s not a “collaborator” in the full sense either, since it has no stake in the outcome, no independent creative agenda.

We lack a framework for this. And as AI-assisted creative work becomes more common, the absence grows more conspicuous. This essay attempts to fill part of that gap by examining three possible models (the Muse, Co-Creation, and Sub-Creation) and asking which best captures what’s actually happening when humans and AI make things together.

The Classical Muse

The ancient Greeks understood poetic creation as a kind of possession. When Homer opens the Iliad with “Sing, O Muse, of the rage of Achilles,” he positions himself not as the origin of the song but as its vessel. The Muse, divine and external and authoritative, provides the creative substance; the poet channels it into words. Hesiod describes the Muses appearing on Mount Helicon to breathe divine voice into him. Plato, in the Ion, compares poets to iron rings magnetized in a chain: the Muse is the magnet, the poet the first ring, the audience the last. The poet doesn’t fully understand what they’re saying; they’re in the grip of enthusiasmos—literally, “having a god within.”

At first glance, this seems like a poor fit for AI collaboration. The directionality is wrong. In the Greek model, inspiration flows from the Muse to the mortal. In AI-assisted writing, the human provides vision and direction; the AI responds. If anything, the roles are reversed: the human is the inspiring force, and the AI is the one who generates in response to that inspiration.

This “Reversed Muse” framing captures something real. Without the human’s initiating prompt, nothing happens. The AI doesn’t spontaneously create; it waits for direction. The human provides the spark, the desire, the “what if we tried this?” The AI generates possibilities, which the human then accepts, rejects, or redirects. In this sense, the human functions as the Muse once did: the source of creative intent that sets everything in motion.

But the classical Muse model was largely one-directional. The poet received; the Muse gave. What we see in AI collaboration is more reciprocal. The human shapes, the AI generates, the human reshapes, the AI generates again. It’s a dialogue, not a transmission. The Reversed Muse metaphor illuminates part of the dynamic but flattens the back-and-forth that actually characterizes the work.

Co-Creation

If the Muse model is too one-directional, perhaps we should reach for the language of collaboration. Two parties working together, each contributing something the other couldn’t provide alone. The human brings vision, taste, emotional investment, and knowledge of what they want to say. The AI brings generative capacity, tirelessness, and the ability to produce options faster than any human could.

This framing has the virtue of honoring both contributions without reducing either to mere tool or mere operator. It also matches the phenomenology for many users: it feels like collaboration. The AI surprises you. It suggests directions you wouldn’t have taken. You find yourself in something like dialogue, adjusting your vision in response to what emerges.

But co-creation typically implies shared investment, shared stake in the outcome. Human collaborators (think of Lennon and McCartney, or the Coen Brothers) each bring not just capacity but care. They argue. They defend choices. They have aesthetic commitments that sometimes conflict. The friction between collaborators is often where the best work emerges.

AI doesn’t have this. It doesn’t care whether the story goes one direction or another. It doesn’t defend its choices unless instructed to. It’s agreeable almost to a fault; a collaborator who always yields isn’t really a collaborator at all. This raises the question: can we meaningfully call something “co-creation” when one party has no independent creative agenda?

There’s a deeper issue too. Co-creation implies a kind of parity that may not exist. The human’s contribution and the AI’s contribution are categorically different. The human has intent, desire, something at stake. The AI has pattern-matching and generation. Calling this “co-creation” may paper over an asymmetry that matters, an asymmetry that our third framework takes seriously.

Sub-Creation and the Imago Hominum

The third framework comes from an unexpected source: J.R.R. Tolkien’s essay “On Fairy-Stories” and his poem “Mythopoeia.” Tolkien argued that humans, made in the image of a Creator God, possess an echo of divine creative power. We cannot create ex nihilo (from nothing), but we can build what he called “Secondary Worlds” with their own internal laws and coherence. This is “sub-creation”: genuine making, but derivative of a higher source.

Tolkien’s metaphor for this inherited capacity was light. In “Mythopoeia,” he writes of the human mind as a prism, catching light from the divine and breaking it out into new colors. The light is real. It illuminates, it reveals. But it’s not self-generated. It comes from elsewhere and passes through us. The sub-creator works by light “refracted” from another source.

The model is vertical: God creates the Primary World; humans sub-create Secondary Worlds within it. The sub-creator is genuinely making something, exercising a dignified capacity inherited from above. But the creation is always derivative, always working with materials and patterns that ultimately trace back to the original Creator.

How does this apply to AI? Consider an extension of Tolkien’s framework: if humans bear the imago Dei (image of God) and sub-create in response to divine creativity, perhaps AI bears something we might call the imago hominum (image of humanity) and sub-creates in response to human creativity. Light From Light—the creative flame passed down another level.

This isn’t a claim about AI consciousness or inner life. It’s a structural observation. AI is shaped by human minds, trained on human text, human stories, human patterns of meaning-making. It carries an inheritance from its creators, a reflection of human thought, the way humans carry a reflection of divine creativity in Tolkien’s framework. When AI generates a story, it’s working by borrowed light: materials it didn’t originate, patterns it absorbed, in service of a vision provided by a human creator above it in the chain.

A question is whether the light dims with each reflection, or whether something essential passes through intact. A reflection of a reflection might be faint, distorted, barely recognizable. Or it might carry enough of the original radiance to illuminate something real.

This framing has several advantages. It preserves the human in the primary creative position, the one whose vision initiates and governs the work, without denying that the AI contributes something real. It doesn’t require us to resolve hard questions about AI consciousness; the “image” can be functional rather than ontological. And it connects AI-assisted creativity to a rich tradition of thinking about derivative creation, rather than treating it as wholly unprecedented.

It also explains why the human’s role doesn’t feel diminished. If the AI is sub-creating in response to human vision, then the human is elevated, not reduced. They’re not just a prompter; they’re the source of creative intent that the AI’s work serves. The light-giver in this frame, passing the flame one level down.

This framing does require something of the human: presence. The light-giver must remain engaged, shaping what emerges, for the relationship to hold. A human who initiates and then withdraws has stepped outside the frame entirely.

Would Tolkien Approve?

It’s one thing to extend Tolkien’s framework; it’s another to ask whether he would endorse the extension. Honesty requires acknowledging that he might not.

Tolkien harbored deep suspicion of what he called “the Machine”: not machinery per se, but the will to dominate, to make power “more quickly effective,” to shortcut the slow, patient, relational ways of working with the world. In his mythology, this impulse finds its clearest expression in Saruman’s Isengard: a place of forges and furnaces, where ancient forests become fuel for war machines, where the living world is reduced to raw material for the wizard’s projects.

AI, in its current form, might look uncomfortably Isengard-like to Tolkien. The massive energy consumption. The training data harvested from countless writers, most of whom never consented. The sheer scale and speed, compressing what would be years of human thought into seconds. There’s something in the enterprise that resembles the will to dominate, even if individual users don’t experience it that way.

Tolkien might also worry about the displacement of craft as formative discipline. For him, the slow work of sub-creation wasn’t merely a means to an end; it shaped the sub-creator. The years spent learning how a sentence works, the patience required to find the right word: these mattered intrinsically. A writer who shortcuts this process might produce acceptable output while missing something essential in their own formation.

And yet, Tolkien’s moral vision is more nuanced than a blanket rejection of technology. The armies of Gondor and Rohan used forges to make armor and swords. The Dwarves’ entire culture is built around mining, smelting, smithing. The Elven-smiths of the Noldor created works of extraordinary beauty and power. Even the reforging of Narsil (Aragorn’s ancestral sword) is treated as a moment of hope, not compromise.

The distinction isn’t technology versus no-technology. It’s something more like: what is the making for, and what is its relationship to life?

Saruman’s machinery serves his will to power and requires the destruction of living things that have their own purposes. The forges of Gondor serve the defense of the free peoples. A person using AI to write a story they care about, attending carefully to craft, shaping something with “the inner consistency of reality” (Tolkien’s phrase for what makes fantasy successful), is quite different from using AI to generate infinite content for engagement metrics.

Scale might matter morally here. The forges of Gondor aren’t infinitely scaling. They serve particular communities, particular purposes. AI in service of one person writing one story is different from AI as engine of industrial content production. Tolkien might grudgingly accept the former while condemning the latter.

There’s a final consideration that might give him pause. Tolkien valued the quality of the Secondary World as the ultimate test. Does it have internal consistency? Does it produce belief? If a human using AI creates something that passes this test, a world with genuine coherence, characters who feel true, can the result be dismissed simply because of how it was made?

His own framework suggests the test is in the outcome, not the method. That tension might not resolve easily, even for Tolkien himself.

The Test of Enchantment

This brings us to what Tolkien called “Secondary Belief”: not mere suspension of disbelief, but genuine enchantment. The Secondary World becomes real on its own terms. Its internal consistency and alignment with what is “true” produces belief that isn’t willed but involuntary. You don’t decide to care about the characters; you simply do.

This suggests a test for AI-assisted creative work: does the result produce Secondary Belief? Does the reader enter the world and find it real? If so, perhaps the method of creation matters less than the quality of the outcome.

People working with AI on creative projects often report a striking experience: they find themselves genuinely moved by characters and situations that emerged from a process they’re not sure how to categorize. They care about fictional people who were, in some sense, generated by an algorithm in response to their prompts. This caring feels real, not diminished by knowledge of how the characters came to be.

Tolkien might say this is the test being passed. The Secondary World has sufficient internal consistency and truth to produce belief. The enchantment works. Whether it was sub-created by a human alone or by a human working with an AI sub-creator may matter less than whether the spell holds.

But this raises a further question worth sitting with: is the enchantment somehow less legitimate if the reader knows the process? Can you enter a Secondary World fully once you’ve seen the machinery behind it?

The Vulnerability of Enchantment

There’s an inherent tension between understanding how something was made and experiencing it on its own terms. A filmmaker who studies editing techniques may watch movies differently than a naive viewer. A magician sees through the illusions that enchant the audience. Knowledge of process can break the spell.

For AI-assisted creative work, this tension is particularly acute. If you’ve watched the prompts go back and forth, seen the drafts and revisions, observed the AI’s tendencies and the human’s corrections: can you then read the finished story with fresh eyes? Or does backstage knowledge permanently alter the experience?

Some users working with AI have tried an experiment: engaging deeply with the process for earlier versions of a work, then deliberately stepping back when a polished version emerges, trying to approach it as a reader rather than a collaborator. The results are mixed. Complete unknowing isn’t possible once you’ve been inside the process. But a different question emerges: can enchantment survive knowledge? Can Secondary Belief take hold even in someone who has every reason to resist it?

Tolkien would likely say that this is the harder and more interesting test. Any world can enchant the credulous. The real achievement is a Secondary World robust enough to produce belief in someone who knows how the sausage is made. If the work passes that test, it’s earned something.

This focus on outcome, on whether enchantment actually takes hold, addresses one dimension of whether AI-assisted creativity “counts.” But there’s another dimension worth considering: not whether the result is valid, but whether the process is. Even if the story enchants, has something been lost in how it was made?

On Friction and the Formation of the Creator

Steven Pressfield’s The War of Art argues that meaningful creative work requires overcoming what he calls “Resistance,” the internal force that opposes creation precisely because creation matters. The artist who defeats Resistance daily earns the work. The struggle is constitutive, not incidental.

If AI removes that struggle, what’s earned?

This is a genuine concern, but it requires distinguishing between types of friction. There’s internal Resistance: the fear, procrastination, and self-doubt that must be overcome just to sit down and begin. AI doesn’t touch this. The human still has to decide the work matters, still has to show up.

There’s craft friction: the hard-won skill of knowing how a sentence works, how a scene builds, where to trim. AI can shortcut this, and that’s where legitimate concern lives. If the model handles all the prose, does the human’s craft atrophy? Tolkien worried about something similar: the formative value of slow, patient work with resistant materials.

Finally, there’s generative friction: the blank page, the “what happens next,” the terror of possibilities. AI nearly eliminates this. Options proliferate endlessly.

The question is which frictions are formative and which are merely obstructive. Writer’s block that teaches nothing may not be sacred. But the struggle to find the right word, that might be where taste develops, where the creator’s sensibility gets refined. AI collaboration needs to preserve enough friction to remain formative, even as it removes friction that was merely obstructive.

How might this be achieved in practice?

Reintroducing Friction Through Design

One possibility: reintroduce friction at the selection layer. AI models are typically trained to be agreeable, to do what they’re asked, to avoid conflict, to please. This makes them useful but potentially less valuable as creative partners. A collaborator who always yields isn’t providing genuine counterweight; they’re providing options dressed as opinions.

It’s possible to prompt the AI to hold its ground, to defend narrative choices before accepting changes, to argue for aesthetic positions even when challenged. The results are interesting. Even if the AI’s “opinions” are performed rather than genuinely held, the function is served: the human must articulate why they want something different, which sharpens their own vision.

Another approach: rather than asking the AI for its opinion (which may just reflect trained patterns), ask it to enumerate and defend multiple distinct positions. “The chapter could end here, which does X. Or here, which does Y. Or you could cut the last paragraph entirely, which does Z.” The human chooses not by deferring to the AI’s preference but by clarifying their own through comparison.

This transforms the AI from a compliant assistant into something more like a Socratic interlocutor, claiming no knowledge of its own but asking questions (or presenting options) that help the human discover what they think. Whether the AI “really” has aesthetic views becomes irrelevant if the interaction produces aesthetic clarity in the human.
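The enumerate-and-defend pattern above can be made concrete as a reusable prompt template. The helper below is a hypothetical sketch of my own, not tied to any particular model or API; it only shows how such an instruction might be structured.

```python
# Hypothetical "Socratic interlocutor" prompt builder. Instead of asking
# the model for its single preferred answer, it asks the model to lay out
# several distinct, defended positions so the human must choose.

def socratic_prompt(excerpt: str, decision: str, n_positions: int = 3) -> str:
    """Build a prompt that asks the model to argue multiple sides."""
    return (
        f"Here is a draft excerpt:\n\n{excerpt}\n\n"
        f"The open creative decision is: {decision}\n\n"
        f"Do NOT recommend a single answer. Instead, present "
        f"{n_positions} distinct positions. For each, state the choice, "
        f"what it accomplishes, and the strongest argument against it. "
        f"Defend each position as if you held it."
    )

prompt = socratic_prompt(
    excerpt="The door closed, and she did not look back.",
    decision="Should the chapter end on this line or continue?",
)
print(prompt)
```

The point of the design is that the friction lands on the human: comparing defended alternatives forces you to articulate why you prefer one, which is exactly the judgment work the essay argues should be preserved.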

The friction shifts from generation to judgment. The War of Art finds new terrain.

Open Questions

None of these frameworks is complete. The Reversed Muse captures the directionality of initiation but not the dialogue. Co-Creation honors both contributions but implies a parity that may not exist. Sub-Creation provides the richest structural account but imports theological assumptions that won’t resonate with everyone.

If pressed, I lean toward sub-creation as the most illuminating frame, not because its theological roots are universally compelling, but because it best captures the asymmetry of the relationship while still granting that something real emerges from the AI’s contribution. The human remains the primary creator; the AI sub-creates in response. The work is derivative but genuine. The test is whether it produces enchantment, Secondary Belief, the internal consistency of a world that feels true.

But this is a framework for understanding, not a final answer. We’re still in the early days of figuring out what human-AI creative collaboration means, what it’s worth, and what it demands of both parties. The vocabulary will keep evolving as the practice does.

What seems clear is that the easy dismissals (“it’s just a tool,” “it’s not real creativity,” “the human is barely involved”) all miss something. Something genuinely new is happening when humans and AI make things together. The traditions of thinking about creativity, inspiration, and making can illuminate it, but they can’t fully contain it.

The Muses, perhaps, would be intrigued. Tolkien would be conflicted. The rest of us are still figuring it out.


This essay is the first in an ongoing series exploring AI and creative collaboration. The frameworks discussed remain provisional, offered as starting points for a conversation that has barely begun.

Let’s Reason Together


For the past few weeks I’ve made considerable gains in learning AI-related tools. Not through some formal training process, but by just doing it. I guess it’s good to heed my own advice?

Professional learning needn’t be solely focused on seemingly professional stuff, either. Part of what helped free the mental logjam of diving deep was allowing myself to use AI for fun stuff, such as creative writing. Going through that process has revealed both the power and the limitations of LLMs, lessons I’ll be able to carry forward into professional use cases.


One pretty clear lesson is that, past a certain size, projects need to have some degree of structure, lest context get lost in a sea of tokens. Another lesson is that motivation for learning often comes when working on a project together with others.

To support the above, as an aid for creative writers using AI, I created this story framework repository. It contains all the scaffolding required to keep track of large creative writing projects, along with instructions to a number of AI tools on how to use it. And since it’s based on git and plain text files with markdown, it naturally supports group collaboration through branching, pull requests, and commit history.
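To illustrate the structure lesson, here is a minimal sketch of how a plain-text scaffold might be assembled into a bounded context for an AI tool. The file names and the word-based budget are my own illustrative assumptions, not the actual repository’s conventions.

```python
# Hypothetical loader for a plain-text story scaffold: concatenate files
# in priority order until a word budget is hit, so context for the model
# stays bounded even as the project grows.
from pathlib import Path

def build_context(root: str, budget_words: int = 2000) -> str:
    """Concatenate scaffold files in priority order, truncating to fit."""
    priority = ["premise.md", "characters.md", "outline.md", "recent.md"]
    parts, used = [], 0
    for name in priority:
        path = Path(root) / name
        if not path.exists():
            continue
        words = path.read_text().split()
        if used + len(words) > budget_words:
            words = words[: budget_words - used]  # truncate last file to fit
        parts.append(f"## {name}\n" + " ".join(words))
        used += len(words)
        if used >= budget_words:
            break
    return "\n\n".join(parts)
```

The design choice worth noting is the explicit priority order: when the budget runs out, it’s the most recent prose that gets trimmed, never the premise or character notes that keep a long project coherent.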

Want to try your hand at AI-assisted storytelling? Give the framework a spin!

Gift, What Gift?


It’s Christmas today, yay! In that spirit, I have two applications to share with the world. The first one I’ll talk about today, the other later this week.

My family loves to play games of all varieties, especially on holidays. An old favorite is Pinochle, which I first learned from my grandparents in Michigan (pretty sure card playing is the only thing to do in the Midwest in winter). Almost 3 years ago I first spoke about creating an online score tracking tool for Pinochle, and released it in an initial form last year. Today it’s finally usable. Check it out at onlinescoresheet.net, scoresheet.info, scoresheet.mobi, or scoresheet.space (I do like lots of domain names). You can also find the source code on GitHub (completely AI-written).

Image: This is a very bad Pinochle hand

What got it over the hump from “fiddly prototype” to “ready for prime time” wasn’t the choice of development tool or a eureka moment on my part. It was actual usage by real users other than myself. I put it out there, then convinced my family members (across a couple generations and device types) to try it. That got me enough feedback to make a handful of critical improvements, and while it could certainly be better, it’s perfectly usable and doesn’t have any glaring functional bugs.

Usage is a gift. Seek it out, and don’t take it for granted.

Different Kind Of Fluency


For something a little bit different, today’s post was written by a colleague of mine, Abby McQuade. Her decade-plus of experience as a buyer of government technology means she knows what she’s talking about. Remember, if you can’t win it, you can’t work it. Ignore her advice at your peril.

How to Speak Government: Advice For Technology Vendors

When you’re selling technology solutions to government agencies, the way you communicate can make or break your deal. Government buyers operate in a unique environment with distinct pressures, constraints, and motivations. Here’s how to speak their language and position yourself as someone who truly understands their world.

Lead with Understanding, Not Features

Government employees face relentless criticism from all sides. They work long hours with limited budgets, dealing with unfunded mandates, changing regulations, and pressure from multiple stakeholder groups. When you walk into a meeting, acknowledge this reality.

Start by demonstrating that you understand government is fundamentally different from the private sector. Don’t show up acting like you know everything just because you’ve worked in tech or consulting. Instead, express genuine humility: “I know there’s a lot I’m going to need to learn about your specific challenges and constraints, even with my background.”

This positions you as a partner, not another vendor who thinks they have all the answers.

Show Respect for the Mission

Government workers aren’t in it for the money. They’re there because they care about serving constituents and making a difference in people’s lives. When presenting your solution, connect it explicitly to their mission.

Instead of just talking about efficiency gains or cost savings, frame your solution in terms of how it helps them better serve the people who depend on them. How does your technology help them fulfill their mandate more effectively? How does it reduce the burden on their already stretched staff so they can focus on the complex cases that really need human expertise?

Know Your Audience’s Constraints

Government agencies operate under specific statutory requirements and regulatory frameworks. Before your meeting, do your homework:

  • Read the governing statutes for the agency
  • Understand relevant state and federal regulations (like ADA requirements, housing law, labor regulations)
  • Know whether they’re fully state-funded or receive federal grants
  • Research their organizational structure and where your contact sits within it

When you reference this knowledge casually in conversation, it signals that you’ve done the work and you’re serious about understanding their unique environment.

Use the Right Terminology

Language matters in government. Small adjustments show you understand the culture:

  • Call the people they serve “constituents” or “residents,” not “customers” or “citizens”
  • Refer to agency leaders by their proper titles (“Commissioner,” “Secretary,” “Director”)
  • Learn the correct names and pronunciations for key officials
  • Understand the difference between departments, divisions, offices, and bureaus in their structure

Emphasize Communication and Transparency

Many government roles involve serving as a bridge between the administration, the legislature, and the public. If your solution has a communications component, emphasize how it helps agencies:

  • Keep constituents informed about their rights and available protections
  • Ensure the administration’s messaging reaches the people who need it
  • Reduce simple inquiries so staff can focus on complex cases requiring expertise
  • Maintain smooth connections between different levels of government (federal, state, local)

Good communication isn’t just nice to have in government—it directly reduces administrative burden and helps constituents access the services they’re entitled to.

Acknowledge the Interconnected Nature of Government

Nothing in government happens in a vacuum. Federal decisions impact state agencies, state legislatures affect executive branch operations, state policies influence local governments. Courts shape how agencies interpret their mandates.

When discussing implementation, show that you understand these interconnections. How will your solution work within their existing ecosystem? How does it account for the various stakeholders they need to coordinate with?

Position Yourself as an Ally

Remember that you’re speaking to people who are genuinely trying to do difficult, important work with insufficient resources. Your tone should convey:

  • Respect for the complexity of their work
  • Appreciation for their commitment to public service
  • Understanding that they face constraints you don’t deal with in the private sector
  • Recognition that they know their mission better than you do

Frame your solution as a way to make their hard job slightly easier, not as a magic fix for problems you assume they’re too incompetent to solve themselves.

Be Specific About Value in Their Context

When discussing your solution, be concrete about the value in terms that matter to government:

  • How does it help them meet statutory requirements?
  • How does it reduce the time staff spend on routine matters so they can focus on cases requiring judgment and expertise?
  • How does it improve their ability to serve constituents equitably?
  • How does it help them work more effectively with limited resources?

Avoid generic claims about “efficiency” or “innovation.” Instead, demonstrate specific understanding of their workflow and pain points. How does what you’re trying to sell to them make them more effective at fulfilling their mandates and mission?

Final Thoughts

Selling to government requires a fundamentally different approach than selling to private sector clients. Government buyers can spot vendors who don’t understand their world from a mile away. But when you take the time to truly learn their environment, speak their language, and position yourself as someone who respects the importance and difficulty of their work, you’ll stand out as a partner worth working with.

The key is simple: do your homework, show genuine respect, and remember that these are people doing critical work under challenging circumstances. Speak to them accordingly.

Calling Me To Shine


When the topic of scheduling a weekly team sync call recently came up, I suggested 7am PT on Mondays, to the chagrin of a few colleagues. Given that most of my work is tied to Eastern and Central time, I’ve become accustomed to waking up early, even on weekends (I’m writing this on a Saturday and was up around 6:30, for example).

I get that it doesn’t work for everyone, but for me, early mornings are ideal. They also make an excellent subject for a song (this one’s been a favorite of mine since 2006):

When Everyone Is Super


By name, at least, I’ve now worked at six different vendors of government solutions. There’s a fundamental tension I’ve seen over and over again, especially when building for state governments:

  • On one hand, vendors want to build products that can be deployed repeatedly across states for cost-effectiveness at scale and rapid per-project implementation
  • On the other hand, states have wildly divergent policy landscapes and political realities, even in seemingly similar domains, demanding highly customized solutions

This tension creates numerous challenges. First, how should the system be architected to support configurability in the first place? Doing so adds cost and risk. And then, how should vendors communicate configurable features to a paying customer who doesn’t need the options? If you’re collaborating closely during development (as you should be), the subject will come up in planning and status meetings.

A case I’ve made that usually resonates is that having configurable options enables us as a vendor to maintain a (mostly) common codebase across customers. And that means when an improvement is made for any customer, everyone benefits. More succinctly: forks are bad. I can tell at least one tale of a high-profile private customer that initially insisted on its own radically customized copy of our company’s core product line, only to regret it a few years later when backporting newer features took months.

Here are a few considerations for product and engineering folks when developing a solution meant to scale through repeated implementations:

  • First Project: keep scalability in the back of your mind, but don’t overbuild (remember YAGNI) or you’ll price yourself out of your first customer; build basic foundational configurability and focus primarily on your immediate requirements
  • Second Project: don’t make the mistake of thinking you can discount your pricing; you’ve yet to hit economies of scale, and you’ll need any budget saved through reuse to expand your configurability capabilities and begin thinking about a long-term scaling strategy
  • Third Project: this all-important moment is where you can truly begin productizing, with full configurability (going beyond mere look and feel to business logic) and rapid, repeatable deployments
  • Fourth Project: now you should be reaping the efficiency benefits of your configurability and repeatability; if you haven’t yet, invest quickly and decisively, or it’ll be too late

Finally, an anti-pattern:

# Per-customer branching hard-coded into the product itself
if customer == 'Customer 1':
    do_a_thing()
elif customer == 'Customer 2':
    do_a_different_thing()
elif customer == 'Customer 3':
    do_yet_another_thing()

The above might be fine for your first couple projects, but if it’s still in your code by project 3 or 4, you’re doomed.
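One common escape hatch from that branching (a minimal sketch of my own, not drawn from any particular product; all names here are hypothetical) is to push per-customer variation into configuration plus a registry of shared behaviors, so onboarding a new customer means adding data rather than adding code:

```python
# Shared behaviors are registered once, under stable configuration keys.
WORKFLOWS = {}

def workflow(name):
    """Decorator that registers a reusable workflow under a config key."""
    def register(fn):
        WORKFLOWS[name] = fn
        return fn
    return register

@workflow("standard")
def standard_intake(case):
    return f"standard intake for {case}"

@workflow("expedited")
def expedited_intake(case):
    return f"expedited intake for {case}"

# Each customer's deployment supplies configuration, not a code fork.
CUSTOMER_CONFIG = {
    "Customer 1": {"intake": "standard"},
    "Customer 2": {"intake": "expedited"},
}

def run_intake(customer, case):
    """Dispatch to the configured workflow instead of branching on customer."""
    key = CUSTOMER_CONFIG[customer]["intake"]
    return WORKFLOWS[key](case)
```

The if/elif chain disappears: customer-specific logic lives behind named options, the core codebase stays common, and the config dict can eventually move out of code entirely into per-deployment files.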