<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[ITNEXT - Medium]]></title>
        <description><![CDATA[ITNEXT is a platform for IT developers & software engineers to share knowledge, connect, collaborate, learn and experience next-gen technologies. - Medium]]></description>
        <link>https://itnext.io?source=rss----5b301f10ddcd---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>ITNEXT - Medium</title>
            <link>https://itnext.io?source=rss----5b301f10ddcd---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 11 Apr 2026 23:11:05 GMT</lastBuildDate>
        <atom:link href="https://itnext.io/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Flowcharts that require time travel to execute]]></title>
            <link>https://itnext.io/flowcharts-that-require-time-travel-to-execute-c27076333c92?source=rss----5b301f10ddcd---4</link>
            <guid isPermaLink="false">https://medium.com/p/c27076333c92</guid>
            <category><![CDATA[workflow]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[software-architecture]]></category>
            <dc:creator><![CDATA[Ossi Galkin]]></dc:creator>
            <pubDate>Sat, 11 Apr 2026 20:10:54 GMT</pubDate>
            <atom:updated>2026-04-11T20:10:53.192Z</atom:updated>
            <content:encoded><![CDATA[<p>A flowchart is not pseudocode. It is a programming language with arrows instead of braces, and it obeys exactly the same structural rules. I found this out the hard way building integrations on Frends, where I work as an architect.</p><p>You don’t find this out from documentation. You find it out when your workflow tool throws an error that makes no sense. You stare at a perfectly reasonable diagram for ten minutes, and then start randomly moving connectors until it compiles.</p><p>There is a better way to understand what is actually happening. And it involves time travel.</p><p>You want to test something with two unrelated if conditions. And do something if either of those fails. Since the conditions are unrelated, you draw two exclusive decisions, i.e., if statements. Everything connects. You save and compile. Then the tool throws an error.</p><p>The error message reads: Process compilation failed. All branches of the decision node Gateway_1e95hy7 must join at the same node.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*yAikjLCy1N3VPHVX6ecYpg.png" /><figcaption>The error message reads: Process compilation failed. All branches of the decision node Gateway_1e95hy7 must join at the same node. <em>Screenshot by the author.</em></figcaption></figure><p>The compiler refuses. The error message is gibberish. There are no loops anywhere in this Process.</p><h3>What the compiler is actually doing</h3><p>When your workflow tool saves a process, it doesn’t just wire up a flowchart. It generates structured C# code, real if/else blocks with a single entry point and a single exit point per branch. That’s not a quirk of the platform. That’s the fundamental constraint of structured programming.</p><p>For any decision gateway to compile cleanly, all its branches must converge at the same downstream node. 
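</p><p>The shape the compiler needs is the ordinary, closed block of structured code. A sketch in Python for illustration (the platform itself generates C#, and the names here are made up):</p>

```python
def process(order_is_valid: bool) -> str:
    # A decision gateway becomes an if/else with a single entry and a single exit.
    if order_is_valid:
        result = "approve"   # yes-branch
    else:
        result = "reject"    # no-branch
    # Both branches converge here: the block is closed before execution continues.
    return result
```

<p>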
The compiler needs to close the block it opened.</p><h3>The place where the compiler would need a time machine</h3><p>Visually, the Process looks reasonable. Two different paths leading to the same piece of work. That happens all the time in real business logic.</p><p>But look at the pseudo-code the compiler would have to generate:</p><pre>if (FirstCondition) <br>{<br>    if (SecondCondition) <br>    {<br>        //Return from Process, both conditions are passed <br>    } <br>    else <br>    {<br>        // Do the error handling in the code shape.<br>    }<br>} <br>else <br>{<br>  // We would need to jump into the else of the inner if.<br>  // But that branch was never entered.<br>}</pre><p>To execute the diagram as drawn, the runtime would need to jump from the second else block to the first else block, a code path it had already decided not to enter.</p><p>That’s time travel. You’d need to go back and enter a door you already walked past.</p><p>In old-school C, you could do this with a goto. Modern structured languages don&#39;t allow it. This goes back to Dijkstra <a href="https://en.wikipedia.org/wiki/Considered_harmful">arguing</a> for eliminating goto entirely in 1968, and the industry agreed. The constraint isn&#39;t new. It just got a flowchart UI on top.</p><h3>How to fix it</h3><p>The fix is straightforward. Sometimes reordering helps. Otherwise you need unambiguous branches: each ‘no’ path gets its own code block.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/992/1*JmnXhNarsAGmPeoyjBRz1A.png" /><figcaption>The fixed version. Each no-branch has its own separate code shape. Screenshot by the author.</figcaption></figure><h3>Why does the error message never explain this?</h3><p>The validation rule that throws this error, in Frends and likely in any tool that compiles flowcharts to structured code, isn’t specifically about time travel. 
It guards against all structural impossibilities, including infinite loops, unreachable branches, and yes, the kind of cross-branch jumping that would require a time machine. Unfortunately, the error message is the same for all cases.</p><h3>The bigger lesson</h3><p>The usual response is to start randomly moving connectors until it compiles. That works, eventually, but you don’t learn anything.</p><p>The real lesson is that a visual process diagram and executable code are not the same thing. A flowchart looks more flexible than code. Once it becomes executable, it isn’t. The constraint isn’t new. It just got a flowchart UI on top.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c27076333c92" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/flowcharts-that-require-time-travel-to-execute-c27076333c92">Flowcharts that require time travel to execute</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Saved Prompts Are Dead. Agent Skills Are the Future]]></title>
            <link>https://itnext.io/saved-prompts-are-dead-agent-skills-are-the-future-7815f23f5183?source=rss----5b301f10ddcd---4</link>
            <guid isPermaLink="false">https://medium.com/p/7815f23f5183</guid>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[agentic-development]]></category>
            <category><![CDATA[coding]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Sat, 11 Apr 2026 20:09:24 GMT</pubDate>
            <atom:updated>2026-04-11T20:09:22.812Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*n5z8ao_FdG16F5aQ" /><figcaption>Photo by <a href="https://unsplash.com/@onurbuz?utm_source=medium&amp;utm_medium=referral">Onur Buz</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>Saved prompts are dead. Agent Skills are the next step.</p><p>If you’ve been around for a while, you probably have a file full of bash one-liners.</p><p>Small scripts or commands you saved because they solved a problem you didn’t want to automate properly.</p><p>When coding agents arrived, prompts became the new one-liners.</p><p>Useful prompts were saved, reused, and eventually turned into “prompt files”, then slash commands like /do-something.</p><p>But that model has already evolved.</p><h3>⚙️ Agent Skills</h3><p>Agent Skills are the next iteration.</p><p>At a basic level, a skill looks a lot like a saved prompt: a directory with a markdown file.</p><p>What makes it different is how it’s used.</p><p>Skills include metadata like name and description, allowing agents to discover them.</p><p>Instead of explicitly calling a prompt every time, the agent can determine when to use a skill based on intent.</p><p>This is referred to as progressive disclosure:</p><ul><li>Agent loads skill metadata</li><li>Matches it to your task</li><li>Then loads and executes the full skill when needed</li></ul><p>You can still call skills directly (/, $, @), but you don&#39;t always have to.</p><h3>🧠 More Than Just Prompts</h3><p>The real differentiator is that skills aren’t just prompts.</p><p>They can include reference documentation, templates, and scripts.</p><p>This means you’re no longer just telling the agent what to do.</p><p>You’re giving it tools and context to execute and validate tasks.</p><p>For more complex workflows, it’s often easier to write a script and teach the agent how to use it than to encode everything in a 
prompt.</p><h3>⚠️ A Word of Caution</h3><p>This power comes with risk.</p><p>Skills can include executable logic and tell agents to perform tasks.</p><p>That means a shared skill can contain malicious or unsafe behavior.</p><p>Treat them like any script you install:</p><ul><li>Understand what they do</li><li>Know where they come from</li><li>Review before using (watch out for hidden text or obfuscated instructions)</li></ul><h3>🧠 Final Thoughts</h3><p>Agent skills are a meaningful step forward.</p><p>They let you codify workflows, preferences, and repeatable agent tasks in a way that agents can discover.</p><p>They’re a strong productivity accelerator and a powerful way to capture institutional knowledge in a form agents can actually use.</p><p>(More on that in the next post.)</p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-04-02/"><em>https://bencane.com</em></a><em> on April 2, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7815f23f5183" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/saved-prompts-are-dead-agent-skills-are-the-future-7815f23f5183">Saved Prompts Are Dead. Agent Skills Are the Future</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[I Couldn’t Call My Mom. So I Built a Video Calling App.]]></title>
            <link>https://itnext.io/i-couldnt-call-my-mom-so-i-built-a-video-calling-app-e3ac0f41a6d2?source=rss----5b301f10ddcd---4</link>
            <guid isPermaLink="false">https://medium.com/p/e3ac0f41a6d2</guid>
            <category><![CDATA[webrtc]]></category>
            <category><![CDATA[startup]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[product-design]]></category>
            <category><![CDATA[indie-hacking]]></category>
            <dc:creator><![CDATA[Dima Doronin]]></dc:creator>
            <pubDate>Sat, 11 Apr 2026 15:26:12 GMT</pubDate>
            <atom:updated>2026-04-11T15:26:11.626Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*UpbdrdB4vZX1XnFU3tVhdw.png" /></figure><p>I couldn’t video call my mom. That’s what started all of this. She’s on Android, I’m on iPhone, and suddenly a simple call turned into a twenty-minute back-and-forth about apps, accounts, and “wait, can you just download this one?” It’s 2026. This shouldn’t be a puzzle.</p><p>So I started wondering: what if you could call anyone in the world with just a link? No installs. No accounts. No awkward tech support for your parents before you’ve even said hello.</p><p>I built it. It’s called <a href="https://just-call.app">JustCall</a> (https://just-call.app).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*0lYZAdvJ6OAFGhfq.png" /><figcaption>JustCall MVP</figcaption></figure><h4><strong>The idea took ten minutes. The internet took longer.</strong></h4><p>The concept was simple enough to sketch on a napkin. A browser-based video call, peer-to-peer, zero friction on the receiving end. You send a link. They click it. You’re connected.</p><p>Then I opened the WebRTC documentation and reality set in.</p><p>International WebRTC is quietly brutal. NATs and firewalls snap peer-to-peer connections without warning. UDP gets throttled or blocked in certain regions. And when calls fail, they fail silently. No error. No explanation. Just nothing. You’re left wondering if it’s your internet, their internet, or some invisible wall sitting somewhere between two countries.</p><p>The tutorials make it look clean. The real world is not clean.</p><p>To get JustCall working across borders, behind corporate firewalls, on restrictive mobile networks, I had to go well past what most WebRTC guides cover. TURN server fallback for networks that block direct connections, robust handling across regions, a relay architecture that kicks in automatically when the direct path fails. 
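</p><p>In WebRTC terms, that fallback lives in the ICE configuration: offer a STUN server for the direct path and a TURN relay for networks that block it. A minimal sketch of such a configuration as a plain data structure (server addresses and credentials are placeholders, not JustCall’s actual setup):</p>

```python
# Hypothetical ICE configuration handed to the client's RTCPeerConnection.
ice_config = {
    "iceServers": [
        # STUN: lets peers discover their public address for a direct connection.
        {"urls": ["stun:stun.example.com:3478"]},
        # TURN: relays media when direct peer-to-peer is blocked;
        # the TLS-over-443 variant survives UDP-hostile networks.
        {
            "urls": [
                "turn:turn.example.com:3478?transport=udp",
                "turns:turn.example.com:443?transport=tcp",
            ],
            "username": "demo-user",
            "credential": "demo-secret",
        },
    ],
    # "all" tries the direct path first and falls back to the relay automatically.
    "iceTransportPolicy": "all",
}
```

<p>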
The goal was simple: the call connects, and the person on the other end never has to know how.</p><p>The hardest constraint was also the most important one. It had to work for my mom. Not for a developer on a clean home network. For someone who just clicks a link and expects it to work, the same way a phone call works.</p><p>Two days later, I called her. Crystal clear. No dropped connection. More stable than FaceTime or WhatsApp on a good day.</p><h4><strong>Then strangers started using it.</strong></h4><p>I shared JustCall with friends and family first. They loved it. Then I posted it on LinkedIn. People from around the world started using it. Strangers. That felt incredible.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*aw1NIJjUKHYn21HW9NVS9Q.png" /><figcaption>Cloudflare Analytics</figcaption></figure><p>Then I looked at the data.</p><p>I had connected Amplitude to track behavior, and the numbers humbled me fast. Bounce rate: 70%. Seven out of ten people who landed on JustCall left almost immediately.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*j3suBaQvtElOaaQg" /><figcaption>Web Engagement in Amplitude</figcaption></figure><p>I sat with that number for a while.</p><p>The hard truth is I understood why. When you land on an app and someone asks you to start a video call with no context, no explanation, no reason to trust it, it’s unsettling. It feels like opening a door you weren’t sure you should open. The product worked. The product just wasn’t communicating that it worked, or even what it was.</p><p>I had solved the wrong problem first. I spent two days making the call bulletproof and about ten minutes thinking about what a stranger would feel when they arrived.</p><h4><strong>Trust is a product feature.</strong></h4><p>I went back to basics. The design stays simple; that was intentional from the start. But this time I focused on one thing: clarity.</p><blockquote>What is this? Why should you trust it? 
How does it work?</blockquote><p>Three questions every new user silently asks before they decide to stay or leave. I hadn’t answered any of them. The app just stared back at them and waited.</p><p>Now I have answers built into the product itself. Not a wall of text, not a tutorial, just enough signal to tell someone: this is safe, this is simple, and this is worth one click.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Yjl361PE77_pIKcgGq6Vnw.png" /><figcaption>JustCall — after the redesign</figcaption></figure><p>The redesign is live. I’m watching the numbers. My bet is that trust, built into the product rather than bolted on as an afterthought, will show up in the data. We’ll see if I’m right.</p><p>Today is April 6; almost a week has passed since the redesign was introduced. I see the app being used in every corner of the globe.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*wp8GLRWVPfFbPh0b" /></figure><p>I also added a light theme mode that follows the user’s system settings.</p><p>Building something people actually use turns out to be two separate problems. First you make it work. Then you make people believe it works. I assumed the first one would take care of the second. It doesn’t.</p><p>If you’re building something early, I’d love to know: how do you think about trust in your product? It’s the thing nobody talks about in the tutorials, and it might be the thing that matters most.</p><p>And if you need a video call, try it yourself at <a href="https://just-call.app">just-call.app</a>. Feedback is always welcome.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e3ac0f41a6d2" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/i-couldnt-call-my-mom-so-i-built-a-video-calling-app-e3ac0f41a6d2">I Couldn’t Call My Mom. 
So I Built a Video Calling App.</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Implement SLOs for Your Kubernetes Services (Lessons Learned)]]></title>
            <description><![CDATA[<div class="medium-feed-item"><p class="medium-feed-snippet">Dealing with zero-traffic NaN, gauge-only exporters, Helm template wars, and automating it all with Sloth CLI and Python.</p><p class="medium-feed-link"><a href="https://itnext.io/how-to-implement-slos-for-your-kubernetes-services-lessons-learned-70b614848a23?source=rss----5b301f10ddcd---4">Continue reading on ITNEXT »</a></p></div>]]></description>
            <link>https://itnext.io/how-to-implement-slos-for-your-kubernetes-services-lessons-learned-70b614848a23?source=rss----5b301f10ddcd---4</link>
            <guid isPermaLink="false">https://medium.com/p/70b614848a23</guid>
            <category><![CDATA[kubernetes]]></category>
            <category><![CDATA[prometheus]]></category>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[observability]]></category>
            <category><![CDATA[sre]]></category>
            <dc:creator><![CDATA[Matthieu Treussart]]></dc:creator>
            <pubDate>Sat, 11 Apr 2026 15:21:38 GMT</pubDate>
            <atom:updated>2026-04-11T15:21:36.783Z</atom:updated>
        </item>
        <item>
            <title><![CDATA[The 12-Factor App - 15 Years later. Does it Still Hold Up in 2026?]]></title>
            <link>https://itnext.io/the-12-factor-app-15-years-later-does-it-still-hold-up-in-2026-c8af494e8465?source=rss----5b301f10ddcd---4</link>
            <guid isPermaLink="false">https://medium.com/p/c8af494e8465</guid>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[twelve-factor]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[software-architecture]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[Lukas Niessen]]></dc:creator>
            <pubDate>Sat, 11 Apr 2026 11:49:58 GMT</pubDate>
            <atom:updated>2026-04-11T11:49:57.000Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*szal7oxYFMpiOTr41TSZsw.jpeg" /></figure><p>The <a href="https://12factor.net/">Twelve-Factor App</a> methodology was created around 2011 at Heroku. It’s a set of twelve principles for building software-as-a-service applications that are portable, resilient, and easy to deploy to modern cloud platforms. Heroku was cloud-native before most people even used that term, so it’s no surprise these principles have aged remarkably well.</p><p>But it’s been 15 years. Cloud-native has gone from a buzzword to the default. Kubernetes runs everything. Serverless is mainstream. We’re building AI-powered applications with inference calls, agentic workflows, and vector databases. The world looks different.</p><p>So let’s go through all twelve factors, understand what each one actually says, and put it in the context of where we are right now.</p><h3>I. Codebase</h3><blockquote>One codebase tracked in revision control, many deploys</blockquote><p>(<a href="https://12factor.net/codebase">12factor.net/codebase</a>)</p><p>The idea: there is a one-to-one relationship between your codebase and your app. One codebase, tracked in Git (or whatever version control you use), and from that single codebase you produce deploys for different environments — development, staging, production.</p><p>This still holds up completely. But the picture has gotten a bit more nuanced. In 2011, the mental model was roughly:</p><pre>codebase → deploy</pre><p>Today it’s more like:</p><pre>codebase  ---→  artifact   ---→    deploy<br>  (Git)        (container       (prod, staging, dev)<br>                 or zip)</pre><p>The artifact step is important. You build a container image (or a zip file for serverless), and that artifact is what gets deployed. The same artifact goes to staging and production. 
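</p><p>One way to picture it, with made-up names: the artifact reference is pinned once, and only the configuration varies per environment.</p>

```python
# Hypothetical deploy mapping: one immutable artifact, many deploys.
artifact = "registry.example.com/myapp@sha256:1f8c0d2e"  # fake digest, for illustration

deploys = {
    env: {"image": artifact, "config": f"{env}.env"}  # same image, different config
    for env in ("dev", "staging", "prod")
}

# Every environment runs the identical artifact.
assert len({d["image"] for d in deploys.values()}) == 1
```

<p>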
This is what gives you confidence that what you tested is what you’re running.</p><p>For local development, the picture is slightly different. You usually don’t build a container for every code change — live-reload and hot-module replacement are the norm. That’s fine. The principle is about production deployments, and there it’s followed almost universally.</p><p>One thing worth mentioning in 2026: monorepos. Tools like Nx, Turborepo, and Bazel have made monorepos viable for large organizations. You might have dozens of services in a single repository. This doesn’t violate the codebase factor per se — each service still maps to its own build pipeline and artifact — but it does blur the “one codebase, one app” line a bit. As long as each app within the monorepo has its own deploy pipeline and artifact, you’re fine.</p><h3>II. Dependencies</h3><blockquote>Explicitly declare and isolate dependencies</blockquote><p>(<a href="https://12factor.net/dependencies">12factor.net/dependencies</a>)</p><p>The idea: never rely on implicit system-wide packages. Your app should declare all its dependencies explicitly (think package.json, requirements.txt, go.mod) and use some form of dependency isolation so it doesn’t leak into or depend on the broader system.</p><p>This has become second nature. Package managers handle declaration. Containers handle isolation. When your app runs in a Docker container, the container <em>is</em> the isolation boundary. Everything your app needs is inside that container, explicitly defined in the Dockerfile.</p><p>The original twelve-factor text says something like: “Twelve-factor apps do not rely on the implicit existence of any system tools. Examples include shelling out to ImageMagick or curl.” In a containerized world, this is less of a concern. If your app shells out to curl, and curl is in the container image, that’s fine. It ships with your artifact. 
It’s explicit.</p><p>In serverless environments like AWS Lambda, the execution environment is well-defined too. AWS provides specific runtimes with specific libraries. If your Lambda runs on the Python 3.12 runtime, you know exactly what’s available, and you can bundle anything else.</p><p>Where this factor gets more relevant again in 2026 is supply chain security. Declaring dependencies is great, but are those dependencies trustworthy? Tools like Dependabot, Snyk, and npm audit have become standard. Lock files (package-lock.json, poetry.lock) ensure reproducible builds. Software Bill of Materials (SBOM) is increasingly expected, sometimes even mandated by regulation. So the spirit of this factor - know exactly what your app depends on - is more important than ever, just for slightly different reasons than the original authors had in mind.</p><h3>III. Config</h3><blockquote>Store config in the environment</blockquote><p>(<a href="https://12factor.net/config">12factor.net/config</a>)</p><p>The idea: configuration that varies between deploys (database URLs, API keys, feature flags) should not be in the code. It should come from the environment. The same artifact, combined with different configuration, produces different deployments.</p><pre>artifact + configuration = deployment</pre><p>This is the fundamental equation. Your code doesn’t know or care whether it’s running in production or staging. The configuration tells it.</p><p>The factor specifically advocates for environment variables. This is perhaps where the original text shows its age the most. Environment variables work, but they’re not the only way, and sometimes not even the best way.</p><p>In Kubernetes, you typically use ConfigMaps and Secrets, which can be mounted as environment variables <em>or</em> as files. For sensitive configuration, there are good reasons to prefer files over environment variables: environment variables can leak into logs, crash dumps, or child processes. 
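</p><p>A common compromise is to prefer a mounted secret file and fall back to the environment. A sketch of such a helper (the mount path and names are assumptions, not a standard API):</p>

```python
import os
from pathlib import Path
from typing import Optional

def read_config(name: str, default: Optional[str] = None,
                secret_dir: str = "/etc/secrets") -> Optional[str]:
    """Prefer a mounted secret file (e.g. a Kubernetes Secret volume),
    then fall back to an environment variable."""
    secret_file = Path(secret_dir) / name
    if secret_file.is_file():
        # File-based config keeps the value out of the process environment.
        return secret_file.read_text().strip()
    return os.environ.get(name, default)

db_url = read_config("DATABASE_URL", default="postgres://localhost:5432/dev")
```

<p>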
Tools like HashiCorp Vault or AWS Secrets Manager take this further by providing dynamic secrets with automatic rotation and fine-grained access control. Some setups use envelope encryption with KMS.</p><p>With the rise of GitOps, configuration often lives in a Git repository — just not the same one as your application code. This might seem to contradict “store config in the environment,” but it doesn’t really. The configuration is still separate from the application. The Git repo is the source of truth for what the configuration should be; the actual injection into the running process still happens at deploy time via environment variables, files, or a secrets manager.</p><p>The core principle — separate config from code, artifact plus config equals deployment — is as solid as ever. Just don’t get hung up on environment variables being the only mechanism.</p><h3>IV. Backing Services</h3><blockquote>Treat backing services as attached resources</blockquote><p>(<a href="https://12factor.net/backing-services">12factor.net/backing-services</a>)</p><p>The idea: your app should treat backing services — databases, message queues, caches, SMTP services, third-party APIs — as attached resources, accessed via a URL or connection string stored in configuration. There should be no distinction between local and third-party services. Swapping a local Postgres for Amazon RDS should require nothing more than a config change.</p><p>This is common practice today. In Kubernetes, it’s straightforward to configure either a local single-pod Redis for development or a cloud-managed Elasticache for production. The application code doesn’t change. Just the connection string.</p><p>An important example in 2026 is AI services. If your application calls an LLM for inference, that’s a backing service. Whether you’re calling OpenAI’s API, a self-hosted model on your own GPU cluster, or AWS Bedrock — the same principle applies. Your app should be able to swap between these via configuration. 
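</p><p>Concretely, the model endpoint is just another connection string. A sketch (the variable names and default URL are illustrative, not any provider’s real values):</p>

```python
import os

# The LLM is an attached resource: address and credentials come from config,
# so swapping providers is a deploy-time change, not a code change.
LLM_BASE_URL = os.environ.get("LLM_BASE_URL", "https://api.example.com/v1")
LLM_MODEL = os.environ.get("LLM_MODEL", "default-model")

def chat_endpoint() -> str:
    # Point this at OpenAI, a self-hosted model, or a cloud gateway via config alone.
    return f"{LLM_BASE_URL.rstrip('/')}/chat/completions"
```

<p>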
Don’t hardcode your AI provider. This matters because the AI landscape is moving fast. You might start with one provider and want to switch to another in three months. If you’ve treated it as an attached resource from the start, that switch is painless.</p><p>One nuance: the factor says it’s fine to use local filesystem or memory for things like caching, as long as the data is completely ephemeral and it doesn’t break any of the other factors. This still holds. But be careful: “ephemeral” needs to be truly ephemeral. If your container gets killed and restarted, that local data is gone. If your code can’t handle that, it’s not ephemeral enough.</p><h3>V. Build, Release, Run</h3><blockquote>Strictly separate build and run stages</blockquote><p>(<a href="https://12factor.net/build-release-run">12factor.net/build-release-run</a>)</p><p>The idea: the deployment process has three distinct stages. Build converts code into an executable artifact (compile, bundle dependencies, produce a container image). Release takes that artifact and combines it with configuration for a specific environment. Run executes the release in the target environment.</p><pre>Build   → artifact (container image, zip)<br>Release → artifact + configuration<br>Run     → deployment (running in prod, staging, etc.)</pre><p>These stages should be strictly separated. You don’t patch running code. You don’t build in production. Every release is immutable and has a unique identifier so you can roll back to any previous release.</p><p>This is almost impossible to violate today if you’re using any modern CI/CD pipeline with containers. Your CI builds the image. Your CD combines it with environment-specific config and deploys it. Nobody is SSH-ing into production to edit files (or at least, nobody should be).</p><p>I’d argue the only addition worth making here is that the concept of immutable artifacts deserves more emphasis than the original text gives it. 
The container image you built and tested is the exact same image that runs in production. Not a similar one. The exact same one. Same SHA. This eliminates entire categories of “it works on my machine” problems.</p><p>And with GitOps tools like ArgoCD or Flux watching your Git repository and automatically syncing your cluster, the release and run stages are even more clearly separated and automated than the original authors probably envisioned.</p><h3>VI. Processes</h3><blockquote>Execute the app as one or more stateless processes</blockquote><p>(<a href="https://12factor.net/processes">12factor.net/processes</a>)</p><p>The idea: your application runs as one or more stateless processes. These processes are stateless and share-nothing. Any data that needs to persist must be stored in a stateful backing service (a database, Redis, etc.).</p><p>This is a big one. Some concrete takeaways:</p><ul><li>No sticky sessions. Don’t store user sessions in process memory. Put them in Redis or a database. If a process dies, the session data shouldn’t die with it.</li><li>No local file storage for persistent data. Anything written to the local filesystem is ephemeral and will be gone when the process restarts or a new version deploys.</li><li>One container, one process, one concern. Don’t cram multiple services into a single container.</li></ul><p>This factor directly enables horizontal scaling (factor VIII) and disposability (factor IX). If your processes are truly stateless, you can spin up ten of them or kill five of them without any coordination.</p><p>In Kubernetes, the init container pattern and Helm chart hooks are useful for separating setup tasks (database migrations, cache warming) from the main application process. This keeps the main process clean and focused.</p><h3>VII. 
Port Binding</h3><blockquote>Export services via port binding</blockquote><p>(<a href="https://12factor.net/port-binding">12factor.net/port-binding</a>)</p><p>The idea: your app is completely self-contained and exports its service by binding to a port. It doesn’t rely on an external web server like Apache or Nginx being injected at runtime. The app itself listens on a port and serves requests.</p><p>This holds up for anything HTTP or TCP-based. Your Node.js server listens on port 3000. Your Spring Boot app listens on port 8080. In Kubernetes, the Service abstraction handles routing traffic to the right pods, but each pod is still self-contained and port-bound.</p><p>Where this factor has become less applicable is event-driven and serverless architectures. An AWS Lambda function doesn’t bind to a port. It’s invoked by a trigger — an API Gateway event, an SQS message, an S3 upload. The function processes the event and returns. No port binding involved. Same for WASM-based workloads on Kubernetes using things like SpinKube or Fermyon — the execution model is different.</p><p>This doesn’t mean the factor is wrong. It just means it was written for a world of long-running HTTP server processes. For that world, it’s still entirely correct. For event-driven systems, it’s simply not applicable.</p><h3>VIII. Concurrency</h3><blockquote>Scale out via the process model</blockquote><p>(<a href="https://12factor.net/concurrency">12factor.net/concurrency</a>)</p><p>The idea: your application should scale horizontally by running multiple processes. Need to handle more HTTP traffic? Run more web processes. Need to process more background jobs? Run more worker processes. Each process type handles a specific workload, and you scale each type independently.</p><p>The application itself should not manage its own process model (no internal thread pools trying to be a mini-OS). Leave process management to the operating system or, more realistically in 2026, to the orchestrator. 
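</p><p>A toy sketch of the process formation this implies (commands and replica counts are made up): each process type is a separate, independently scaled workload, and the scaling decision lives outside the application code.</p>

```python
# Hypothetical process formation: each workload type scales independently.
formation = {
    "web":    {"command": "gunicorn app:server", "replicas": 6},  # HTTP traffic
    "worker": {"command": "python worker.py",    "replicas": 2},  # background jobs
}

def scale(process_type: str, replicas: int) -> None:
    # The orchestrator, not the app, changes the replica count.
    formation[process_type]["replicas"] = replicas

scale("worker", 10)  # more jobs queued? add workers, leave web alone
```

<p>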
Kubernetes, for example, handles scheduling, scaling, and lifecycle management. Your app just needs to be a well-behaved process that can be started, stopped, and replicated.</p><p>This factor is directly linked to factor VI (stateless processes) and factor IV (backing services). If your processes share nothing and externalize state, scaling out is trivial. Kubernetes Horizontal Pod Autoscaler (HPA) can spin up more replicas based on CPU, memory, or custom metrics. Serverless platforms scale to zero and back up automatically.</p><p>In 2026, autoscaling has become significantly more sophisticated. KEDA (Kubernetes Event-Driven Autoscaling) lets you scale based on queue depth, event stream lag, or any custom metric. Knative scales serverless workloads on Kubernetes. For AI workloads, GPU-aware autoscaling is an active area — scaling inference pods based on request queue depth or GPU utilization.</p><p>The principle is still just as true: design for horizontal scaling. Let the platform handle the actual scaling mechanics.</p><h3>IX. Disposability</h3><blockquote>Maximize robustness with fast startup and graceful shutdown</blockquote><p>(<a href="https://12factor.net/disposability">12factor.net/disposability</a>)</p><p>The idea: processes should be disposable. They can be started or stopped at a moment’s notice. This means fast startup times and graceful shutdown behavior. When a process receives a termination signal, it should finish what it’s doing (drain in-flight requests, finish the current job) and then exit cleanly.</p><p>This is the complement to factor VIII. You can only scale out and in freely if processes can be created and destroyed without drama.</p><p>In Kubernetes, this translates to several concrete practices:</p><ul><li>Handle SIGTERM. When Kubernetes wants to stop your pod, it sends SIGTERM. Your app should catch this and shut down gracefully — stop accepting new requests, finish in-flight ones, close database connections, then exit. 
If you don’t handle SIGTERM, Kubernetes will SIGKILL your process after a grace period (default 30 seconds). That’s a hard kill with no cleanup.</li><li>PreStop hooks. You can configure a PreStop hook to run a command before SIGTERM is sent. Useful for deregistering from service discovery or draining connections.</li><li>Readiness and liveness probes. Readiness probes tell Kubernetes when your app is ready to receive traffic. Liveness probes tell it when your app is stuck and needs to be restarted. Get these right and Kubernetes can route around unhealthy pods automatically.</li><li>PodDisruptionBudgets. These tell Kubernetes how many pods can be down simultaneously during voluntary disruptions (like node upgrades). Combined with proper rolling update configuration (maxSurge, maxUnavailable), this ensures zero-downtime deployments.</li></ul><p>Fast startup has become even more important in 2026 with scale-to-zero patterns. If your service scales down to zero pods during quiet periods and then needs to handle a sudden request, that first request is waiting for your app to start. This is the “cold start” problem. Technologies like GraalVM native images, which compile Java applications ahead of time, can bring startup from seconds to milliseconds. For serverless, provisioned concurrency (keeping some instances warm) is the common workaround.</p><p>The bottom line: nodes are cattle, not pets. Design your processes so they can be killed and restarted at any time without data loss or service disruption.</p><h3>X. Dev/Prod Parity</h3><blockquote>Keep development, staging, and production as similar as possible</blockquote><p>(<a href="https://12factor.net/dev-prod-parity">12factor.net/dev-prod-parity</a>)</p><p>The idea: minimize the gaps between development and production. 
The original text identifies three gaps: the time gap (code written weeks ago gets deployed), the personnel gap (developers write code, ops deploys it), and the tools gap (using SQLite in dev but Postgres in production).</p><p>This is as relevant as ever. Maybe more so. The tooling has gotten much better though.</p><p>Docker Compose lets you spin up a local environment that closely mirrors production — same databases, same message queues, same services.</p><p>Dev Containers (VS Code or other IDEs) give every developer an identical, reproducible development environment.</p><p>Testcontainers lets you write integration tests that spin up real databases, real message brokers, real services in containers. No more “works against the mock but fails against real Postgres.”</p><p>Localstack emulates AWS services locally, so you can develop against S3, SQS, DynamoDB, etc. without an AWS account.</p><p>Ephemeral environments (using tools like Terraform and Kubernetes namespaces) let you spin up a full copy of your production environment per pull request. This is probably the biggest evolution since the original twelve-factor text was written. Instead of a single shared staging environment where everyone’s changes collide, each PR gets its own isolated environment.</p><p>The time gap has shrunk dramatically with CI/CD. Code can go from commit to production in minutes. The personnel gap has largely closed thanks to DevOps culture — the people who write the code are involved in deploying and operating it. The tools gap is the remaining challenge, but the solutions above address it well.</p><p>One area where dev/prod parity is still tricky: data. You can mirror your infrastructure perfectly, but if your dev environment has 100 rows of test data and production has 100 million rows, you might miss performance issues entirely. Consider using anonymized production data snapshots for realistic testing.</p><h3>XI. 
Logs</h3><blockquote>Treat logs as event streams</blockquote><p>(<a href="https://12factor.net/logs">12factor.net/logs</a>)</p><p>The idea: your app should not concern itself with storing or routing logs. It should write to stdout and stderr as an unbuffered event stream. The execution environment captures that stream and routes it wherever it needs to go - a log aggregation service, a file, a terminal for local development.</p><p>Don’t write log files. Don’t ship logs from within your application. Don’t build your own log rotation. Just write to stdout and let the platform handle the rest.</p><p>In Kubernetes, container stdout/stderr is captured by the container runtime and made available via kubectl logs. From there, a log shipping agent (Fluentd, Fluent Bit, Vector) forwards logs to your aggregation system of choice - Elasticsearch, Loki, Datadog, whatever.</p><p>This factor holds up perfectly. The one significant gap in the original text is that it only talks about logs. Modern observability is built on three pillars: logs, metrics, and traces. The original twelve-factor methodology doesn’t mention metrics or distributed tracing at all, which makes sense given it was written in 2011 before these practices were widespread.</p><p>The same principle applies though. Your application should emit metrics and traces the same way it emits logs — as streams of data that the platform captures and routes. OpenTelemetry has become the standard for this in 2026. It provides a vendor-neutral SDK for instrumenting your applications with traces, metrics, and logs. And with OpenTelemetry’s zero-code instrumentation (auto-instrumentation agents), you can get a lot of observability without changing your application code at all.</p><p>So while the factor says “logs,” read it as “observability signals” and the principle still stands: emit them, don’t manage them.</p><h3>XII. 
Admin Processes</h3><blockquote>Run admin/management tasks as one-off processes</blockquote><p>(<a href="https://12factor.net/admin-processes">12factor.net/admin-processes</a>)</p><p>The idea: administrative tasks — database migrations, one-time scripts, console sessions for debugging — should run as one-off processes in an environment identical to the regular long-running processes of the app. They should use the same codebase and configuration. They should ship with the application code.</p><p>The goal is to eliminate synchronization issues. If your migration script uses a different version of the database library than your app, things will break in subtle ways. If it runs against a different configuration, it might target the wrong database.</p><p>In Kubernetes, this translates to running admin tasks as Jobs or init containers. Helm chart hooks (pre-install, pre-upgrade) can run database migrations before the new version of your app starts. The migration runs in the same container image as your application, ensuring identical dependencies and configuration.</p><p>In CI/CD pipelines, these tasks are often a step in the deployment pipeline itself. Migrations run after the new artifact is built but before traffic is routed to the new version.</p><p>This is straightforward and hasn’t changed much. The principle is solid: same code, same config, same environment. Whether it’s a migration, a data backfill, or a one-off cleanup script.</p><h3>Beyond the Twelve: Forward and Backward Compatibility</h3><p>There’s an important practice not addressed in the original twelve factors that deserves its own section: backward and forward compatibility.</p><p>In 2026, we expect deployments to be frequent and zero-downtime. That implies rolling updates or blue/green deployments. Even blue/green deployments in large distributed systems are rarely truly atomic. 
And deployment patterns like canary deployments require the ability to roll back at any point.</p><p>This means that for some period of time, version N and version N+1 of your application are running simultaneously. If those versions aren’t compatible with each other, you have a problem.</p><p>This applies to:</p><p>Database schemas. Don’t drop a column in the same release that stops using it. First deploy the version that stops using the column. Then, in a subsequent release, drop it. The same applies to adding columns — add them as nullable first, deploy the code that writes to them, and only then make them non-nullable if needed.</p><p>API contracts. If you’re adding a field to an API response, consumers should be able to handle its absence (since they might receive responses from both old and new instances during a rollout). If you’re removing a field, make sure no consumer depends on it before you remove it.</p><p>Cached data. If the structure of cached objects changes between versions, both versions might be reading from the same cache. Prefixing cache keys with a version identifier can help. Or you can design your deserialization to handle both old and new formats.</p><p>Event schemas. If you’re using event-driven architecture, producers and consumers might be running different versions. Events should be forward-compatible (new fields are optional, old consumers can ignore them).</p><p>The general pattern: expand first, then contract. Add the new thing, make sure everything works, then remove the old thing. Never do both in the same release.</p><p>This is especially important for applications where end users or other teams control when they upgrade. They might skip multiple minor versions, making backward compatibility even more critical.</p><h3>Wrapping Up</h3><p>15 years later, the Twelve-Factor App methodology holds up remarkably well. 
Most of the factors have become common practice to the point where people follow them without knowing they’re following a methodology. Containers and Kubernetes have made many of these principles the path of least resistance, which is the best thing that can happen to a set of best practices.</p><p>A few factors show their age in specific areas — port binding doesn’t map cleanly to serverless, the config factor is overly specific about environment variables, and the logs factor doesn’t mention the broader observability picture. But the underlying ideas remain sound.</p><p>If I were to summarize the entire methodology in one sentence, it would be this:</p><pre>immutable artifact + external configuration + stateless processes = deployable anywhere</pre><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c8af494e8465" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/the-12-factor-app-15-years-later-does-it-still-hold-up-in-2026-c8af494e8465">The 12-Factor App - 15 Years later. Does it Still Hold Up in 2026?</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Make Architecture Decisions: RFCs, ADRs, and Getting Everyone Aligned]]></title>
            <link>https://itnext.io/how-to-make-architecture-decisions-rfcs-adrs-and-getting-everyone-aligned-ab82e5384d2f?source=rss----5b301f10ddcd---4</link>
            <guid isPermaLink="false">https://medium.com/p/ab82e5384d2f</guid>
            <category><![CDATA[software-architecture]]></category>
            <category><![CDATA[system-design-interview]]></category>
            <category><![CDATA[software]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[software-engineering]]></category>
            <dc:creator><![CDATA[Lukas Niessen]]></dc:creator>
            <pubDate>Sat, 11 Apr 2026 11:49:04 GMT</pubDate>
            <atom:updated>2026-04-11T11:49:03.049Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*BAG8bdPHP9MQpxKIx77DtA.jpeg" /></figure><p>Making architecture decisions is one of those things that can go really well or really badly. I’ve been in both situations. I’ve seen decisions made in hallway conversations that caused months of rework. I’ve also seen beautifully documented decisions that nobody read, leading to the same outcome.</p><p>The thing is, architecture decisions are different from regular code decisions. They’re harder to reverse, they affect more people, and they often involve trade-offs that aren’t purely technical. You need buy-in. You need the right people in the room. And you need a process that doesn’t turn into endless meetings or bikeshedding.</p><p>Here’s the approach I’ve found works well, both in my consulting work and in teams I’ve led.</p><h3>The Core Process</h3><p>The flow is straightforward:</p><pre>┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐<br>│                 │     │                 │     │                 │     │                 │<br>│   Write RFC     │────►│  Async Review   │────►│ Decision Meeting│────►│   Write ADR     │<br>│                 │     │  (Comments)     │     │                 │     │                 │<br>│                 │     │                 │     │                 │     │                 │<br>└─────────────────┘     └─────────────────┘     └─────────────────┘     └─────────────────┘<br>      1-2 days              2-3 days               30-60 min              Same day</pre><p>RFC (Request for Comments): A document that explains the problem, perhaps proposes solutions, and invites feedback. This is the async preparation phase.</p><p>Decision Meeting: A focused synchronous discussion where you make the actual decision. 
Everyone has already read the RFC and comments.</p><p>ADR (Architecture Decision Record): The final documentation of what was decided and why. This becomes part of your permanent record.</p><p>Let’s break down each step.</p><h3>Step 1: Write the RFC</h3><p>The RFC is where you do the heavy lifting. It forces you to think through the problem properly before bringing others in. Don’t skip this step — jumping straight to a meeting is how you get 2-hour discussions that go in circles.</p><h3>Where to Put It</h3><p>Confluence works well. So does Notion, Google Docs, or even a Markdown file in a dedicated repo. The key is that it’s somewhere everyone can access, comment on, and reference later. Pick a consistent location — you want people to know where to find RFCs without hunting.</p><p>I recommend creating a dedicated space or folder like /Architecture/RFCs/ with a naming convention:</p><pre>RFC-2026-001-Event-Driven-Architecture<br>RFC-2026-002-Database-Sharding-Strategy<br>RFC-2026-003-Authentication-Provider-Migration</pre><h3>RFC Structure</h3><p>Here’s an example template:</p><pre># RFC: [Title]</pre><pre>**Author:** [Your name]<br>**Date:** [Date]<br>**Status:** Draft | Under Review | Decided | Superseded<br>**Decision Deadline:** [Date - usually 3-5 days from creation]</pre><pre>## Summary<br>One paragraph. What is this about and why are we discussing it?</pre><pre>## Context<br>What&#39;s the current situation? What problem are we solving? <br>Why now? Include relevant constraints, requirements, and background.</pre><pre>## Priorities and Requirements (Ranked)</pre><pre>This is the most important part. List what actually matters for this decision, in order of importance. Be specific and quantifiable where possible.</pre><pre>1. **[Priority name]** - [Why this matters. What&#39;s the business or technical reason?]<br>2. **[Priority name]** - [Why this matters?]<br>3. **[Priority name]** - [Why this matters?]</pre><pre>Example:<br>1. 
**Cost** - We&#39;re operating at thin margins; any infrastructure cost increase directly impacts profitability<br>2. **Development velocity** - Our roadmap depends on shipping three features this quarter<br>3. **Operational complexity** - We have a small ops team; anything complex will create bottlenecks</pre><pre>Note: People often disagree on decisions because they&#39;re weighing priorities differently. Making priorities explicit is where the real decision-making happens.</pre><pre>## Proposed Solutions</pre><pre>### Option A: [Name]<br>Description of the approach.</pre><pre>**How this performs against priorities:**<br>- **Cost:** [How does this affect cost? High/Medium/Low impact and direction]<br>- **Development velocity:** [How does this affect velocity?]<br>- **Operational complexity:** [How does this affect ops complexity?]</pre><pre>**Estimated effort:** X weeks/months<br>**Risk level:** Low/Medium/High<br>**Other trade-offs:** [Anything else worth noting]</pre><pre>### Option B: [Name]<br>...</pre><pre>### Option C: Do Nothing<br>Often you should include this option. Sometimes the answer is &quot;not now&quot;.</pre><pre>## Recommendation (Optional!)<br>Which option do you recommend and why? Focus on how it aligns with the priorities you outlined above.</pre><pre>## Stakeholders<br>Who needs to be involved in this decision? Tag them.<br>- @backend-team (affected by implementation)<br>- @security-team (compliance implications)<br>- @product-owner (timeline impact)<br>- @infrastructure (operational concerns)</pre><pre>## Open Questions<br>What do you still need input on?</pre><pre>## Timeline<br>When does this decision need to be made? What&#39;s driving that deadline?</pre><h3>Why Priorities Trump Pros/Cons Lists</h3><p>You might notice this template doesn’t use “pros” and “cons” lists. 
That’s intentional.</p><p>A pros/cons list tells you <em>what</em> varies between options, but not <em>whether it matters</em>:</p><pre>Option A<br>Pros: Fast, Scalable<br>Cons: No access management</pre><p>This is almost useless unless everyone agrees on priority. Does speed matter more than access management? You don’t know. Different people will read this and come to different conclusions. The loud voices in the meeting will win, not the best decision.</p><p>The better approach: Make your priorities explicit first, then evaluate options against them. Now the same information tells a clear story:</p><pre>Priorities:<br>1. Must support access management (business requirement)<br>2. Performance under 500ms (SLA requirement)<br><br>Option A: Fast, Scalable, but no access management<br>→ Fails priority #1. Ruled out.<br><br>Option B: Slower but supports access management<br>→ Meets priority #1, still hits the 500ms target<br>→ Recommended</pre><p>When priorities are ranked and clear, the decision often becomes obvious. And when people disagree, you’re debating what <em>actually matters</em>, not arguing over vague trade-offs. This is where real consensus-building happens.</p><h3>Tagging the Right People</h3><p>This is important. Tag everyone who:</p><ol><li>Will be affected by the decision (they need to know)</li><li>Has relevant expertise (they can improve the decision)</li><li>Has authority over the affected area (they need to approve)</li><li>Will implement it (they’ll spot practical issues)</li></ol><p>Don’t tag the entire company. Be specific. In the notification or message, tell them explicitly: “Please read this RFC and leave your comments by [date]. We’ll have a decision meeting on [date].”</p><p>Be clear about what you need from them. Not everyone needs to deeply analyze every option — some people just need to flag if there’s a blocker from their perspective.</p><h3>Step 2: Async Review Period</h3><p>Give people 2–3 days to read and comment. 
This is where the magic happens. Async comments let people think before responding. They can do research. They can sleep on it. This produces much better feedback than putting people on the spot in a meeting.</p><h3>What Good Comments Look Like</h3><p>Encourage people to:</p><ul><li>Ask clarifying questions — “How would this handle X scenario?”</li><li>Raise concerns — “This might conflict with Y initiative”</li><li>Provide additional context — “We tried something similar before and hit Z issue”</li><li>Express preferences — “I lean toward Option B because…”</li><li>Suggest alternatives — “Have we considered W approach?”</li></ul><h3>Reply to Comments</h3><p>As the RFC author, actively engage with comments. Answer questions, acknowledge concerns, update the RFC if someone raises a valid point you missed. This builds consensus before the meeting.</p><h3>What If Nobody Comments?</h3><p>This happens. A few things could be going on:</p><ol><li>The decision isn’t important enough — Maybe this doesn’t need a formal process. Just decide.</li><li>People are too busy — Ping individuals directly. “Hey, I really need your input on Section 3.”</li><li>It’s too long/complex — Simplify. Add a TL;DR at the top.</li><li>People agree but are silent — Explicitly ask for “+1 if you’re okay with the recommendation”</li></ol><h3>Step 3: The Decision Meeting</h3><p>Now comes the synchronous part. But here’s the key: this is NOT a presentation meeting. Everyone should have already read the RFC and the comments. If they haven’t, that’s on them.</p><h3>Meeting Format</h3><p>I recommend a Working Session format rather than a presentation:</p><p>Duration: 30–60 minutes (not more)</p><p>Agenda:</p><ol><li>Quick context (2 min) — “We’re here to decide X. 
Everyone’s read the RFC”.</li><li>Address open questions (10–15 min) — Go through unresolved comments and open questions from the RFC</li><li>Discussion (15–30 min) — Debate the options, raise new concerns</li><li>Decision (5–10 min) — Make the call</li></ol><p>Who should be there:</p><ul><li>The RFC author (runs the meeting)</li><li>Key stakeholders who commented</li><li>The decision maker (if that’s not you)</li></ul><p>Keep the group small. 5–8 people max. Large meetings turn into status updates, not decision forums.</p><h3>Decision-Making Methods</h3><p>How do you actually make the decision? A few approaches:</p><p>Consensus: Everyone agrees. Ideal but not always realistic.</p><p>Consent: Nobody has strong objections. Different from consensus — you’re asking “can you live with this?” not “is this your favorite?”</p><p>RAPID/DRI: One person (the Directly Responsible Individual) makes the final call after hearing input. This is often best for architecture decisions where someone needs to own the outcome.</p><p>Voting: Can work for minor decisions but tends to create winners and losers. Use sparingly.</p><h3>What If You Can’t Agree?</h3><p>This happens. Some options:</p><p>Escalate: If there’s a clear owner or manager above the group, they can break the tie. This isn’t a failure — it’s what leadership is for.</p><p>Time-box: “Let’s try Option A for 3 months and revisit.” Not everything needs to be decided forever.</p><p>Do more research: If the disagreement is factual (“will this scale?”), maybe you need a spike or proof of concept before deciding.</p><p>Smaller scope: Sometimes you can agree on a subset. “We disagree on the long-term approach, but we agree on the first step.”</p><p>Acknowledge trade-offs: Sometimes people disagree because they’re weighing trade-offs differently. Make those explicit. “You’re prioritizing speed, I’m prioritizing maintainability. Let’s figure out which matters more for this specific situation.”</p><p>Don’t let disagreement fester. 
Unresolved architecture decisions turn into technical debt as people implement different approaches in parallel.</p><h3>Step 4: Write the ADR</h3><p>Once you’ve decided, document it. The Architecture Decision Record (ADR) is your permanent record. It’s different from the RFC — the RFC was about exploring options, the ADR is about recording what was decided.</p><h3>ADR Template</h3><pre># ADR-[number]: [Title]<br><br>**Date:** [Date]<br>**Status:** Accepted | Deprecated | Superseded by ADR-XXX<br>## Context<br>What is the issue that we&#39;re seeing that is motivating this decision?<br>## Decision<br>What is the change that we&#39;re proposing and/or doing?<br>## Consequences<br>What becomes easier or more difficult to do because of this change?<br>## Alternatives Considered<br>Brief summary of options that were rejected and why.</pre><p>IMO, you should keep ADRs short. They’re reference documents, not essays. Link back to the RFC if people want the full context. However, that’s up to you and your organization; every team writes ADRs a little differently.</p><h3>Rollout and Checkpoints</h3><p>Making the decision is only half the battle. Ideally, an RFC should include a concrete rollout plan — not just “we’ll implement this” but a clear path from current state to the desired outcome.</p><h3>Why Rollout Plans Matter</h3><p>Many architecture decisions fail not because the decision was wrong, but because the implementation path was unclear or too ambitious. People push back on “massive paradigm shifts” not because they disagree with the direction, but because they can’t see how to get there incrementally.</p><h3>Who Writes the Rollout Plan?</h3><p>The RFC author should draft it, but the people who’ll actually implement it (the team leads, architects, tech leads) should refine it. They’ll spot what’s realistic and what’s not.</p><p>Include the rollout plan in the RFC. It’s part of the decision — not a separate implementation detail. 
This is often where the most important feedback comes from.</p><h3>When Stakeholders Need to Be Involved</h3><p>Not every architecture decision needs this full process. Here’s a rough guide:</p><p>Just decide (no RFC needed):</p><ul><li>Affects only your team</li><li>Easily reversible</li><li>Low cost to change later</li></ul><p>Lightweight RFC (async only, maybe no meeting):</p><ul><li>Affects 2–3 teams</li><li>Medium impact</li><li>Someone might have concerns</li></ul><p>Full process (RFC + meeting + stakeholders):</p><ul><li>Cross-cutting concerns (security, performance, cost)</li><li>Hard to reverse (database choices, API contracts)</li><li>Significant investment</li><li>Requires budget or headcount</li></ul><p>Executive involvement:</p><ul><li>Affects company strategy</li><li>Large budget implications</li><li>External vendor commitments</li><li>Compliance or legal implications</li></ul><p>For the big decisions, you might need to present a summary to leadership. That’s different from the technical decision meeting — it’s about getting approval and resources, not debating technical trade-offs.</p><h3>AI-Assistance</h3><p>LLMs are actually useful for RFC preparation. You can use them to:</p><ul><li>Research trade-offs between technologies</li><li>Generate initial drafts of pros/cons lists</li><li>Summarize documentation for technologies you’re evaluating</li><li>Identify edge cases you might have missed</li></ul><p>The point, though, is NOT to let an LLM write the entire RFC for you to publish as-is, but rather to use the LLM as a thought partner.</p><h3>Common Pitfalls</h3><p>A few things I’ve seen go wrong:</p><p>Analysis paralysis: The RFC process becomes an excuse to never decide. Set deadlines and stick to them. “We’ll decide on Friday with the information we have”.</p><p>Too many stakeholders: Everyone has an opinion, nothing gets decided. Be explicit about who has input vs. 
who has decision authority.</p><p>No follow-through: You make the decision but don’t write the ADR. Six months later, nobody remembers why. Write the ADR the same day as the decision meeting.</p><p>Ignoring the quiet people: In meetings, loud voices dominate. The async comment period is where quieter team members can contribute. Value those comments.</p><h3>Wrapping Up</h3><p>This is the process that I’ve found to work best while not taking up a lot of time. And it really isn’t complicated:</p><ol><li>Write an RFC with clear options and trade-offs</li><li>Give people time to read and comment async</li><li>Have a focused meeting to make the decision</li><li>Document it in an ADR</li></ol><p>How detailed you make the RFC, the meeting, and the ADR, and how long the deadlines are, really depends on your team. The good thing is, it works both with plenty of detail and formality and with a quick, low-bureaucracy startup style.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ab82e5384d2f" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/how-to-make-architecture-decisions-rfcs-adrs-and-getting-everyone-aligned-ab82e5384d2f">How to Make Architecture Decisions: RFCs, ADRs, and Getting Everyone Aligned</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Automate Your Code Reviews: Welcome to the Post-AI Era of Software Development.]]></title>
            <link>https://itnext.io/automate-your-code-reviews-welcome-to-the-post-ai-era-of-software-development-169ba917cc48?source=rss----5b301f10ddcd---4</link>
            <guid isPermaLink="false">https://medium.com/p/169ba917cc48</guid>
            <category><![CDATA[claude]]></category>
            <category><![CDATA[cybersecurity]]></category>
            <category><![CDATA[careers]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[devops]]></category>
            <dc:creator><![CDATA[Andrew Blooman]]></dc:creator>
            <pubDate>Fri, 10 Apr 2026 10:27:17 GMT</pubDate>
            <atom:updated>2026-04-10T10:27:16.616Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*bQ0bUxM97DP8ta3WObdLMQ.png" /></figure><h4>What Does the SDLC Look Like In 2026?</h4><h3>How AI-native workflows are reshaping how we build, ship, and secure software</h3><p>The way we build software has fundamentally changed. Not in the incremental sense — a new framework here, a better CI tool there — but in the structural sense. The time it takes to go from an idea on a white board to a production ready app has been cut down from weeks to days, if not hours.</p><p>We’ve entered what I believe to be the Post-AI era of software development. That being, a time in tech history where AI isn’t a bolted-on assistant, chatbot or afterthought, but the primary vehicle for transforming ideas into deployed systems. This shift doesn’t eliminate developers, rather, it redefines what software development means, who participates, and what “good” looks like at every stage of the software development lifecycle (SDLC).</p><p>This article explores three distinct workflows that define this new era:</p><ul><li><strong>Prototyping</strong></li><li><strong>Structured Feature Development</strong></li><li><strong>Day 2 Operations</strong></li></ul><p>I will examine how repository management itself must evolve to support AI as a first principle when structuring a code base.</p><h3>Prototyping: The Rise of Vibe Coding</h3><h4>From Whiteboard to Working App</h4><p>There’s a phrase gaining traction in engineering circles: <em>vibe coding</em>. It sounds dismissive, but it describes something genuinely transformative. 
A product manager, a designer, or any non-technical person can now describe what they want in natural language and have a working app built for them, without any developers required.</p><p>Tools like Claude, Cursor and Lovable have made it possible for someone with a clear idea and little to no coding experience to produce functional web applications, internal dashboards, data visualisation tools, even simple APIs. When it comes to prototyping, the barrier to entry hasn’t just been lowered; it has been eliminated.</p><h4>What Vibe Coding Actually Looks Like</h4><p>A typical vibe coding session might unfold like this:</p><p>A business analyst needs a tool to visualise quarterly vulnerability data across teams. Previously, this would mean filing a ticket with a dev team, waiting weeks for resources to become available, negotiating the scope of work, and after all of that time, what is the outcome? Receiving something that doesn’t quite match the original vision.</p><p>In the Post-AI world, that analyst opens Claude or a similar tool and describes what they need: the basic layout, the data sources to reference, the key workflows. The tool begins writing code and returns tangible results within minutes.</p><p>The code produced might not be production-grade, often falling foul of older library versions with security issues, or outdated methods. But in most cases, it just works, and it could in theory be ready to be deployed for limited use, delivering return on investment quickly.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/747/1*TRT9d0CQXU20_Uo2SfQXLw.png" /></figure><p>Now of course, we all know there is far more to it than that: vibe coding skips over security, resiliency, backups, high availability, performance, scaling and so on, hence why it is still referred to as vibe coding.</p><h4>The Strategic Value of Disposable Software</h4><p>This is the key insight that many organisations miss: vibe-coded prototypes aren’t meant to last. 
They’re disposable by design. Their value lies in quickly showing whether an idea should go from prototype to the backlog and be made production-grade. Plus, one of the biggest challenges for product managers is understanding what the business is asking for; a disposable product provides far more guidance, meaning the finished product is less likely to miss expectations.</p><h4>Guardrails, Not Gates</h4><p>The risk with vibe coding is obvious, but the practice shouldn’t be outlawed; rather, it should be curated and shaped into something highly productive. I can’t think of a better example than Lovable, which allows users to see their code running within a browser, with no wait time to provision cloud infrastructure.</p><p>Doing this internally is a lot trickier, but possible: platform engineering teams and security teams must outline the guardrails of what is possible and what is allowed so that engineering teams can have a degree of freedom, whilst knowing those guardrails will prevent inadvertent exposure of systems.</p><p>Smart organisations are deploying template repositories pre-configured with security scanning, deploying to sandboxed environments by default, and providing AI assistants that are pre-loaded with organisational security policies. The goal is to make the path of least resistance also the path of reasonable security.</p><h3>New Features: Structured AI-Assisted Development</h3><h4>Beyond Autocomplete</h4><p>If vibe coding represents the democratisation of software creation, structured AI-assisted development represents its professionalisation. This is where experienced engineers use AI not as a novelty but as an accelerator, embedding “AI first” processes into their daily workstream.</p><p>The biggest revolution in AI was the emergence of AI coding agents: tools like Claude Code, GitHub Copilot, and Cursor. 
These coding assistants have gone far beyond autocomplete engines, now acting as autonomous collaborators capable of understanding project context, architectural designs, and security controls.</p><h4>The Claude Code Workflow</h4><p>Claude Code exemplifies the structured approach; rather than generating code in isolation, it operates within the full context of the repository, guided by explicit instructions that shape its behaviour. Let’s walk through the workflow to understand how a developer works in the post-AI era.</p><p><strong>Scenario:</strong> Imagine you’re a developer and have been asked to add rate limiting to an API. In the pre-AI world, you’d read the requirements, create a branch, spend a few hours writing code, push it, wait for review, address comments, and iterate. In the Post-AI world, the workflow looks more like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LEFwbLd_3YwsL_4mZLI_oA.png" /></figure><p><strong>Step 1 — Write the issue.</strong> You start by writing a GitHub issue. This is effectively your prompt. But good prompt engineering teaches us that a well-structured prompt should include <strong>the acceptance criteria</strong>, code snippets, related logs, and any <strong>constraints</strong>. The quality of the issue directly determines the quality of the AI’s output.</p><p><strong>Step 2 — Open Claude Code in plan mode.</strong> Plan mode is critical; it tells the agent to reason about the approach before writing a single line of code. It will read the CLAUDE.md file, examine the repository structure, understand the existing code, and propose an implementation plan for your review. This allows you to get an idea of whether the plan will meet the criteria of the bug, feature, or fix you are implementing.</p><p><strong>Step 3 — Point the agent at the issue.</strong> With a well-crafted issue, your prompt can be quite simple here, but ensure it follows a standard workflow: create a new branch and work on issue #15. 
Claude Code reads the issue from GitHub, creates a feature branch following your naming conventions (defined in CLAUDE.md), and begins generating code.</p><p><strong>Step 4 — Automated security scanning.</strong> Before pushing any code to GitHub, the Snyk scanning skill fires automatically. Claude Code runs snyk_code_scan against the new and modified code, analyses the results, and fixes any issues it finds. It rescans. If clean, it moves on. If not, it iterates. This loop of scan, fix, rescan occurs based on rules in CLAUDE.md.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/1*YkUaxA9LIEBkctiuyL3nCA.png" /></figure><p><strong>Step 5 — Push and trigger review.</strong> You instruct Claude to push the code to the repository and then create a pull request. This can then trigger subsequent workflows like automated AI code reviews. If you have GitHub Copilot, it will automatically review the code and make suggestions (I have found Copilot to be particularly good at code reviews here).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/926/1*2FahvlSlY1cV0vldIfpe4w.png" /></figure><p><strong>Step 6 — The Security Engineer Agent.</strong> This is where the CI pipeline takes over and something genuinely new happens. A GitHub Actions workflow triggers on the PR: it builds the container image, then runs a Snyk container scan against the built artefact, testing not just your code, but the entire supply chain of base image layers, system packages, and runtime dependencies that your application will actually ship with.</p><p>I’ve added a stage to perform a code review, analyse the Snyk findings and then post the results to the PR using a GitHub bot. 
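</p><p>For reference, here is a minimal sketch of what such a PR-triggered workflow stage might look like. The job layout, image tag, and output file name are illustrative assumptions, not copied from the actual repo:</p><pre>name: pr-security-review<br>on:<br>  pull_request:<br><br>jobs:<br>  container-scan:<br>    runs-on: ubuntu-latest<br>    steps:<br>      - uses: actions/checkout@v4<br>      # Build the artefact that will actually ship<br>      - run: docker build -t app-under-review:${{ github.sha }} .<br>      # Scan base image layers, OS packages and runtime dependencies<br>      - run: |<br>          npm install -g snyk<br>          snyk container test app-under-review:${{ github.sha }} \<br>            --severity-threshold=high --json &gt; snyk-container.json || true<br>        env:<br>          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}<br>      # A later stage would have an AI agent analyse snyk-container.json<br>      # and post a Pass/Fail summary to the PR via a bot account</pre><p>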
This gives a detailed breakdown of the findings with a binary Pass/Fail result.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/904/1*jnWLzNyEiCADzrHiiXTBAg.png" /></figure><p>This is the security engineer agent pattern: it doesn’t replace a human security team, but it provides consistent security reviews on every single pull request, at a speed that no human team could sustain across an entire organisation’s PR volume. The human AppSec engineer reviews the agent’s sign-offs periodically, investigates escalations, and focuses their expertise on the architectural and design-level security questions that genuinely require human judgement.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/915/1*GL2pCrOZDWDUU-7szEtBgA.png" /></figure><p><strong>Step 7 — Address review feedback with AI.</strong> You switch back to Claude Code, still in plan mode, and type: check the PR comments on issue #15. Claude reads the Copilot review comments and the security reviewer&#39;s findings, analyses them in context, and either applies the suggested changes or explains why a comment doesn&#39;t apply. You review its reasoning, approve or adjust, and move on.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/885/1*nQVtYtVc4NuPJMQZbrTHnw.png" /></figure><p><strong>Step 8 — Close the loop.</strong> With the review comments addressed, you give Claude a final set of instructions: commit and push the changes, update the PR, and close issue #15. Claude commits, pushes, updates the PR description with a summary of changes, and closes the GitHub issue with a reference to the merged PR. The entire coding session is completed without significant human interaction.</p><h3>GitHub Repo</h3><p>If you want to copy my workflow, here is the GitHub link:</p><p><a href="https://github.com/andrewblooman/secure-container-build">GitHub - andrewblooman/secure-container-build</a></p><h4>The Developer Experience Shift</h4><p>This is not the end of software engineers! 
Moreover, AI-assisted development doesn’t reduce developers to prompt engineers. The software engineering world may have changed, but developers are still very much in demand.</p><p>Developers now spend more time on architecture, system design, requirements analysis, and review, and less time on the mechanical translation of designs into code. The developers who thrive in this model are those who can clearly articulate what they want, understand the code the AI produces, and make sound judgements about when to accept, modify, or reject AI-generated contributions.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*6JwyaH6kBk8pcCR9nrhn8w.png" /></figure><h3>Day 2 Operations: Autonomous Security and Maintenance</h3><h4>The Vulnerability Lifecycle, Automated</h4><p>The ongoing operation, maintenance, and security of software after it’s been deployed is where most engineering effort has always been consumed. It’s also where AI-driven automation delivers some of its most compelling returns.</p><p>The traditional vulnerability management lifecycle is very manual: a scanner finds an issue, a ticket is created, a developer is assigned, the developer context-switches from their current work, investigates the vulnerability, determines a fix, raises a PR, waits for review, and finally merges. Elapsed time from detection to remediation is measured in days or weeks. Given recent supply chain attacks with Trivy and Axios, this delay could be costly.</p><h4>Automated Detection-to-Remediation Pipelines</h4><p>In the Post-AI model, the pipeline from vulnerability detection to remediation can be substantially automated:</p><p><strong>Detection.</strong> Cloud-native application protection platforms (CNAPPs) like Wiz, combined with SAST/SCA tools like Snyk, continuously scan running environments and code repositories. 
When a vulnerability is detected, whether a CVE in a dependency or a code-level security flaw, an event is triggered.</p><p><strong>Automated PR Generation.</strong> If we take a closer look at Snyk, it has the capability to analyse code in the repository, determine the appropriate fix, and raise a pull request to resolve the issue. For dependency vulnerabilities, this might be a version bump with compatibility verification. For code-level issues, it might be a refactored function or an added input validation check.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/908/1*1_ewd8epUias76cesaM03A.png" /></figure><p>Just to give a bit more context, here is GitHub Copilot doing an automated review of the Snyk fix. We now have a way of checking whether our AI-powered AppSec tool is suggesting a safe fix by validating it against another AI agent.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/829/1*F_UcqPTy0NLzi71H7Obx2w.png" /></figure><p><strong>AI-Powered Review of Automated Fixes.</strong> GitHub Copilot or Claude Code examines the proposed fix to ensure it doesn’t introduce regressions, break existing tests, or create new security issues. The AI reviewer has the context of the full codebase and the CLAUDE.md conventions, which means it&#39;s not reviewing the fix in isolation.</p><p><strong>Human Approval as the Final Gate.</strong> The developer’s role shifts from <em>doing the fix</em> to <em>approving the fix</em>. 
They review a PR that already has AI-generated code, AI-generated review comments, passing CI checks, and security scan results.</p><blockquote>In short, AI’s job is to provide reasoning; a developer’s job is to apply judgement.</blockquote><p>A developer should be asking:</p><ul><li>Does this fix make sense?</li><li>Does it align with the system’s architectural intent?</li><li>Is there a better approach the AI missed?</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*0If40Yui7GhEj7NhFOpFUA.png" /></figure><h4>The Post-AI Repository: Structure for Human-AI Collaboration</h4><p>The three workflows described above share a common dependency: a well-structured repository that provides AI agents with the context they need to operate effectively. In the Post-AI era, repository structure isn’t just about human developer experience; it’s about AI developer experience.</p><p>A repository that is well-organised for AI collaboration will produce better AI-generated code, more accurate AI reviews, and more reliable automated operations. Conversely, a poorly structured repository will produce inconsistent, insecure and <strong>potentially dangerous AI output.</strong></p><h3>Recommended Repository Structure</h3><p>Here is an example of how you might want to structure your application repo. I’ve kept the infrastructure separate (happy to debate people on that!), but I’ve included some additional configuration such as project skills and commands, a security folder for storing scans (if required), as well as a folder for threat modelling (useful if using AWS Threat Composer).</p><p>I also suggest a data folder storing a schema of the database that this app uses; this is critically important if handling PII, PHI or financial data. 
This is handy for the AI to know so that it doesn’t accidentally start logging sensitive data into Cloudwatch or S3.</p><pre>my-project/<br>├── .claude/                        # Claude Code configuration root<br>│   ├── settings.json               # Project-level Claude Code settings<br>│   ├── skills/                     # Custom skill definitions<br>│   │   ├── security-scan.md        # Security scanning skill<br>│   │   ├── database-migration.md   # Database migration skill<br>│   │   ├── api-design.md           # API design conventions<br>│   │   └── test-generation.md      # Test generation standards<br>│   └── commands/                   # Custom slash commands<br>│       ├── review.md               # /review — trigger full code review<br>│       ├── scan.md                 # /scan — run security scans<br>│       ├── migrate.md              # /migrate — generate DB migration<br>│       └── deploy-check.md         # /deploy-check — pre-deployment validation<br>│<br>├── .github/                        # GitHub-specific automation<br>│   ├── workflows/                  # GitHub Actions<br>│   │   ├── ci.yml                  # Standard CI pipeline<br>│   │   ├── security-scan.yml       # Automated security scanning<br>│   │   ├── claude-pr-review.yml    # Claude Code PR review trigger<br>│   │   └── copilot-review.yml     # Copilot review configuration<br>│   ├── copilot-instructions.md     # Repository-level Copilot instructions<br>│   └── CODEOWNERS                  # Code ownership for review routing<br>│<br>├── security/                       # Security-as-code directory<br>│   ├── scans/                      # Scan results and baselines<br>│   │   ├── .gitkeep<br>│   │   └── snyk-baseline.json      # Known/accepted vulnerability baseline<br>│   ├── reports/                    # Generated security reports<br>│   │   ├── .gitkeep<br>│   │   └── templates/              # Report templates<br>│   │       └── vuln-report.md<br>│   └── threats/                    # 
Threat models and assessments<br>│       ├── threat-model.json       # AWS Threat Composer / custom format<br>│       ├── data-flow-diagram.md    # System data flow documentation<br>│       └── attack-surface.md       # Attack surface inventory<br>│<br>├── data/                           # Data schema and classification<br>│   ├── schema/                     # Data models and schemas<br>│   │   ├── database/               # Database schemas<br>│   │   │   ├── schema.sql<br>│   │   │   └── migrations/<br>│   │   ├── api/                    # API schemas<br>│   │   │   └── openapi.yaml<br>│   │   └── events/                 # Event schemas<br>│   │       └── event-catalog.json<br>│   ├── classification/             # Data classification metadata<br>│   │   ├── data-dictionary.json    # Field-level data dictionary<br>│   │   ├── pii-inventory.md        # PII field inventory and handling rules<br>│   │   ├── phi-inventory.md        # PHI field inventory (if applicable)<br>│   │   └── retention-policy.json   # Data retention requirements<br>│   └── synthetic/                  # Synthetic/test data<br>│       ├── fixtures/               # Test fixtures (never real data)<br>│       └── generators/             # Data generation scripts<br>│<br>├── docs/                           # Project documentation<br>│   └── architecture/               # Architecture decision records<br>│       ├── ADR-001-auth-flow.md<br>│       └── system-context.md       <br>│<br>├── src/                            # Application source code<br>├── tests/                          # Test suites<br>│<br>├── CLAUDE.md                       # Primary AI agent instructions<br>├── AGENTS.md                       # Multi-agent coordination rules (optional)<br>├── .snyk                           # Snyk configuration (root level)<br>├── .gitignore                      # Git ignore rules<br>└── README.md                       # Project overview</pre><h3>The CLAUDE.md File: Your AI’s Operating 
Manual</h3><p>The CLAUDE.md file is the single most important file in a Post-AI repository. It is the instruction set that governs how AI agents interact with your codebase. A well-crafted CLAUDE.md dramatically improves the quality, consistency, and safety of AI-generated contributions.</p><p>Here is a template that encodes security-first development practices:</p><pre># CLAUDE.md — Project AI Agent Configuration<br><br>## Project Overview<br>- **Name**: [Project Name]<br>- **Description**: [Brief description of what this project does]<br>- **Language(s)**: [Primary languages, e.g., Python 3.12, TypeScript 5.x]<br>- **Framework(s)**: [Key frameworks, e.g., FastAPI, React, Express]<br>- **Deployment**: [Where and how this runs, e.g., AWS ECS Fargate, Kubernetes]<br>- **Classification**: [Internal / Confidential / Public]<br><br>## Architecture Context<br>- See `docs/architecture/system-context.md` for system context diagram<br>- See `docs/architecture/` for Architecture Decision Records<br>- This service communicates with: [list upstream/downstream dependencies]<br>- Authentication: [e.g., OIDC via Entra ID, API keys via Vault]<br><br>## Coding Standards<br>- Follow the style guide in `.editorconfig` and linter configurations<br>- All functions must have type hints (Python) or TypeScript types<br>- Maximum function length: 50 lines — decompose beyond this<br>- All public APIs must have docstrings/JSDoc<br>- No hardcoded secrets, tokens, or credentials — use environment variables or Vault<br>- Error handling: never swallow exceptions silently<br>- Logging: use structured logging (JSON format) with correlation IDs<br><br>## Testing Requirements<br>- All new code must include unit tests with &gt;80% coverage<br>- Integration tests required for any external service interaction<br>- Test files mirror source structure: `src/foo/bar.py` → `tests/foo/test_bar.py`<br>- Use fixtures from `data/synthetic/fixtures/` — never use real data in tests<br>- Run tests before 
committing: `make test` or `npm test`<br><br>## Security Rules<br><br>### Mandatory Scanning<br>- Always run **snyk_code_scan** tool for new first-party code that is generated<br>  in a Snyk-supported language.<br>- If any security issues are found based on newly introduced or modified code or<br>  dependencies, attempt to fix the issues using the results context from **Snyk**.<br>- Rescan the code after fixing the issues to ensure that the issues were fixed<br>  and that there are no newly introduced issues.<br>- Repeat this process until no new issues are found.<br><br>### Dependency Management<br>- Never add a dependency without checking for known vulnerabilities<br>- Pin all dependency versions — no floating ranges in production<br>- If updating a dependency, run `snyk test` before committing<br>- Document the reason for any new dependency in the PR description<br><br>### Data Handling<br>- Consult `data/classification/pii-inventory.md` before handling any user data<br>- PII fields must be encrypted at rest and masked in logs<br>- PHI fields (see `data/classification/phi-inventory.md`) require additional<br>  access controls and audit logging<br>- Never include real PII/PHI in test data — use synthetic generators in<br>  `data/synthetic/generators/`<br>- All data access must be logged with user identity and timestamp<br><br>### Threat Awareness<br>- Review `security/threats/threat-model.json` before modifying authentication,<br>  authorisation, data flows, or external integrations<br>- If a change introduces a new data flow or external dependency, update the<br>  threat model accordingly<br><br>## Git Conventions<br>- Branch naming: `feature/TICKET-123-short-description`<br>- Commit messages: Conventional Commits format<br>  (`feat:`, `fix:`, `security:`, `docs:`)<br>- PRs must reference a ticket number<br>- PRs must have a description explaining *why*, not just *what*<br>- Squash merge to main — keep history clean<br><br>## File and Folder Guidance<br>- 
`security/scans/` — output directory for scan results; baseline files are<br>  version-controlled<br>- `security/threats/` — threat models should be updated when architecture changes<br>- `security/reports/` — generated reports; templates are version-controlled,<br>  generated outputs are gitignored<br>- `data/classification/` — the AI must consult these files before generating code<br>  that handles user data<br>- `docs/architecture/` — the AI should read relevant ADRs before making<br>  architectural changes<br>- `docs/runbooks/` — the AI should reference these during incident-related work<br><br>## What Not To Do<br>- Do not commit scan results to the repository (they are gitignored)<br>- Do not bypass failing security scans with ignore rules without documentation<br>- Do not generate mock implementations of security controls<br>- Do not introduce new external network calls without documenting them<br>- Do not store secrets, tokens, or API keys anywhere in the repository</pre><h3>Supporting Files That Improve AI Operations</h3><p>Beyond the core structure, several additional files significantly improve AI agent performance:</p><p><strong>AGENTS.md</strong> — When multiple AI agents operate on the same repository (Claude Code for development, Copilot for review, a security bot for scanning), this file defines coordination rules: which agent owns which workflow, how conflicts are resolved, and what each agent should and shouldn&#39;t modify.</p><p><strong>docs/architecture/</strong> — A series of context documents that helps AI agents understand where this service fits in the broader system. Without this, AI-generated code tends to make incorrect assumptions about upstream and downstream dependencies.</p><p><strong>data/classification/data-dictionary.json</strong> — A machine-readable data dictionary that maps field names to their classification (PII, PHI, financial, public). 
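</p><p>As a rough illustration, such a dictionary might look like this. The field names and attributes are hypothetical, not a prescribed schema:</p><pre>{<br>  "users.email":        { "classification": "pii",       "mask_in_logs": true  },<br>  "users.full_name":    { "classification": "pii",       "mask_in_logs": true  },<br>  "orders.card_last4":  { "classification": "financial", "mask_in_logs": true  },<br>  "orders.status":      { "classification": "public",    "mask_in_logs": false }<br>}</pre><p>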
AI agents can reference this automatically when generating code that handles data, ensuring appropriate encryption, masking, and access controls are applied without the developer having to specify them.</p><h3>The Death of Pre-Commit: A Pre-AI Relic</h3><p>It’s worth pausing on one notable absence from this repository structure: .pre-commit-config.yaml.</p><p>Pre-commit hooks were a staple of the pre-AI development workflow, and for good reason. They caught formatting inconsistencies, linting violations, secret leaks, and basic security issues before code ever reached a pull request. They were the last line of local defence — a gate that ran on the developer’s machine, enforcing standards at the point of commit.</p><p>In the Post-AI era, pre-commit hooks are largely redundant. When an AI agent is generating the code, it is already operating within the constraints defined by CLAUDE.md — writing code that conforms to the project&#39;s style guide, applying the correct formatting, running Snyk scans as part of its generation loop, and checking for secrets before it ever proposes a commit. The AI doesn&#39;t need a pre-commit hook to remind it to lint; linting is baked into its operating instructions.</p><p>Pre-commit hooks depended on each developer having them correctly installed and not bypassing them with --no-verify. This is no longer the control point it once was. The standards they enforced are now enforced more reliably by the agents that write and review the code.</p><h3>Closing Thoughts</h3><p>The more time I spend using and researching AI, the angrier I get at how poorly people are using it. 
On the other hand, I see people using “expensive” plugins (I am referring to token consumption) which ultimately yield a simple dashboard tool which they run locally using Python or Node?!</p><p>It is slightly baffling how, with all these tools, the fundamentals remain constant: if you don’t know how to run an app on infrastructure, you are limited in what you can achieve with pure vibe coding. I’ve seen this first-hand: people creating tool after tool, but not delivering that tangible value back to the business.</p><p>The last thing people think about today is security, which is why bootstrapping code repositories with default structures, and setting up the scaffolding to help AI agents understand and code more effectively, is so, so important! If you can take away the cognitive load for a developer by just baking in all the security controls, then the AI should have the knowledge to ensure that no code leaves the IDE without passing the security tests.</p><p>The Day 2 operational burdens can be reduced from days and weeks to minutes and hours each sprint, allowing AI-accelerated tech debt remediation. The amazing thing about security tools is just how easy they are to drop into the developer’s lifecycle; with AI it doesn&#39;t matter if you are using Akido, Snyk, Semgrep, Wiz et al, you are essentially just taking scan results and then using an AI reasoning engine to interpret the results.</p><p>There should be no excuse from either the security team or the development team about performing security reviews…neither of them should be doing it! Let AI perform the review and the human make the judgement call. 
Hopefully you’ve found this article relatable and useful, and that you can take some value back to your own organisation.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=169ba917cc48" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/automate-your-code-reviews-welcome-to-the-post-ai-era-of-software-development-169ba917cc48">Automate Your Code Reviews: Welcome to the Post-AI Era of Software Development.</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Get Specific Validation Errors with Angular Signal Forms]]></title>
            <description><![CDATA[<div class="medium-feed-item"><p class="medium-feed-image"><a href="https://itnext.io/how-to-get-specific-validation-errors-with-angular-signal-forms-1421cdfa82bc?source=rss----5b301f10ddcd---4"><img src="https://cdn-images-1.medium.com/max/2560/1*lqOQcTiMxWsIenn3kEFvOQ.jpeg" width="2560"></a></p><p class="medium-feed-snippet">Angular just introduced one of the biggest upgrades to forms since the framework was created: Signal Forms.</p><p class="medium-feed-link"><a href="https://itnext.io/how-to-get-specific-validation-errors-with-angular-signal-forms-1421cdfa82bc?source=rss----5b301f10ddcd---4">Continue reading on ITNEXT »</a></p></div>]]></description>
            <link>https://itnext.io/how-to-get-specific-validation-errors-with-angular-signal-forms-1421cdfa82bc?source=rss----5b301f10ddcd---4</link>
            <guid isPermaLink="false">https://medium.com/p/1421cdfa82bc</guid>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[front-end-development]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[angular]]></category>
            <dc:creator><![CDATA[Brian Treese]]></dc:creator>
            <pubDate>Fri, 10 Apr 2026 09:14:07 GMT</pubDate>
            <atom:updated>2026-04-10T09:14:06.064Z</atom:updated>
        </item>
        <item>
            <title><![CDATA[A Modern GUI for DynamoDB Local: Because Developer Experience Matters]]></title>
            <description><![CDATA[<div class="medium-feed-item"><p class="medium-feed-image"><a href="https://itnext.io/a-modern-gui-for-dynamodb-local-because-developer-experience-matters-8aae47946d9f?source=rss----5b301f10ddcd---4"><img src="https://cdn-images-1.medium.com/max/2600/1*UkRkeZZvfFjlP4MmgFDn7g.png" width="5344"></a></p><p class="medium-feed-snippet">The Problem Nobody Talks About</p><p class="medium-feed-link"><a href="https://itnext.io/a-modern-gui-for-dynamodb-local-because-developer-experience-matters-8aae47946d9f?source=rss----5b301f10ddcd---4">Continue reading on ITNEXT »</a></p></div>]]></description>
            <link>https://itnext.io/a-modern-gui-for-dynamodb-local-because-developer-experience-matters-8aae47946d9f?source=rss----5b301f10ddcd---4</link>
            <guid isPermaLink="false">https://medium.com/p/8aae47946d9f</guid>
            <category><![CDATA[tailwind-css]]></category>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[docker]]></category>
            <category><![CDATA[dynamodb]]></category>
            <category><![CDATA[development-tools]]></category>
            <dc:creator><![CDATA[Hoang Dinh]]></dc:creator>
            <pubDate>Fri, 10 Apr 2026 09:12:59 GMT</pubDate>
            <atom:updated>2026-04-10T09:12:58.400Z</atom:updated>
        </item>
        <item>
            <title><![CDATA[The Foregone Solution: Why Your Engineers Keep Getting it “Wrong”]]></title>
            <description><![CDATA[<div class="medium-feed-item"><p class="medium-feed-image"><a href="https://itnext.io/the-foregone-solution-why-your-engineers-keep-getting-it-wrong-5a01e04cae76?source=rss----5b301f10ddcd---4"><img src="https://cdn-images-1.medium.com/max/2600/0*WOufzT1qjEn3i8-r" width="6240"></a></p><p class="medium-feed-snippet">The hidden cost of delegating a problem you&#x2019;ve already solved</p><p class="medium-feed-link"><a href="https://itnext.io/the-foregone-solution-why-your-engineers-keep-getting-it-wrong-5a01e04cae76?source=rss----5b301f10ddcd---4">Continue reading on ITNEXT »</a></p></div>]]></description>
            <link>https://itnext.io/the-foregone-solution-why-your-engineers-keep-getting-it-wrong-5a01e04cae76?source=rss----5b301f10ddcd---4</link>
            <guid isPermaLink="false">https://medium.com/p/5a01e04cae76</guid>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[career-development]]></category>
            <category><![CDATA[management]]></category>
            <category><![CDATA[organizational-culture]]></category>
            <dc:creator><![CDATA[Dave Taubler]]></dc:creator>
            <pubDate>Thu, 09 Apr 2026 19:10:57 GMT</pubDate>
            <atom:updated>2026-04-09T19:10:55.756Z</atom:updated>
        </item>
    </channel>
</rss>