<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>victoria.dev</title><link>https://victoria.dev/</link><description>Engineering leader, OWASP contributor, and educator helping millions of engineers build secure software for the next era of technology.</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><managingEditor>hello@victoria.dev (Victoria Drake)</managingEditor><webMaster>hello@victoria.dev (Victoria Drake)</webMaster><lastBuildDate>Thu, 03 Jul 2025 04:04:18 -0500</lastBuildDate><atom:link href="https://victoria.dev/index.xml" rel="self" type="application/rss+xml"/><item><title>Why the Best Engineers Will Thrive Alongside AI</title><link>https://victoria.dev/posts/why-the-best-engineers-will-thrive-alongside-ai/</link><pubDate>Thu, 03 Jul 2025 04:04:18 -0500</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/why-the-best-engineers-will-thrive-alongside-ai/</guid><description>AI won't replace great engineers—it will amplify them. How to position yourself as an engineering leader who thrives in an AI-augmented development world.</description><content:encoded><![CDATA[ <p>Every time I see another &ldquo;AI will replace programmers&rdquo; headline, I think about the best engineers I&rsquo;ve worked with. They&rsquo;re not the ones who write the most code or know the most algorithms. They&rsquo;re the ones who see problems clearly, design elegant solutions, and build systems that last. AI won&rsquo;t replace these people. It will make them unstoppable.</p>
<p>The engineers who thrive in an AI-augmented world won&rsquo;t be fighting against the technology or ignoring it. They&rsquo;ll understand how to amplify their strengths through intelligent collaboration with AI systems. Instead of asking &ldquo;Will AI take my job?&rdquo; they&rsquo;ll be asking &ldquo;How can AI make me 10x more effective at the work that matters most?&rdquo;</p>
<p>What&rsquo;s fascinating is that the skills that make you great at working with AI are remarkably similar to the skills that make you great at working with other engineers. Clear communication, structured thinking, and productive division of labor are fundamentals that remain constant whether you&rsquo;re pair programming with a colleague or collaborating with an AI model.</p>
<p>Here&rsquo;s what that collaboration looks like in practice, and how to position yourself to lead in an AI-first world.</p>
<h2 id="ai-amplifies-systems-thinking-through-better-collaboration">AI Amplifies Systems Thinking Through Better Collaboration</h2>
<p>The biggest opportunity comes from using AI to think through complex systems more thoroughly. AI excels at analyzing patterns, suggesting edge cases, and helping you reason through architectural decisions. Engineers who learn to collaborate effectively with AI on design and planning create better systems than either could build alone.</p>
<p>This mirrors how the best engineering teams work together. When you&rsquo;re designing a system with a colleague, you externalize your thinking, challenge each other&rsquo;s assumptions, and explore alternatives. Working with AI requires the same discipline. You articulate problems clearly, make your constraints explicit, and iterate on solutions collaboratively.</p>
<p>The practical skill here involves having productive conversations with AI about system design in the same way you would with a colleague.</p>
<blockquote>
<p>Start by clearly defining the problem space: What are the constraints? What are the non-obvious requirements? What could go wrong? AI can help you explore these questions more comprehensively before you commit to solutions.</p>
</blockquote>
<p>This resembles how senior engineers mentor junior team members—by asking good questions and helping them think through problems systematically. The difference is that AI can process vast amounts of information quickly and suggest patterns you might not have considered.</p>
<p>Start practicing this now by using AI to review your design documents, challenge your assumptions, and suggest alternatives. The goal is ensuring you consider angles you might have missed, just like getting a thorough code review from a thoughtful colleague.</p>
<h2 id="human-skills-become-your-competitive-advantage">Human Skills Become Your Competitive Advantage</h2>
<p>As AI handles more routine implementation work, the uniquely human aspects of engineering become increasingly important. Understanding and communicating business context, navigating organizational complexity, and making judgment calls under uncertainty—these skills differentiate great engineers from good ones.</p>
<p>Product intuition becomes especially critical. AI can generate code, but it can&rsquo;t determine whether you&rsquo;re building the right thing to solve your customers&rsquo; problems. Engineers who understand user needs, translate business requirements into technical solutions, and make trade-offs based on strategic priorities remain indispensable.</p>
<blockquote>
<p>These are the same skills that make you valuable on any engineering team. The ability to see the bigger picture, understand stakeholder needs, and make technical decisions that serve business objectives has always been what separates senior engineers from code writers.</p>
</blockquote>
<p>The ability to work across disciplines becomes more valuable as well. The best AI implementations often require understanding domain expertise, user experience implications, and business impact. Engineers who can bridge these contexts design better AI integrations, just as they design better systems when working with product managers, designers, and other stakeholders.</p>
<p>Communication skills get amplified too. Evaluating options, explaining trade-offs, and building consensus around technical decisions becomes crucial when AI can generate multiple potential solutions quickly. You&rsquo;re curating and contextualizing solutions rather than just implementing them—much like how lead engineers guide technical discussions and help teams make good decisions collectively.</p>
<h2 id="building-ai-native-systems-from-the-ground-up">Building AI-Native Systems From the Ground Up</h2>
<p>The most significant opportunities lie in designing systems built around AI capabilities from the beginning, rather than retrofitting AI into existing architectures. This requires thinking differently about how software systems work and collaborating effectively during the design process.</p>
<p>AI-native systems often need different patterns for data flow, error handling, and user interaction. They might handle probabilistic outcomes rather than deterministic ones, incorporate continuous learning loops, and provide transparency into decision-making processes. Engineers who understand these patterns early will have a significant advantage.</p>
<p>This resembles the transition any engineering team makes when adopting new paradigms. The teams that succeed are those that collaborate well during the learning process, share knowledge effectively, and iterate toward better patterns together.</p>
<p>Working with AI also means getting comfortable with a different development workflow. Instead of writing every function from scratch, you might orchestrate AI services, design feedback loops for model improvement, and build systems that get smarter about your application over time. The engineering challenge shifts from pure implementation toward integration and optimization.</p>
<p>Start small with AI integrations in your current projects to see how AI systems can help. Add intelligent features to existing applications. Experiment with AI APIs and services. Build systems that can incorporate AI capabilities without requiring complete rewrites. Each project teaches you more about AI-native patterns, similar to how you&rsquo;d gradually adopt any new technology stack.</p>
<h2 id="developing-ai-collaboration-skills">Developing AI Collaboration Skills</h2>
<p>Learning to work effectively with AI as a thinking partner goes beyond using AI tools. You&rsquo;re developing a collaborative workflow where AI augments your problem-solving process rather than just automating tasks.</p>
<p>This means getting good at prompt engineering, but more importantly, learning to structure problems <a href="/posts/i-spent-78-learning-why-bash-still-matters-in-the-ai-age/">and code</a> in ways that AI can help with effectively. Some problems benefit from AI&rsquo;s pattern recognition capabilities. Others need AI&rsquo;s ability to generate and evaluate multiple approaches quickly. Understanding when and how to use these capabilities makes you more effective.</p>
<blockquote>
<p>Good engineers know when to ask colleagues for help, how to frame problems clearly, and which team members bring the right expertise to different challenges. Working with AI requires similar social and communication skills.</p>
</blockquote>
<p>It&rsquo;s also critical to develop good judgment about AI outputs. AI can generate impressive solutions that miss important constraints or edge cases. Engineers who can quickly evaluate AI suggestions, identify potential issues, and iterate toward better solutions will consistently outperform those who either avoid AI entirely or accept its outputs uncritically.</p>
<p>This mirrors how you&rsquo;d work with any collaborator—trusting their expertise while applying your own judgment, asking clarifying questions, and building on their contributions with your own insights and context.</p>
<h2 id="positioning-for-long-term-success">Positioning for Long-Term Success</h2>
<p>Engineers who thrive long-term will view AI as a force multiplier for their existing strengths rather than a replacement for their role. If you&rsquo;re great at system design, AI can help you explore more architectural options. If you excel at debugging, AI can help you identify patterns across larger codebases. If you&rsquo;re skilled at optimization, AI can help you analyze performance bottlenecks more comprehensively.</p>
<p>The strategic approach involves doubling down on your strengths while developing AI collaboration skills that amplify them. You don&rsquo;t need to become an AI researcher unless that&rsquo;s your passion. Instead, become expert at applying AI to the problems you already enjoy solving.</p>
<p>This also means staying close to the business impact of your work. Engineers who understand how their technical decisions affect user experience, business metrics, and organizational goals will always be valuable, regardless of how AI capabilities evolve. The technology might change, but the need for good judgment about what to build and how to build it remains constant.</p>
<h2 id="the-compound-advantage-of-early-adoption">The Compound Advantage of Early Adoption</h2>
<p>Engineers who start developing AI collaboration skills now will have years of experience when these capabilities become standard across the industry. That experience includes technical knowledge, intuition for when AI helps and when it doesn&rsquo;t, an understanding of failure modes, and robust workflows built around AI capabilities.</p>
<p>Start with AI tools that augment your current workflow—code completion, documentation generation, test writing. Gradually expand to more complex collaborations like architectural design, system optimization, and problem analysis. Each interaction teaches you more about effective AI collaboration.</p>
<p>Just like learning to work well with any new team member, the key is consistent practice and honest feedback. Try different approaches, see what works, and gradually build more sophisticated collaborative patterns.</p>
<p>The goal is becoming highly effective at leveraging AI rather than becoming dependent on it. Engineers with this skill set will consistently deliver better results faster than those working without AI augmentation. As AI capabilities improve, this advantage compounds.</p>
<blockquote>
<p>The future belongs to engineers who see AI as an opportunity to tackle harder problems, build better systems, and have greater impact. Instead of competing with AI, they&rsquo;re collaborating with it to push the boundaries of what&rsquo;s possible in software engineering.</p>
</blockquote>
<p>The best engineers have always been force multipliers—they make everyone around them more effective. AI gives these engineers a new kind of leverage. Instead of just amplifying the capabilities of their teams, they can amplify their own problem-solving abilities and tackle challenges that were previously beyond reach.</p>
<p>The practices that make you great at working with AI—clear communication, structured thinking, productive collaboration, and sound judgment—are the same practices that make you great at working with people. Master these fundamentals, and you&rsquo;ll thrive regardless of how the technology landscape evolves.</p>
 ]]></content:encoded></item><item><title>From Problem Solver to Problem Solver Creator</title><link>https://victoria.dev/posts/from-problem-solver-to-problem-solver-creator/</link><pubDate>Tue, 24 Jun 2025 13:06:44 +0000</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/from-problem-solver-to-problem-solver-creator/</guid><description>Multiply your impact: Transform from solving problems to creating problem solvers. Engineering leaders' guide to building autonomous, capable teams.</description><content:encoded><![CDATA[ <p>The question that changed everything for me was simple: “What if instead of being the person who solves problems, I became the person who creates problem solvers?” It sounds obvious in retrospect, but the shift from solving to enabling requires completely rewiring how you think about getting work done.</p>
<p>For years, my value came from being able to debug the trickiest issues, architect complex systems, and untangle the technical problems that had everyone else stuck. I was fast, thorough, and reliable. But in a leadership role, continuing to be the primary problem solver wasn’t scaling—it was becoming a bottleneck.</p>
<p>I realized that every problem I solved myself, though satisfying, was a missed opportunity to develop someone else’s problem-solving capabilities. Instead of asking “How can I fix this?” I started asking “Who could learn the most from figuring this out, and how can I set them up for success?”</p>
<p>This mindset shift transforms everything about how you approach engineering leadership. Instead of optimizing for immediate solutions, you optimize for building a team that can tackle increasingly complex challenges independently. Here’s what that looks like in practice.</p>
<h2 id="teaching-through-ownership-not-tasks">Teaching Through Ownership, Not Tasks</h2>
<p>The difference between assigning tasks and developing problem solvers is the difference between “implement this API endpoint” and “figure out how we should handle user authentication for this new feature.” One teaches someone to follow specifications; the other teaches them to think through trade-offs and business impact, research solutions, and make technical decisions.</p>
<p>Strategic delegation becomes about identifying problems that are slightly beyond someone’s current comfort zone—complex enough to require real thinking, but not so complex that they’ll get stuck without making progress. When we needed to optimize our database performance, instead of diving in myself, I paired our most eager junior backend engineer with our database expert and said, “Figure out why our queries are getting slower and what we should do about it.” (They did an excellent job and both learned new things in the process.)</p>
<blockquote>
<p>The key is providing enough context for good decision-making while resisting the urge to prescribe the solution.</p>
</blockquote>
<p>This means sharing the business constraints, the technical requirements, and the success criteria, then stepping back and letting them work through the problem-solving process. When they hit roadblocks, you guide them toward resources and approaches rather than answers.</p>
<p>What you’re really doing is teaching people to ask the right questions: What are we optimizing for? What are the constraints? What could go wrong? How will we know if it’s working? These thinking patterns transfer to every future problem they encounter.</p>
<h2 id="building-problem-solving-muscle-through-learning">Building Problem-Solving Muscle Through Learning</h2>
<p>The best problem solvers aren’t necessarily the ones who know the most—they’re the ones who are best at learning what they need to know. Creating a culture where continuous learning is expected and supported turns every project into an opportunity to develop new problem-solving capabilities.</p>
<p>This means structuring work so that people regularly encounter unfamiliar challenges with appropriate support systems. When someone expresses curiosity about machine learning, performance optimization, or distributed systems, find ways to connect that interest to real problems your team needs to solve. The developer who wants to understand ML can take point on improving your recommendation algorithm. The engineer curious about performance can lead the investigation into why your app feels sluggish.</p>
<p>Internal knowledge sharing amplifies this effect. Regular deep-dive sessions where team members present problems they’ve solved create a library of problem-solving approaches that everyone can learn from. But more importantly, the act of sharing forces people to articulate their thinking process, which helps them develop more systematic approaches to future problems.</p>
<p>The compound effect is remarkable. Teams that prioritize learning consistently punch above their weight because they’re better at recognizing patterns, adapting to new situations, and breaking down complex problems into manageable pieces.</p>
<h2 id="communication-that-enables-independent-thinking">Communication That Enables Independent Thinking</h2>
<p>The goal of communication in leadership isn’t just clarity—it’s creating the conditions where people can make good decisions without constantly checking in with you. This means providing not just what was decided, but the reasoning behind decisions, the factors that were considered, and the principles that guide similar situations.</p>
<blockquote>
<p>When you share context richly, you’re teaching people to think through problems the way you would, but with their own insights and perspectives.</p>
</blockquote>
<p>Instead of saying “use Redis for caching,” explain why caching is needed, what alternatives were considered, what trade-offs matter, and how to evaluate whether it’s working. Now when similar performance problems arise, they have a framework for thinking through solutions.</p>
<p>One-on-ones become especially valuable for developing problem-solving skills. These conversations are where you can understand how someone approaches challenges, what assumptions they’re making, and where their thinking might benefit from different perspectives. Often, the most helpful thing you can do is ask questions that help them think through problems more systematically.</p>
<p>The ultimate goal is asynchronous problem-solving—people having enough context and judgment to tackle new challenges without waiting for direction. When that happens, your team’s problem-solving capacity isn’t limited by your bandwidth.</p>
<h2 id="identifying-and-developing-natural-problem-solving-styles">Identifying and Developing Natural Problem-Solving Styles</h2>
<p>Every engineer has a natural approach to problem-solving, but not everyone has had the opportunity to develop and refine that approach. Part of creating problem solvers is recognizing these natural inclinations and providing opportunities to strengthen them.</p>
<p>Some people are naturally systematic—they break down complex problems into smaller pieces and work through them methodically. Others are more intuitive—they see patterns and connections that aren’t immediately obvious. Some are great at asking the right questions to clarify requirements. Others excel at considering edge cases and potential failures.</p>
<p>The key is matching people with problems that play to their strengths while gradually expanding their toolkit. Let the systematic thinker lead the database migration planning. Give the pattern-recognizer the tricky debugging challenge. Ask the question-asker to work with product managers on requirement gathering.</p>
<p>But also create opportunities for people to develop complementary skills. Pair the intuitive problem solver with someone more methodical. Have the detail-oriented engineer work on a project that requires big-picture thinking. These collaborations teach people new approaches while solving real problems.</p>
<p>Leadership development happens naturally when people get comfortable with their own problem-solving style and learn to facilitate problem-solving in others.</p>
<h2 id="removing-obstacles-to-problem-solving-growth">Removing Obstacles to Problem-Solving Growth</h2>
<p>The biggest barriers to developing problem solvers are often systemic rather than individual. People can’t develop good judgment if they don’t have access to the information they need to make decisions. They can’t learn from mistakes if the environment punishes experimentation. They can’t tackle complex problems if they’re constantly interrupted by urgent but low-value work.</p>
<blockquote>
<p>Your role becomes creating the conditions where problem-solving skills can develop naturally.</p>
</blockquote>
<p>This often means advocating upward for better tools, more reasonable deadlines, or clearer priorities. It means protecting your team’s focus time and ensuring they have access to the resources they need to dive deep into problems.</p>
<p>Sometimes it’s about facilitating conversations between teams so your engineers can get the context they need to make good technical decisions. Sometimes it’s about negotiating for technical debt time so people can practice the long-term thinking that prevents problems rather than just solving them reactively.</p>
<p>The most important obstacle to remove is the fear of making mistakes. Problem-solving skills develop through experimentation, and experimentation requires an environment where intelligent failures are treated as learning opportunities rather than performance problems.</p>
<h2 id="the-multiplier-effect">The Multiplier Effect</h2>
<p>What makes this approach so rewarding is that the impact compounds exponentially. A team of capable problem solvers doesn’t just solve more problems—they solve harder problems, prevent problems through better design, and create solutions that other teams can build on.</p>
<p>When you develop someone’s problem-solving abilities, you’re not just helping them with their current role. You’re giving them tools they’ll use throughout their career, whether they stay individual contributors or move into leadership themselves. The engineer who learns to think systematically about performance problems becomes someone who designs performant systems from the start.</p>
<p>The ripple effects extend beyond your immediate team. Problem solvers become mentors. They raise the bar in technical discussions. They ask better questions in design reviews. They contribute to a culture where good technical decision-making is normal rather than exceptional.</p>
<blockquote>
<p>This is the ultimate lever in engineering leadership: instead of solving problems yourself, you create the conditions where great solutions emerge naturally from your team.</p>
</blockquote>
<p>Instead of being the bottleneck, you become the catalyst that makes everything else work better.</p>
<p>The transition from solving problems to creating problem solvers is challenging because it requires patience and faith in other people’s potential. But when you see someone tackle a problem that would have stumped them six months ago, or when your team consistently delivers solutions that surprise you with their thoughtfulness, you realize you’ve built something much more valuable than any individual technical contribution: a system that continuously generates great technical work.</p>
 ]]></content:encoded></item><item><title>I Spent $78 Learning Why Bash Still Matters in the AI Age</title><link>https://victoria.dev/posts/i-spent-78-learning-why-bash-still-matters-in-the-ai-age/</link><pubDate>Sun, 15 Jun 2025 14:09:42 +0000</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/i-spent-78-learning-why-bash-still-matters-in-the-ai-age/</guid><description>Why Bash beats AI for bulk operations: $78 lesson in choosing the right tool. Engineering leaders' guide to balancing AI with command-line fundamentals.</description><content:encoded><![CDATA[ <p>Here’s how a little laziness cost me $78.</p>
<p>While working on a personal project recently, I wanted Cline to process about a hundred files spread across the subdirectories of a project. I fired it up, picked Gemini 2.5 Pro (context window FTW), and asked it to recurse through the subdirectories, process the files, and put the results in a new file.</p>
<p>Cline got to work… slowly. I watched as the “API Request…” spinner appeared for each file read and each time it saved the results. About twenty minutes and $26 later, it finished.</p>
<p>Okay, I thought, that’s not great, but not untenable. The cost of convenience, right? I opened up the results file to take a look and… <em>sigh</em>. Not great work. It was obvious that some files had been skipped despite my very careful instructions to process each and every one.</p>
<p>So, like a glutton for punishment, I made a list of the files Cline had skipped and asked it to try again. Tired of babysitting, I raised the “Maximum Request Auto Approval” limit to more than I thought would be needed to finish processing the files that were left, and went to take a coffee break.</p>
<p>When I came back, Cline was done. The results? Still not great. Files had still been skipped, some files that were processed were missing results, and, oh, my task bill had risen to $78.</p>
<p>Okay, <em>this</em> was untenable. Reading all this data into context was costly and slow.</p>
<p>Then the coffee started to kick in, I guess, because it dawned on me: why in the world was I using expensive API calls to do something a Bash one-liner could do?</p>
<blockquote>
<p>&ldquo;Cline, write a Bash command that will recurse through the <code>data/</code> directory and obtain the content of all the files and copy it into a single new file.&rdquo;</p>
</blockquote>
<p>Which produced:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>find data/ -type f -exec cat <span style="color:#f92672">{}</span> + &gt; all_data.txt
</span></span></code></pre></div><p>This command:</p>
<ul>
<li><code>find data/</code> - searches recursively in the <code>data</code> directory.</li>
<li><code>-type f</code> - specifies that we&rsquo;re looking for files only (not directories, links, etc.).</li>
<li><code>-exec cat {} +</code> - for all files found, execute the <code>cat</code> command. The <code>{}</code> is a placeholder for the filename, and the <code>+</code> is a crucial optimization that groups multiple filenames into a single <code>cat</code> command, avoiding the overhead of launching a new process for every single file.</li>
<li><code>&gt; all_data.txt</code> - redirects the standard output of the <code>cat</code> command (which is the concatenated content of all the files) into a new file named <code>all_data.txt</code>.</li>
</ul>
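<p>If you also want the model to know which file each chunk came from, a small variant of the one-liner can prefix every file&rsquo;s content with a header line. This is my own addition rather than part of the original workflow, and the demo <code>data/</code> tree below just stands in for the real project files:</p>

```shell
# Demo setup: a tiny data/ tree standing in for the real project files.
# (In the original workflow, data/ already exists.)
mkdir -p data/a data/b
echo "alpha" > data/a/x.txt
echo "beta" > data/b/y.txt

# Same idea as the one-liner, but prefix each file's content with its
# path so the model can attribute results back to the source file.
find data/ -type f -exec sh -c '
  for f in "$@"; do
    printf "===== %s =====\n" "$f"  # per-file header
    cat "$f"
  done
' _ {} + > all_data.txt
```

<p>The headers cost a few extra tokens but make it much easier to spot which files the model skipped.</p>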
<p>Then I asked Cline to read the resulting <code>all_data.txt</code> file, process it, and output the results.</p>
<p>It took about two minutes.</p>
<p>And it cost me $0.78.</p>
<h2 id="what-just-happened">What just happened?</h2>
<p>My initial naive approach had accidentally created a perfect storm of computational inefficiency.</p>
<p>When Cline processed each file individually, it was making separate API calls for every single operation - reads, writes, the works. With about 100 files, that meant roughly 200+ API calls, each one spinning up its own network round-trip with all the latency that entails. Every time I saw that “API Request…” spinner, I was watching money float away into the ether.</p>
<p>But here’s the kicker: large language models like Gemini charge based on token consumption.</p>
<blockquote>
<p>It’s not just the file content they’re charging for; every single API call also included the entire conversation history, system prompts, and my instructions.</p>
</blockquote>
<p>With a stateless API, that context has to be re-transmitted with every single request. If my average context was around 10,000 tokens and I made 200 calls, I burned through 2 million tokens (10,000 * 200) on overhead alone, before even counting the actual data.</p>
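<p>A quick sanity check of that arithmetic, using the rough figures above (assumed estimates, not measured values):</p>

```shell
# Rough cost model: a stateless API re-sends the full context each call.
# Assumed figures from the estimate above, not measured values.
context_tokens=10000   # conversation history + prompts re-sent per call
calls=200              # roughly one read + one write per file, ~100 files

per_file_overhead=$((context_tokens * calls))
batched_overhead=$((context_tokens * 1))   # one call sends the context once

echo "Per-file calls: $per_file_overhead tokens of pure overhead"
echo "Single batched call: $batched_overhead tokens of overhead"
```

<p>The overhead scales with the number of calls, not the amount of data, which is exactly why batching wins.</p>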
<p>Combining all the files with Bash flipped this whole equation on its head. Instead of 200 API calls, I made exactly one. Instead of paying network latency for every file operation, the files were combined locally on my machine, where the filesystem could actually optimize that work. What had taken almost an hour of network round-trips for Gemini to access all the data was reduced to a couple hundred milliseconds of local file operations.</p>
<h2 id="the-expensive-lesson-in-algorithmic-thinking">The expensive lesson in algorithmic thinking</h2>
<p>This whole debacle reminded me why understanding the cost model of your tools matters just as much as understanding their capabilities. API pricing is designed around per-request and per-token charges, which naturally punishes fine-grained operations. It’s similar to how databases are optimized for bulk operations rather than processing individual rows - the overhead of each transaction quickly becomes the bottleneck.</p>
<p>My first approach had O(n) complexity for API calls, where n equals the number of files. The Bash solution reduced that to O(1) by batching everything locally first. That’s the difference between linear scaling and constant cost, and at $78, I felt every bit of that mathematical distinction.</p>
<p>There’s also something to be said about data locality here. My original method couldn’t take advantage of any local caching or filesystem optimizations. Every operation had to go over the network to an API server, get processed, and come back. The Bash approach kept everything local until the very end, letting my machine’s filesystem cache work its magic.</p>
<h2 id="the-real-cost-of-convenience">The real cost of convenience</h2>
<p>I’d fallen into the trap of thinking that because I <em>could</em> use an AI tool for everything, I <em>should</em> use it for everything. But there’s a difference between leveraging AI for tasks that require intelligence and using it as an expensive replacement for basic system utilities.</p>
<p>The irony is that I probably spent more mental energy managing and troubleshooting the AI approach than I would have just thinking through the problem for five minutes and reaching for the right tool from the start. Sometimes the most sophisticated solution is knowing when to employ a basic tool.</p>
<p>My little bit of laziness bought me a $78 lesson that boils down to this: always understand the economic model of your tools, especially when they’re priced per operation. The most elegant and cost-effective solution isn’t always the newest and most technically exciting one.</p>
 ]]></content:encoded></item><item><title>Create Better Code Documentation 10x Faster with AI</title><link>https://victoria.dev/posts/create-better-code-documentation-10x-faster-with-ai/</link><pubDate>Tue, 27 Aug 2024 13:55:47 +0000</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/create-better-code-documentation-10x-faster-with-ai/</guid><description>AI-powered documentation that developers actually use. Transform docs from chore to superpower with prompts for onboarding and incident response guides.</description><content:encoded><![CDATA[ <p>Documentation has always been one of those “we should do this” tasks that somehow never makes it to the top of the sprint. But what if creating comprehensive, useful documentation could be as straightforward as explaining your code to a colleague?</p>
<p>Conversational AI has changed the game entirely. Instead of starting with a blank page and trying to remember every detail a new team member might need, you can have AI help you think through the process systematically. The result isn’t just better docs—it’s documentation that actually serves your team’s needs as you grow and evolve.</p>
<p>Here’s how to use AI to build documentation that scales with your team and genuinely improves how you work together.</p>
<h2 id="documentation-that-welcomes-new-team-members">Documentation That Welcomes New Team Members</h2>
<p>The best part about using AI for documentation is that it naturally thinks from an outsider’s perspective. While you and your team already understand your system’s quirks and design decisions, AI starts fresh every time—much like a new hire would.</p>
<p>Most conversational AI tools allow you to upload code files or paste code snippets. You can then use prompts that help surface the knowledge your team takes for granted:</p>
<pre tabindex="0"><code>Write documentation for a new software engineer joining our team. Assume they’re experienced but know nothing about our specific domain, architecture decisions, or business logic. Include the “why” behind non-obvious technical choices and flag anything that might seem strange or unexpected to an outside developer.
</code></pre><p>This approach reveals the implicit knowledge that experienced team members forget to document—why certain patterns exist, what alternatives were considered, and where the potential gotchas are. It transforms documentation from a chore into a useful onboarding tool that actually reduces the time senior developers spend answering questions.</p>
<p>To create comprehensive documentation you can use immediately, provide the AI with additional context such as:</p>
<ul>
<li>What the application does and who uses it</li>
<li>Key architectural decisions and their reasoning</li>
<li>Setup and deployment processes</li>
<li>Integration points with other systems</li>
<li>Common troubleshooting scenarios</li>
</ul>
<p>Your role becomes reviewing and refining rather than writing from scratch—which is often the difference between documentation that gets done and documentation that gets skipped.</p>
<h2 id="operational-documentation-that-actually-helps">Operational Documentation That Actually Helps</h2>
<p>One of the most valuable types of documentation is also the most overlooked: information organized for when things go wrong. During incidents, you need answers fast, not comprehensive explanations.</p>
<p>AI excels at creating focused, actionable documentation because you can specify exactly what situation you’re optimizing for:</p>
<pre tabindex="0"><code>Create incident response documentation for this codebase. Focus on: 1) How to quickly identify what component is failing, 2) Common failure modes and their symptoms, 3) Step-by-step debugging workflows, 4) Who to contact for different types of issues. Write this as if the person reading it is stressed, tired, and needs answers in under 5 minutes.
</code></pre><p>This type of documentation serves a completely different purpose than your standard README or API docs. It’s designed for when your most knowledgeable developers aren’t available and someone needs to resolve an issue quickly.</p>
<p>The beauty of AI-generated operational docs is that they’re naturally structured for scannability rather than linear reading—exactly what you need during high-pressure situations.</p>
<h2 id="capturing-institutional-knowledge">Capturing Institutional Knowledge</h2>
<p>Here’s where AI really shines: helping you identify and document the knowledge that exists only in people’s heads. This institutional knowledge is often the difference between a change that takes 30 minutes and one that takes 3 hours of debugging.</p>
<p>You can surface these knowledge gaps by asking AI to analyze your code from a risk perspective:</p>
<pre tabindex="0"><code>Analyze this code and identify areas where domain knowledge or business context would be critical for modification. What would a developer need to know about our business, users, or regulatory requirements to safely change this code? What assumptions about data, timing, or external systems are embedded here?
</code></pre><p>For inline documentation, you can focus on the business logic and integration points that aren’t obvious from the code itself:</p>
<pre tabindex="0"><code>Add inline documentation to this code file without changing any of the code. Focus on documenting business logic, data assumptions, and integration points that wouldn’t be obvious to someone unfamiliar with our domain.
</code></pre><p>This process often improves the code itself—explaining your logic to AI sometimes reveals opportunities for clearer naming, better structure, or simplified approaches.</p>
<h2 id="making-documentation-a-team-superpower">Making Documentation a Team Superpower</h2>
<p>The real opportunity here isn’t just better individual documentation—it’s democratizing the ability to create good documentation across your entire team. Developers who previously avoided writing docs because they didn’t know where to start now have a collaborative partner to help structure their thoughts.</p>
<ol>
<li><strong>Start with high-impact documentation</strong>: Focus on onboarding guides and operational runbooks first. These provide immediate value and create positive momentum around documentation practices.</li>
<li><strong>Use AI to improve existing docs</strong>: You can ask AI to review and improve documentation you already have, suggesting missing information or better organization.</li>
<li><strong>Make it iterative</strong>: Documentation doesn’t need to be perfect on the first pass. Use AI to create initial drafts that you can refine based on team feedback and real usage patterns.</li>
<li><strong>Leverage different formats</strong>: AI can help create everything from README files to inline comments to architectural decision records, adapting the style and depth based on the audience and purpose.</li>
</ol>
<h2 id="practical-tips-for-better-results">Practical Tips for Better Results</h2>
<p>When working with AI to create documentation, providing context about the intended audience and use case dramatically improves the output. Explain not just what the code does, but who will be using the documentation and in what situations.</p>
<p>For complex codebases, you might get better results by working with smaller sections and then asking AI to help you organize everything into a coherent structure. Many AI tools can also provide downloadable files if you specify that in your prompt, which saves time on longer documents.</p>
<p>The goal isn’t to replace human judgment in documentation—it’s to remove the barriers that prevent good documentation from getting written in the first place. AI handles the initial structure and comprehensive coverage, while you focus on accuracy, team-specific context, and ensuring the documentation actually serves your workflows.</p>
<p>Good documentation transforms how teams work together. It reduces interruptions, accelerates onboarding, and creates resilience when key team members aren’t available. With AI handling the heavy lifting of initial creation, maintaining comprehensive documentation becomes achievable rather than aspirational.</p>
<p>Your future team members (and your future self during the next production incident) will definitely appreciate the investment.</p>
 ]]></content:encoded></item><item><title>Mastering Git for Small Teams</title><link>https://victoria.dev/posts/mastering-git-for-small-teams/</link><pubDate>Mon, 28 Feb 2022 06:37:48 -0600</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/mastering-git-for-small-teams/</guid><description>Simple Git workflow that prevents merge conflicts and speeds deployment. Small teams guide to branch management that actually works in practice.</description><content:encoded>
&lt;img src="https://victoria.dev/posts/mastering-git-for-small-teams/cover_hu2c7b131ca42731bc004a5709524962fe_15416_640x0_resize_box_3.png" width="640" height="235"/><![CDATA[ <p>I&rsquo;ve watched too many talented engineers spend their Friday afternoons untangling Git messes that could have been avoided with a simpler workflow. You know the scene: someone&rsquo;s trying to merge a three-week-old feature branch, there are conflicts in files that haven&rsquo;t been touched in months, and suddenly what should have been a five-minute deployment turns into a two-hour debugging session (with the whole team).</p>
<p>The solution isn&rsquo;t mastering Git&rsquo;s most obscure commands or memorizing every branching strategy ever invented. It&rsquo;s adopting a workflow that prevents the chaos in the first place. Here&rsquo;s the approach I use personally and recommend for small teams that want to ship code without the drama.</p>
<h2 id="a-protected-main-branch-no-exceptions">A Protected Main Branch (No Exceptions)</h2>
<p>First rule: no human should have direct push permissions to your <code>master</code> branch. Ever. I don&rsquo;t care if you&rsquo;re the CTO, the person who started the repository, or the only one who &ldquo;really understands the codebase.&rdquo; The moment you start making exceptions is the moment you start breaking things in production.</p>
<p>Your main branch should be your source of truth for what&rsquo;s currently deployed. When you create a release from the latest tag, that code should work. Period. If you&rsquo;re not deploying frequently and automatically, you&rsquo;re missing out on one of the biggest advantages of this approach.</p>
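<p>Concretely, cutting a release is just tagging the tip of <code>master</code> and pushing the tag for your CI to pick up. Here&rsquo;s a sketch you can run as-is&mdash;it builds a throwaway repo with a local &ldquo;origin,&rdquo; and the version number is illustrative:</p>
<pre tabindex="0"><code>set -e
# Throwaway demo repo with a local "origin" so these commands are runnable as-is
demo=$(mktemp -d)
git init -q --bare "$demo/origin.git"
git clone -q "$demo/origin.git" "$demo/app"
cd "$demo/app"
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "Initial commit"
git branch -M master
git push -q origin master
# Cut a release: tag the tip of master and push the tag
# (a CI job that triggers on tag pushes handles deployment from here)
git tag -a v1.4.0 -m "Release v1.4.0"
git push -q origin v1.4.0
</code></pre><p>Because <code>master</code> is protected, the tag always points at reviewed, merged work, which is exactly what makes deploying automatically from tags safe.</p>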
<h2 id="one-issue-one-branch-one-pr-keep-it-simple">One Issue, One Branch, One PR (Keep It Simple)</h2>
<p>Here&rsquo;s where most teams overcomplicate things. You&rsquo;ve got your issues tracked somewhere (and if you don&rsquo;t, we need to have a different conversation). Each issue represents a well-defined piece of work that can be merged and deployed without breaking anything. Maybe it&rsquo;s a new feature, a component update, or a bug fix. Doesn&rsquo;t matter—the process stays the same.</p>
<figure><img src="/posts/mastering-git-for-small-teams/cover.png"><figcaption>
      <h4>Author&#39;s illustration of issue branches and releases from master.</h4>
    </figcaption>
</figure>

<p>The key is keeping branches short-lived. For a small commercial team, we&rsquo;re talking days, not weeks. Open source projects with volunteer contributors might stretch this to a few weeks or months, but the principle remains: finish the work, get it reviewed, merge it, and move on.</p>
<p>Here&rsquo;s what this looks like in practice. Say you&rsquo;re working on <strong>(#28) Add user settings page</strong>:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-sh" data-lang="sh"><span style="display:flex;"><span><span style="color:#75715e"># Get all the latest work locally</span>
</span></span><span style="display:flex;"><span>git checkout master
</span></span><span style="display:flex;"><span>git pull
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Start your new branch from master</span>
</span></span><span style="display:flex;"><span>git checkout -b 28/add-settings-page
</span></span></code></pre></div><p>Work on the issue, and periodically merge <code>master</code> to stay current:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-sh" data-lang="sh"><span style="display:flex;"><span><span style="color:#75715e"># Commit to your issue branch</span>
</span></span><span style="display:flex;"><span>git commit ...
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Get the latest work on master</span>
</span></span><span style="display:flex;"><span>git checkout master
</span></span><span style="display:flex;"><span>git pull
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Return to your issue branch and merge in master</span>
</span></span><span style="display:flex;"><span>git checkout 28/add-settings-page
</span></span><span style="display:flex;"><span>git merge master
</span></span></code></pre></div><p>I know some of you are thinking &ldquo;but what about rebasing?&rdquo; Look, I like rebasing too. A clean, linear history is beautiful. But I&rsquo;ve seen too many developers get tangled up in interactive rebasing purgatory while accidentally dropping commits or creating conflicts that didn&rsquo;t need to exist. Merging might create a slightly messier history, but it&rsquo;s predictable and reversible. When you&rsquo;re optimizing for team productivity, predictable wins over pretty.</p>
<p>When your work is ready, open a PR against <code>master</code>. Tests run automatically. Your teammates review the code and leave helpful feedback (hopefully). Maybe you deploy a preview version to staging. Once everything looks good, merge it, close the issue, and delete the branch. (Yes. Delete it. It will be okay.)</p>
<h2 id="avoiding-the-common-disasters">Avoiding the Common Disasters</h2>
<p>Here are the patterns I see that turn this simple workflow into a nightmare:</p>
<p><strong>Branching off feature branches:</strong> This is how you end up with dependency chains that make merging feel like getting the Christmas lights out of storage. Someone starts working on feature B before feature A is merged, then feature C depends on both, and suddenly you need a whiteboard and a computer science degree to figure out the merge order. Just branch from the latest <code>master</code>. Always.</p>
<p><strong>Scope creep on branches:</strong> You&rsquo;re implementing the user settings page, but then you notice the button component could use some updates, and hey, while we&rsquo;re at it, let&rsquo;s refactor this entire authentication flow. Stop. That&rsquo;s how a three-day task becomes a three-week (or three-month) PR that nobody wants to review. Stick to the issue at hand.</p>
<p><strong>Keeping dead branches around:</strong> Your branch got merged last month, but it&rsquo;s still sitting there in the repository like a ghost haunting your Git history. Delete merged branches immediately. Future you will thank present you for not having to scroll through fifty old feature branches trying to find the one you&rsquo;re actually working on.</p>
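<p>Git can tell you which branches are already merged, so cleanup is a quick, safe habit rather than a chore. A runnable sketch (throwaway demo repo, illustrative branch names):</p>
<pre tabindex="0"><code>set -e
# Throwaway demo repo with one merged branch and one still in progress
demo=$(mktemp -d)
git init -q "$demo/app"
cd "$demo/app"
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "Initial commit"
git branch -M master
git checkout -q -b 28/add-settings-page
git commit -q --allow-empty -m "Add settings page"
git checkout -q master
git merge -q --no-ff 28/add-settings-page -m "Merge PR #28"
git checkout -q -b 29/fix-login-bug
git commit -q --allow-empty -m "WIP: login fix"
git checkout -q master
# List branches fully merged into master, then delete them
# (-d refuses to delete unmerged work, so 29/fix-login-bug survives)
git branch --merged master | grep -v master | xargs git branch -d
</code></pre><p>On a shared remote, the same idea applies with <code>git push origin --delete &lt;branch&gt;</code>, or you can turn on your host&rsquo;s option to delete head branches automatically after merge if it has one.</p>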
<h2 id="why-this-actually-works">Why This Actually Works</h2>
<p>This workflow works because it aligns with how small teams actually operate. You don&rsquo;t need the complexity of GitFlow when you&rsquo;ve got eight developers. You don&rsquo;t need long-lived release branches when you&rsquo;re deploying multiple times per week. You need a system that gets out of your way and lets you focus on building software.</p>
<p>The protection on <code>master</code> means your deployable code stays deployable. The one-issue-per-branch rule keeps PRs reviewable and prevents feature creep. The short-lived branches mean conflicts are small and manageable. The regular merging from <code>master</code> means you catch integration issues early when they&rsquo;re easy to fix.</p>
<p>Most importantly, <strong>this workflow is boring in the best possible way.</strong> Once your team gets the hang of it, Git becomes background infrastructure instead of a daily source of stress. Developers stop losing work to merge conflicts. Code reviews become focused discussions about functionality rather than archaeology expeditions through weeks of accumulated changes.</p>
<p>The best development workflows are the ones you don&rsquo;t have to think about. They handle the routine stuff automatically so you can focus on the interesting problems. This Git strategy does exactly that—it gets out of your way and lets you ship code with confidence.</p>
 ]]></content:encoded><enclosure url="https://victoria.dev/posts/mastering-git-for-small-teams/cover_hu2c7b131ca42731bc004a5709524962fe_15416_640x0_resize_box_3.png" length="15428" type="image/jpg"/></item><item><title>The Doorway Problem: Why Building in Isolation Fails</title><link>https://victoria.dev/posts/the-doorway-problem-why-building-in-isolation-fails/</link><pubDate>Mon, 09 Aug 2021 03:17:49 +0000</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/the-doorway-problem-why-building-in-isolation-fails/</guid><description>Why building in isolation guarantees failure. Engineering leaders' guide to iterative development that ships working features by staying connected to context.</description><content:encoded>
&lt;img src="https://victoria.dev/posts/the-doorway-problem-why-building-in-isolation-fails/cover_hub1af5a9c7fddee063e9fde9e2292d1fd_1203319_640x0_resize_q75_box.jpg" width="640" height="466"/><![CDATA[ <p>It’s a comedy classic—you’ve got a grand idea. Maybe you want to build a beautiful new dining room table. You spend hours researching woodcraft, learn about types of wood and varnish, explore different styles of construction, and now you have a solid plan. You buy the wood and other materials. You set up in the garage. For months you measure and saw, sand, hammer and paint. Finally, the effort pays off. The table is finished, and it’s fantastic.</p>
<p>In a frenzy of accomplishment you drag it into the house—only to discover that your dining room doorway is several inches too small. It doesn’t fit.</p>
<p>You might say this comedic example is unrealistic. Of course an experienced DIY-er would have measured the doorway first. But in real life, unforeseen problems rarely come solo. Once you finally get the table through the door (after removing the legs and reassembling it inside), you discover the floor’s uneven. The chairs you chose are a few inches too short. The ceiling light hangs too low. Each solution creates new problems you never anticipated.</p>
<p>I’ve seen this exact pattern play out dozens of times in software development, just with different furniture. Teams spend months building features in isolation, only to discover they don’t fit through the “doorways” of real user workflows, existing infrastructure, or business constraints. The solution isn’t better planning—it’s building in context from the start.</p>
<h2 id="the-planning-fallacy-or-why-were-all-terrible-at-this">The Planning Fallacy (Or: Why We’re All Terrible at This)</h2>
<p>Few software developers are accurate when it comes to time and cost estimates. This isn’t a failing of engineers specifically—it’s a deeply human tendency toward optimism when predicting our own future. First proposed by Daniel Kahneman and Amos Tversky in 1979, the planning fallacy explains why our estimates are consistently wrong.</p>
<p>In one study, students were asked to estimate how long they’d take to finish their senior theses. Their optimistic estimates averaged 27.4 days; their pessimistic estimates averaged 48.6 days. The actual average completion time? 55.5 days. Even the pessimistic estimates were too optimistic.</p>
<p>The researchers proposed two main reasons: first, people focus on their future plans rather than their past experiences; second, people don’t think past experiences matter much to the future anyway.</p>
<p>You can probably find examples of this in your own recent project history. Sure, that last “two-day feature” turned into a two-week affair, but that was only because the API documentation was wrong. Or maybe you didn’t finish that database migration when planned, but that was only because you discovered the staging environment was configured differently than production. You’re absolutely, positively, definitely certain that next time will be different.</p>
<blockquote>
<p>The reality is that we’re terrible at factoring in the unexpected daily demands of building software.</p>
</blockquote>
<p>Legacy code behaves mysteriously. Third-party services have undocumented quirks. Staging environments don’t match production. Users do things we never anticipated. Some measure of ignorance about these complications probably keeps us sane enough to start new projects.</p>
<p>But some measure of accurate planning is also necessary for success. The solution is working in context as much as possible, rather than trying to plan for every contingency.</p>
<h2 id="context-is-your-reality-check">Context Is Your Reality Check</h2>
<p>Let’s reconsider the dining room table story. Instead of spending months out in the garage, what would you do differently to build in context?</p>
<p>You might say, “Build it in the dining room!” While that would be ideal for context, it’s rarely possible in homes or software development. Instead, you do the next best thing: start building, and make frequent visits to context.</p>
<p>Having decided you want to build a table, one of the first questions is “How big will it be?” You’ll have requirements to fulfill (must seat six, must match other furniture, must hold the weight of your annual twenty-eight-course Christmas feast) that lead you to a rough decision.</p>
<p>With a size in mind, you build a mock-up. At this point, the specific materials, style, and color don’t matter—only the three dimensions. Once you have your mock table, you can make your first trip to the context where it will ultimately live. Attempting to carry your foam/wood/cardboard/balloon animal mock-up into the dining room will reveal issues you never considered, and possibly new opportunities as well. Perhaps, though you’d never have thought it, a modern abstractly-shaped dining table would better complement the space. You can take this into account in your next higher-fidelity iteration.</p>
<p>This translates directly to software development, minus the Christmas feast. You may recognize this as the MVP approach, but even here, putting the MVP in context is a step that’s frequently omitted.</p>
<p>I’ve seen teams spend months building a “simple” user authentication system, only to discover that their company’s SSO provider doesn’t support the OAuth flow they built around. Or teams that create beautiful interfaces that completely break when real user data (with its inconsistent formats and edge cases) gets loaded. Where will your product ultimately live? How will it be accessed? What does real data look like?</p>
<blockquote>
<p>Building your MVP and attempting to deploy it with realistic constraints will uncover these issues when they’re still manageable.</p>
</blockquote>
<p>Even when teams have prior experience with technologies, remember the planning fallacy. People naturally discount past evidence to the point of forgetting. It’s also unlikely that the same exact team is building the same exact product as last time. The language, technology, framework, and infrastructure have likely changed—as have the capabilities and bandwidth of the engineers. Frequent visits to context help you run into issues early, adapt to them, and create short feedback loops.</p>
<h2 id="go-for-good-enough-then-iterate">Go for Good Enough (Then Iterate)</h2>
<p>The specific meaning of putting something in context varies from project to project. It might mean deploying to cloud infrastructure, running on a new server, or testing whether your remote office can access the same resources you use. In all cases, keep those short iterations going. Don’t wait to get a version to 100% before finding out if it works in context. Ship it at 80%, see how close you got, then iterate.</p>
<p>This approach feels risky if you’re used to planning everything upfront. But the alternative—discovering fundamental incompatibilities after months of work—is much riskier. Better to learn that your table won’t fit through the door when it’s still made of cardboard than when it’s solid oak.</p>
<p>The best software gets built by teams that understand the difference between the theoretical problem they’re solving and the real environment where their solution needs to work. Context is messy, unpredictable, and full of constraints you never anticipated. That’s exactly why you need to visit it early and often.</p>
<p>Your garage is perfect for focused work, but your dining room is where people actually eat dinner. Build for where your software will really live, not where it’s convenient to develop it.</p>
 ]]></content:encoded><enclosure url="https://victoria.dev/posts/the-doorway-problem-why-building-in-isolation-fails/cover_hub1af5a9c7fddee063e9fde9e2292d1fd_1203319_640x0_resize_q75_box.jpg" length="39539" type="image/jpg"/></item><item><title>How to Think Like a Hacker (And Why Your Team Should Too)</title><link>https://victoria.dev/posts/how-to-think-like-a-hacker-and-why-your-team-should-too/</link><pubDate>Tue, 27 Jul 2021 04:26:26 -0400</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/how-to-think-like-a-hacker-and-why-your-team-should-too/</guid><description>Build security-conscious teams by thinking like hackers. Engineering leaders' guide to systematic skepticism that prevents vulnerabilities before they exist.</description><content:encoded><![CDATA[ <p>The most effective security-minded developers I know share one trait: they’re professionally suspicious of their own assumptions. They look at a form field and wonder what happens if someone tries to enter something unexpected. They design an API endpoint and ask how someone might misuse it. They have a systematic curiosity about how systems behave versus how they’re supposed to behave.</p>
<p>I saw this firsthand while working with a team where questioning assumptions became a regular part of our code review process. We’d look at every new feature and ask “How might someone abuse this?” I developed a particular talent for finding injection attacks on forms—apparently I have a knack for thinking of creative ways to sneak SQL queries into text fields. After the third or fourth time I caught these vulnerabilities during review, we added validation middleware to eliminate that entire class of problems.</p>
<p>But the real breakthrough was watching how the team’s thinking evolved. Once developers got used to questioning their assumptions about user behavior, they started writing more robust solutions from the start. Security thinking became a starting point rather than something bolted on afterward.</p>
<h2 id="designing-for-reality-not-just-intent">Designing for Reality, Not Just Intent</h2>
<p>One of the most effective practices we developed was specifying both the “happy path” and the “unhappy path” during our design process. The happy path was straightforward—everything happens in the way and sequence we intended. But the unhappy paths were where we learned the most: what happens when steps occur out of order? When data is missing or provided in an unexpected format? When external systems fail at exactly the wrong moment?</p>
<p>This dual-path thinking transformed how we approached every feature. Instead of just asking “How should this work?” we started asking “How will this actually be used?” and “What should happen when reality doesn’t match our expectations?” It sounds pessimistic, but it actually made development more fun. It caused us to think about our application from all angles rather than just implementing obvious functionality.</p>
<p>The unhappy path exercise revealed assumptions we didn’t even know we were making. We’d design a user registration flow assuming people would fill out forms completely and submit them once. Then we’d consider reality: What if someone submits the form multiple times? What if they navigate away and come back? What if they fill out the form, wait an hour, then submit it after their session expires?</p>
<p>Each unhappy path scenario led to better design decisions. Race condition handling. Idempotent endpoints. Graceful degradation when external services are unavailable. The code that protected against malicious users also handled legitimate users experiencing network glitches or browser crashes.</p>
<h2 id="systematic-questioning-as-a-superpower">Systematic Questioning as a Superpower</h2>
<p>There’s a particular mindset that effective security thinking requires—call it systematic skepticism. It’s the ability to look at any system and ask “What assumptions is this making?” and “What happens when those assumptions are wrong?” This kind of thinking makes your software more robust.</p>
<p>Sometimes this means channeling your inner four-year-old—pushing every button, ignoring all instructions, using things in ways their makers never intended. But rather than random exploration, you develop structured ways of challenging system boundaries, finding edge cases, and being creative about the ways that software can be used beyond its intended purpose.</p>
<p>This systematic questioning makes you better at every aspect of development. When you’re used to thinking about edge cases and unexpected inputs, you write more defensive code naturally. When you habitually consider what could go wrong, you build better (more useful) error handling. When you assume users will do unexpected things, you design more intuitive interfaces.</p>
<p>I’ve noticed that developers who adopt this questioning mindset become significantly better at debugging production issues too. Instead of being surprised when something breaks, they’re already thinking “What unexpected condition triggered this?” They approach problems with methodical curiosity rather than frustrated confusion.</p>
<h2 id="building-a-culture-of-constructive-skepticism">Building a Culture of Constructive Skepticism</h2>
<p>The key to building security-conscious teams isn’t teaching people to be afraid of attackers—it’s helping them develop genuine curiosity about system behavior under stress. When questioning assumptions becomes intellectually interesting rather than anxiety-inducing, your team will start doing it automatically.</p>
<p>Code reviews become more engaging when everyone is looking for unspoken assumptions about user behavior. Feature planning gets more thorough when “What are the unhappy paths?” is a standard question alongside “What should it do?” Architecture discussions become more robust when you’re considering not just how systems should work together, but how they should behave when dependencies are slow, unavailable, or returning unexpected data.</p>
<p>The practical implementation is surprisingly straightforward. During development, encourage your team to spend time being deliberately unreasonable with whatever they’re building. During design reviews, spend equal time on happy and unhappy paths. During testing, encourage your team to think like someone who’s never seen your application before and doesn’t understand the rules.</p>
<p>What emerges is a team that builds more resilient systems without extra effort. When you’re accustomed to thinking about failure modes, you naturally design systems that handle them gracefully. When you expect users to ignore instructions, you build interfaces that guide them toward success even when they’re not following the intended flow.</p>
<h2 id="security-as-engineering-excellence">Security as Engineering Excellence</h2>
<p>What I’ve learned is that security thinking is really just rigorous engineering thinking with a creative twist. It’s the same mental process you use when debugging complex issues or designing APIs that won’t confuse future developers. You’re considering multiple perspectives, anticipating edge cases, and designing for resilience rather than just functionality.</p>
<p>The most successful security-conscious teams I’ve worked with don’t have dedicated security experts who review everything after the fact—they have developers who think about security implications as naturally as they think about performance or usability. This happens through cultural reinforcement and consistent practice, not through mandates or compliance checklists.</p>
<p>The payoff extends far beyond security. Teams that think about unhappy paths build more reliable software. Developers who consider malicious inputs write better input validation for legitimate users. Engineers who design for system failures create more robust integrations. The skills reinforce each other in ways that make everyone more effective.</p>
<p>Most importantly, this approach makes engineering work more intellectually satisfying. There’s something deeply rewarding about anticipating problems and solving them before they happen. When your team develops the habit of systematically questioning their assumptions, they’ll approach every problem with the kind of methodical curiosity that leads to truly robust solutions.</p>
<p>You can help your team become professionally curious about system boundaries, failure modes, and the gap between how software is supposed to work and how it actually gets used. Once they develop that mindset, they’ll write more secure code naturally, because they’ll view software the same way attackers do—as systems that can fail when someone does something unexpected.</p>
 ]]></content:encoded></item><item><title>Do I Raise or Return Errors in Python?</title><link>https://victoria.dev/posts/do-i-raise-or-return-errors-in-python/</link><pubDate>Tue, 09 Feb 2021 05:34:48 -0500</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/do-i-raise-or-return-errors-in-python/</guid><description>Python error handling: When to raise exceptions vs return errors. Principal engineer's guide to building debuggable applications that fail predictably.</description><content:encoded><![CDATA[ <p>I’ve been writing Python for nearly a decade, and this question still comes up in code reviews more often than you’d think. Should I raise an exception or return an error value? It seems simple on the surface, but the choice ripples through your entire codebase in ways that can make or break your team’s productivity six months down the line.</p>
<h2 id="the-real-question-behind-the-question">The Real Question Behind the Question</h2>
<p>When your function discovers something’s wrong, you’re not only choosing between <code>raise</code> and <code>return</code>. You’re making a decision about how your entire application will handle failure, how readable your code will be for the next person, and how many 3 AM prod debugging sessions you’re setting up for your future self and team.</p>
<p>Here&rsquo;s how I think about this choice. The right one for your application affects everything from your error logs to your team’s velocity.</p>
<h2 id="when-i-reach-for-exceptions">When I Reach for Exceptions</h2>
<p>I raise exceptions when something genuinely unexpected happens—when the assumptions my function was built on just got violated. If I’m writing a function to parse a config file and the file doesn’t exist, that’s exceptional. The caller expected a valid config, and I can’t deliver on that contract.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">load_config</span>(filepath):
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">if</span> <span style="color:#f92672">not</span> os<span style="color:#f92672">.</span>path<span style="color:#f92672">.</span>exists(filepath):
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">raise</span> <span style="color:#a6e22e">FileNotFoundError</span>(<span style="color:#e6db74">f</span><span style="color:#e6db74">&#34;Config file not found: </span><span style="color:#e6db74">{</span>filepath<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span>)
</span></span><span style="display:flex;"><span>    
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">try</span>:
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">with</span> open(filepath, <span style="color:#e6db74">&#39;r&#39;</span>) <span style="color:#66d9ef">as</span> f:
</span></span><span style="display:flex;"><span>            <span style="color:#66d9ef">return</span> json<span style="color:#f92672">.</span>load(f)
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">except</span> json<span style="color:#f92672">.</span>JSONDecodeError <span style="color:#66d9ef">as</span> e:
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">raise</span> ConfigurationError(<span style="color:#e6db74">f</span><span style="color:#e6db74">&#34;Invalid JSON in config file: </span><span style="color:#e6db74">{</span>e<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span>)
</span></span></code></pre></div><p>Here’s why exceptions work well here: the calling code doesn’t need to check every single operation. If any step fails, the exception bubbles up to whoever can actually handle it. Your main application logic stays clean, and error handling happens at the right level.</p>
<p>The business impact here is huge. When your core logic isn’t cluttered with error checking, you can focus on the actual problem you’re solving. Your functions do one thing well, and your error handling is centralized where it belongs.</p>
<h2 id="when-i-return-error-values">When I Return Error Values</h2>
<p>But sometimes the “error” isn’t really an error—it’s just one of several possible outcomes. When I’m building a user search function, finding zero results isn’t exceptional. It’s totally normal behavior that the caller needs to handle anyway.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">search_users</span>(query):
</span></span><span style="display:flex;"><span>    results <span style="color:#f92672">=</span> database<span style="color:#f92672">.</span>search(query)
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">if</span> <span style="color:#f92672">not</span> results:
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">return</span> []  <span style="color:#75715e"># Empty list, not an exception</span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">return</span> results
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Calling code feels natural</span>
</span></span><span style="display:flex;"><span>users <span style="color:#f92672">=</span> search_users(<span style="color:#e6db74">&#34;john&#34;</span>)
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">if</span> users:
</span></span><span style="display:flex;"><span>    display_users(users)
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">else</span>:
</span></span><span style="display:flex;"><span>    show_no_results_message()
</span></span></code></pre></div><p>This approach shines when you have multiple valid outcomes and the caller needs to make decisions based on which one occurred. It also works well for performance-critical code where exception handling overhead matters.</p>
<h2 id="the-type-safety-angle">The Type Safety Angle</h2>
<p>Here’s something I’ve started caring about more as codebases grow: how well does your choice play with static type checking? Modern Python with type hints changes the game significantly.</p>
<p>With exceptions, your function signature stays clean:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">parse_user_id</span>(user_input: str) <span style="color:#f92672">-&gt;</span> int:
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">try</span>:
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">return</span> int(user_input)
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">except</span> <span style="color:#a6e22e">ValueError</span>:
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">raise</span> InvalidUserIdError(<span style="color:#e6db74">&#34;User ID must be a number&#34;</span>)
</span></span></code></pre></div><p>But with return values, you’re often dealing with unions:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">parse_user_id</span>(user_input: str) <span style="color:#f92672">-&gt;</span> int <span style="color:#f92672">|</span> <span style="color:#66d9ef">None</span>:
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">try</span>:
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">return</span> int(user_input)
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">except</span> <span style="color:#a6e22e">ValueError</span>:
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">return</span> <span style="color:#66d9ef">None</span>
</span></span></code></pre></div><p>That <code>| None</code> propagates through your entire codebase. Every function that calls this one now has to handle the None case, and mypy will remind you of that fact. Sometimes that’s exactly what you want—explicit error handling at every level. Other times, it creates unnecessary complexity.</p>
<h2 id="the-performance-reality-check">The Performance Reality Check</h2>
<p>What about performance? Yes, exceptions are slower than returning values, but context matters enormously here.</p>
<p>In tight loops processing thousands of items per second, that overhead can add up. Profiling a hot path after switching from exceptions to return values might show a 20-30% improvement in that path alone. But in typical web application code, where you’re dealing with database calls and network requests, exception overhead is noise compared to everything else.</p>
<p>The more important performance consideration is often developer performance. How quickly can someone understand your code? How easily can they modify it without introducing bugs? I’ve seen teams spend weeks debugging subtle issues that wouldn’t have existed with clearer (documented!) error handling patterns.</p>
<h2 id="patterns-that-actually-work-in-production">Patterns That Actually Work in Production</h2>
<p>After working on systems that handle millions of requests, here are the patterns I keep coming back to:</p>
<p><strong>For library code</strong>: Raise exceptions. Libraries don’t know how their callers want to handle errors, so push that decision up the stack. Custom exception types help callers decide what to catch and what to let bubble up.</p>
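<p>For example, a hypothetical payments library might expose a small exception hierarchy so callers can catch as broadly or narrowly as they need:</p>

```python
class PaymentError(Exception):
    """Base class: callers can catch every payment failure in one clause."""

class CardDeclinedError(PaymentError):
    """The processor rejected the card; the user might retry with another."""

class ProcessorUnavailableError(PaymentError):
    """The upstream service is down; usually worth retrying later."""

def charge(amount_cents: int) -> str:
    if amount_cents <= 0:
        raise ValueError("amount_cents must be a positive integer")
    # A real implementation would call the payment processor here and
    # translate its responses into the exception types above.
    return "txn_0001"
```

<p>A caller that only cares about “did the payment fail?” catches <code>PaymentError</code>; one that wants to retry catches <code>ProcessorUnavailableError</code> specifically.</p>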
<p><strong>For user input validation</strong>: Usually return structured error information. Users make mistakes constantly, and that’s normal behavior, not exceptional.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">validate_email</span>(email: str) <span style="color:#f92672">-&gt;</span> ValidationResult:
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">if</span> <span style="color:#f92672">not</span> email:
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">return</span> ValidationResult(valid<span style="color:#f92672">=</span><span style="color:#66d9ef">False</span>, error<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Email is required&#34;</span>)
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">if</span> <span style="color:#e6db74">&#34;@&#34;</span> <span style="color:#f92672">not</span> <span style="color:#f92672">in</span> email:
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">return</span> ValidationResult(valid<span style="color:#f92672">=</span><span style="color:#66d9ef">False</span>, error<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Invalid email format&#34;</span>)
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">return</span> ValidationResult(valid<span style="color:#f92672">=</span><span style="color:#66d9ef">True</span>)
</span></span></code></pre></div><p><strong>For external service calls</strong>: This is tricky. Network timeouts and service errors happen, but they’re not exactly exceptional in a distributed system. I often use exceptions for the truly unexpected (DNS resolution failures) and return values for the predictable failures (rate limiting, temporary service unavailability).</p>
<h2 id="the-3am-system-down-test">The 3AM System Down Test</h2>
<p>Here’s my ultimate test, a thought experiment: if something broke in production and you had to debug it at 3 AM, bleary-eyed and chugging coffee, which approach helps you understand what went wrong faster?</p>
<p>Good exceptions with detailed error messages and proper stack traces are incredible for this. You can see exactly where things went wrong and why. But exceptions that get swallowed or re-raised without context are debugging nightmares.</p>
<p>Return values with proper logging can also be great for debugging, especially when you need to understand the sequence of events that led to a problem. But they require more discipline—you need to actually check and log those return values.</p>
<h2 id="making-the-choice">Making the Choice</h2>
<p>When I’m looking at a specific function, I ask myself:</p>
<ul>
<li>Is this condition truly unexpected given the function’s contract?</li>
<li>Do callers need to make immediate decisions based on this failure?</li>
<li>How will this pattern scale across my team and codebase?</li>
<li>What will debugging look like when this inevitably breaks?</li>
</ul>
<p>There’s no universal right answer, but there are patterns that work well for different situations. The key is being intentional about your choice and consistent within your codebase.</p>
<p>Your error handling strategy affects how quickly new team members can contribute, how easy it is to track down production issues, and how confident you can be when making changes. Choose patterns that serve your team’s long-term productivity, not just today’s immediate problem.</p>
<p>The best choice for error handling is the one that helps you sleep better at night, knowing that when something goes wrong, you’ll be able to figure out what happened and fix it quickly.</p>
<p>If you found some value in this post, there&rsquo;s more! I write about high-output development processes and building maintainable systems in the AI age. You can get my posts in your inbox by subscribing below.</p>
<a href="https://medium.com/@victoriadotdev/subscribe" target="_blank" rel="noopener noreferrer" class="subscribe-button">
    <span class="subscribe-icon">📧</span>
    <span class="subscribe-text">Subscribe</span>
</a>

 ]]></content:encoded></item><item><title>What Tech Leaders Do Before Going on Vacation</title><link>https://victoria.dev/posts/what-tech-leaders-do-before-going-on-vacation/</link><pubDate>Mon, 01 Feb 2021 04:02:54 -0600</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/what-tech-leaders-do-before-going-on-vacation/</guid><description>Engineering leaders' vacation prep checklist: Build team autonomy before you go. Turn time off into opportunities for delegation and team development.</description><content:encoded>
&lt;img src="https://victoria.dev/posts/what-tech-leaders-do-before-going-on-vacation/cover_hu31bd96803ecae7e728e2cc09de17142f_355394_640x0_resize_box_3.png" width="640" height="320"/><![CDATA[ <p>Early in my career, I worked on a team where the CEO decided to take two weeks off without much preparation. By the middle of the first week, people had “run out” of things to do. Not because there wasn’t work—there was plenty—but because no one knew what they were supposed to prioritize, who could make decisions, or how to move forward on anything that required input from leadership.</p>
<p>We spent those two weeks in a weird organizational limbo, working on whatever seemed important while bigger decisions piled up. Upon returning, the CEO was frustrated that so little had been accomplished, and the team was frustrated that they’d been left without clear direction. It was a perfect example of how taking time off as a leader requires completely different preparation than taking time off as an individual contributor.</p>
<p>The reality is that leadership vacation planning isn’t about finishing your own work—it’s about ensuring your team can function effectively without you. Done well, it’s actually a powerful way to develop your team’s autonomy and decision-making capabilities. Done poorly, it creates exactly the kind of organizational dysfunction I witnessed firsthand.</p>
<h2 id="the-information-bottleneck-problem">The Information Bottleneck Problem</h2>
<p>Here’s what most leaders don’t realize: you’re probably a bigger bottleneck than you think. Not because you’re micromanaging, but because critical context lives in your head that your team needs access to in order to make good decisions. The challenge isn’t documenting everything you know—that’s impossible. The challenge is identifying what your team will actually need while you’re gone.</p>
<p>I’ve learned to approach this systematically. Instead of trying to dump all my knowledge, I focus on the specific work my team will be doing during my absence. What decisions might come up that I can provide context for? What blockers could they encounter and who could help in my absence? Who will take the lead on making decisions to help keep projects moving forward?</p>
<p>This exercise often reveals gaps in team communication that extend beyond vacation planning.</p>
<blockquote>
<p>If people don’t know how to prioritize work when you’re gone for a week, they probably struggle with prioritization day-to-day more than you realize.</p>
</blockquote>
<p>Vacation prep becomes a forcing function for better ongoing delegation.</p>
<p>The practical approach is straightforward: review your priority list and write down the context and contacts that your team will need to get work done while you&rsquo;re away. But the deeper value is discovering where your team needs more autonomy and decision-making authority in general.</p>
<h2 id="decision-making-without-you">Decision-Making Without You</h2>
<p>The most common mistake I see leaders make is trying to pre-decide everything that might come up while they’re away. This is both impossible and counterproductive. Instead, the goal should be empowering your team to make good decisions using the same framework you would use.</p>
<p>Before any significant time off, I have explicit conversations with my team about what kinds of decisions they can make independently and what should wait for my return. More importantly, I explain the reasoning behind those boundaries so they understand when to escalate and when to proceed.</p>
<p>More than just being on the same page, these boundaries help to build your team’s confidence in their own judgment.</p>
<blockquote>
<p>When people understand your decision-making criteria and feel trusted to apply them, they’ll make better choices whether you’re away on vacation or away in a meeting.</p>
</blockquote>
<p>The key is being specific about decision authority rather than vague about “checking with me first.” Instead of saying “let me know if anything important comes up,” try “you can approve any engineering changes that don’t affect the database schema, but flag anything that requires downtime for discussion when I’m back.”</p>
<h2 id="creating-clarity-not-chaos">Creating Clarity, Not Chaos</h2>
<p>The difference between teams that thrive when their leader is away and teams that stagnate comes down to clarity of expectations. Your team needs to know not just what to work on, but how to make trade-offs when priorities conflict, who to go to for different types of help, and what success looks like in your absence.</p>
<p>I’ve found that internal communication about your time off is just as important as external auto-responders.</p>
<blockquote>
<p>A quick message to your team explaining where to find information, who’s covering what responsibilities, and how to handle common scenarios prevents a lot of confusion and hesitation.</p>
</blockquote>
<p>But the real test is whether your team feels empowered to act or feels like they’re in caretaker mode until you return. The goal is maintaining momentum, not just maintaining the status quo. This requires trusting your team with meaningful work and giving them the context they need to handle unexpected situations.</p>
<h2 id="the-leadership-development-opportunity">The Leadership Development Opportunity</h2>
<p>Your vacation is actually a development opportunity for your team if you set it up intentionally.</p>
<blockquote>
<p>When you step back temporarily, you create space for other people to step up, make decisions, and take on leadership responsibilities.</p>
</blockquote>
<p>Instead of just hoping things will be fine while you’re gone, use your absence as a chance to test and develop your team’s capabilities. Give someone the opportunity to run meetings, handle stakeholder communication, or make technical decisions that they’re ready for but haven’t had the chance to practice.</p>
<p>The preparation for this kind of delegation is more involved than just finishing your own work, but the payoff is enormous. You return to a team that’s more capable and confident, and you’ve identified who’s ready for additional responsibilities. Plus, you’ve stress-tested your team’s ability to function without you, which is valuable information for organizational resilience.</p>
<h2 id="making-time-off-actually-restful">Making Time Off Actually Restful</h2>
<p>The irony of leadership is that taking time off can be stressful if you’re worried about what’s happening while you’re away. The best vacation preparation eliminates that anxiety by ensuring your team has everything they need to succeed without you.</p>
<p>This means being honest about your availability expectations and sticking to them. If you tell your team you’ll be completely offline, don’t check Slack “just once” and end up getting pulled into work discussions. If you’re going to check in periodically, be specific about when and how, so people know what to expect.</p>
<p>The teams that handle leadership time off best are the ones where this kind of preparation is routine, not exceptional. When delegation, clear communication, and decision-making authority are part of your regular management practice, preparing for vacation becomes straightforward rather than stressful.</p>
<p>Your time off should leave your team more capable, not less. When you return from vacation to find that your team tackled challenges, made good decisions, and maintained momentum without you, you’ll know you’ve built something sustainable. That’s not just good vacation planning—it’s good leadership.</p>
 ]]></content:encoded><enclosure url="https://victoria.dev/posts/what-tech-leaders-do-before-going-on-vacation/cover_hu31bd96803ecae7e728e2cc09de17142f_355394_640x0_resize_box_3.png" length="47660" type="image/jpg"/></item><item><title>How to Choose a Great Tech Hire</title><link>https://victoria.dev/posts/how-to-choose-a-great-tech-hire/</link><pubDate>Tue, 12 Jan 2021 05:50:53 -0500</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/how-to-choose-a-great-tech-hire/</guid><description>Hire engineers who thrive: Look beyond algorithms to find builders. Engineering leaders' guide to identifying talent that stays and delivers real value.</description><content:encoded><![CDATA[ <p>I’ve seen too many hiring processes that focus on the wrong things. Teams spend hours on algorithm puzzles and whiteboard exercises, then hire someone who can’t write readable code or collaborate effectively with colleagues. Six months later, they’re dealing with either a performance issue or an unexpected resignation from someone who never felt like they fit the team.</p>
<p>These candidates don&rsquo;t lack technical ability. The problem is that traditional hiring processes don’t predict who will actually succeed and stay on your team. After years of hiring engineers and watching some thrive while others struggle, I’ve learned that the best predictors of long-term success are often the things most interviews completely miss.</p>
<p>Here’s what I actually look for when hiring engineers, and why these signals matter more than most technical assessments.</p>
<h2 id="look-for-builders-not-just-coders">Look for Builders, Not Just Coders</h2>
<p>The question that matters most isn’t “Can they solve algorithm problems?” It’s “Can they build things that solve problems?” There’s a fundamental difference between someone who can write code and someone who can deliver working software that serves a purpose.</p>
<p>When I review candidates, I’m looking for evidence that they’ve built complete projects from start to finish. Not just coding exercises or tutorial follow-alongs, but actual working software that solves real problems. This could be command-line utilities, web applications, automation tools, or contributions to open source projects—the complexity matters less than the completeness.</p>
<p>What I’m really evaluating is their ability to navigate the full software development lifecycle. Can they scope a problem, make technical decisions, handle edge cases, write documentation, and ship something that actually works? These are the skills that translate directly to success on your team, regardless of whether they learned them in a computer science program or taught themselves on weekends.</p>
<p>The best candidates can walk you through their projects and explain not just how they built something, but why they made specific technical choices. They understand the trade-offs they made and can articulate what they learned from the experience. This kind of thinking is what distinguishes engineers who will contribute meaningfully to your team from those who will struggle to move beyond assigned tasks.</p>
<h2 id="evaluate-systems-thinking-over-syntax-knowledge">Evaluate Systems Thinking Over Syntax Knowledge</h2>
<p>Most technical interviews focus on whether someone knows specific syntax or can solve isolated problems. But the engineers who succeed on teams are the ones who understand how their code fits into larger systems and affects other people’s work.</p>
<p>I look for candidates who demonstrate awareness of follow-on effects. When they describe a project, do they consider performance implications? Do they think about maintainability? Can they explain how their technical decisions might impact other developers or users?</p>
<p>Understanding concepts like mutability, thread safety, and code reusability shows technical competence as well as an ability to think systematically about software as something that exists in a larger context. Engineers who grasp these concepts naturally write code that’s easier to debug, extend, and maintain. They consider the total cost of ownership, not just the immediate implementation.</p>
<p>During interviews, I ask candidates to explain technical trade-offs they’ve made in their projects. The specific technologies matter less than their ability to reason about complexity, performance, and maintainability. Engineers who think this way will continue learning and adapting as your company&rsquo;s tech stack evolves.</p>
<h2 id="assess-communication-skills-through-real-examples">Assess Communication Skills Through Real Examples</h2>
<p>Communication skills aren’t just a “nice to have” for engineers—they’re essential for team effectiveness. But most hiring processes assess communication through artificial interview scenarios rather than looking at how candidates actually communicate about technical topics.</p>
<p>I spend significant time reviewing candidates’ written communication. How do they explain their projects in README files? How do they participate in open source discussions? Can they write clear, helpful documentation? These examples reveal how they’ll communicate with your team when explaining technical decisions, documenting systems, or participating in code reviews.</p>
<p>Pay attention to how candidates describe complex technical concepts during interviews. Can they adjust their explanation based on their audience’s technical background? Do they provide context and examples? Can they acknowledge when they don’t know something without becoming defensive?</p>
<p>The engineers who succeed long-term are those who can collaborate effectively across different skill levels and backgrounds. They can explain technical concepts to non-technical stakeholders, provide helpful code review feedback, and contribute to architectural discussions. These collaborative skills are often better predictors of success than pure technical ability.</p>
<h2 id="identify-team-players-through-contribution-patterns">Identify Team Players Through Contribution Patterns</h2>
<p>The best predictor of how someone will behave on your team is how they’ve behaved on other teams. Rather than asking hypothetical questions about teamwork, look at concrete examples of how candidates have collaborated with others.</p>
<p>Open source contributions provide excellent insight into someone’s collaborative style. How do they handle feedback on their code? Do they contribute thoughtfully to discussions? Can they work within existing conventions and standards? Do they help other contributors or just focus on their own work?</p>
<p>For candidates without extensive open source history, look at how they talk about past team experiences. Do they credit others for successes? Can they describe situations where they helped colleagues or learned from feedback? How do they handle disagreement or conflict?</p>
<p>I’m particularly interested in candidates who show evidence of helping others grow. Engineers who mentor junior developers, contribute to team documentation, or improve development processes tend to have a positive impact that extends far beyond their individual contributions.</p>
<h2 id="evaluate-learning-ability-over-current-knowledge">Evaluate Learning Ability Over Current Knowledge</h2>
<p>Technology changes rapidly, which means the specific skills someone has today matter less than their ability to acquire new skills as needed. The engineers who thrive long-term are those who stay curious and adapt effectively to new challenges.</p>
<p>During interviews, I ask candidates about times they had to learn something completely new for a project. How did they approach unfamiliar technologies? What resources did they use? How did they validate their understanding? The process they describe reveals more about their potential than any specific technology they currently know.</p>
<p>I also look for evidence of intellectual humility. Can candidates acknowledge the limits of their knowledge? Do they ask thoughtful questions? Are they excited about learning from more experienced team members? Engineers who combine confidence in their abilities with openness to learning tend to grow quickly and integrate well with existing teams.</p>
<h2 id="what-this-means-for-your-hiring-process">What This Means for Your Hiring Process</h2>
<p>Identifying these qualities requires a different approach than traditional technical interviews. Instead of algorithm problems, focus on discussing real projects and technical decisions. Instead of whiteboard coding, review actual code they’ve written and ask them to explain their thinking.</p>
<p>Spend time on behavioral questions that reveal collaborative patterns and learning ability. Make time for informal conversation about what kind of work environment they thrive in and what they’re excited to learn next.</p>
<p>Most importantly, involve your team in the hiring process. The people who will work directly with your new hire are often better at assessing team fit than individual interviewers making isolated decisions.</p>
<p>Remember that hiring is ultimately about predicting future success, not just evaluating current abilities. The candidates who can build complete projects, think systematically about technical decisions, communicate effectively, and continue learning will contribute more to your team’s long-term success than those who simply perform well on coding tests.</p>
<p>Your perfect candidate isn&rsquo;t necessarily the most technically skilled or the most knowledgeable about your domain. It&rsquo;s the person who will grow with your team and contribute to the kind of collaborative, effective engineering culture that retains great people and delivers great software.</p>
 ]]></content:encoded></item><item><title>Do One Thing: Mastering Prioritization for High-Performing Teams</title><link>https://victoria.dev/posts/do-one-thing-mastering-prioritization-for-high-performing-teams/</link><pubDate>Mon, 07 Dec 2020 15:01:25 -0600</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/do-one-thing-mastering-prioritization-for-high-performing-teams/</guid><description>Transform team velocity with single-priority focus. How engineering leaders build autonomous teams that ship faster by doing one thing at a time.</description><content:encoded>
&lt;img src="https://victoria.dev/posts/do-one-thing-mastering-prioritization-for-high-performing-teams/add-resources_hu8e96a7ddde968deaecbc2e8b8444b222_233270_640x0_resize_box_3.png" width="640" height="351"/><![CDATA[ <p>In the engineering teams I lead, “priority” has no plural form. This drives some people slightly crazy, especially those who like to hedge their bets with phrases like “top priorities” or “critical priorities.” But I’ve learned that the moment you allow multiple top priorities, you’ve essentially created zero priorities.</p>
<p>I discovered this the hard way while working with a team that was constantly context-switching between “urgent” projects. Everyone was busy, morale was decent, but we weren’t actually shipping much of value. During one particularly frustrating week, I counted seventeen different tasks that had been labeled as “high priority” by various stakeholders. Our standups felt like disaster reports, and I realized we’d created a system where being busy had become more important than being effective.</p>
<p>The solution turned out to be surprisingly simple, though not easy to implement: put everything into a single, ordered list where only one thing can be most important at any given time.</p>
<h2 id="the-radical-transparency-of-a-central-list">The Radical Transparency of a Central List</h2>
<p>Most teams I’ve encountered operate like a collection of individual to-do lists with some coordination meetings sprinkled on top. Engineering works on technical debt, product pushes for new features, leadership wants infrastructure improvements, and everyone optimizes their own piece of the puzzle. The result is a lot of activity that doesn’t add up to meaningful progress.</p>
<p><img src="prioritize.png" alt="A cartoon of a stick figure swinging on a rope to plant a post-it note"></p>
<p>A single, centralized, prioritized list changes the entire dynamic. Everyone can see what’s actually being worked on, what’s coming next, and most importantly, what’s not getting done and why. This visibility creates natural conversations about trade-offs that simply don’t happen when work is siloed.</p>
<p>I’ve watched teams discover they were working on competing solutions to the same problem, simply because no one had a complete view of active work. Others realized they were delaying important projects because someone assumed “someone else” was handling the dependency. When everything is visible and ordered, these coordination problems become obvious and fixable.</p>
<p>The transparency also creates a different kind of accountability. When priorities are public and explicit, it becomes much harder to justify working on pet projects or avoiding difficult tasks. The list becomes a shared source of truth that guides decisions rather than each person interpreting priorities through their own lens.</p>
<h2 id="autonomy-within-structure">Autonomy Within Structure</h2>
<p>One concern I hear frequently is that a single priority list will turn people into order-takers rather than creative problem-solvers. In practice, I’ve found exactly the opposite happens when you implement it correctly.</p>
<p><img src="task-selection.png" alt="A cartoon of a stick figure climbing a ladder to reach a post-it note"></p>
<p>The key is encouraging people to choose the highest-priority task they can effectively tackle rather than assigning specific tasks to specific people. Someone might skip over the absolute top item because it requires domain knowledge they don’t have, but they can pick up the second or third item that lets them contribute meaningfully while learning something new.</p>
<p>This approach leverages the fact that your team members understand their own capabilities and growth goals better than you do. A senior engineer might choose to mentor a junior developer on a complex task. A frontend specialist might want to tackle a backend task to broaden their skills. These decisions create better outcomes in the long term than top-down task assignment while still maintaining focus on organizational priorities.</p>
<p>The autonomy comes from trusting people to make good decisions about how to contribute most effectively, while the structure comes from ensuring those contributions align with actual business needs.</p>
<h2 id="the-art-of-making-yourself-redundant">The Art of Making Yourself Redundant</h2>
<p>If your team frequently asks you what they should work on next, you’ve accidentally created a bottleneck—and it’s you. This is one of the most common scaling problems I see with engineering leaders who transition from individual contributor roles.</p>
<p><img src="add-resources.png" alt="A cartoon of a stick figure carrying books to a wall of post-it notes"></p>
<p>The goal is building a system where intelligent people can make good decisions without constant input from leadership. This requires making context painfully available—team goals, product strategy, architectural decisions, customer feedback, and anything else that influences prioritization should be accessible and current.</p>
<p>I’ve found that the difference between teams that scale smoothly and teams that hit velocity walls usually comes down to how well they’ve documented the reasoning behind decisions. When someone can understand not just what to build but why it matters and how it fits into the larger strategy, they can make smart trade-offs independently.</p>
<p>This redundancy becomes especially critical during high-pressure situations. When systems are down or deadlines are looming, you don’t want your team waiting for permission to take action. Teams that have practiced autonomous decision-making within clear constraints can respond quickly and effectively without requiring heroic coordination efforts.</p>
<h2 id="the-cultural-transformation">The Cultural Transformation</h2>
<p>What surprises most leaders is how much this simple change affects team culture. When priorities are clear and transparent, several things happen that go far beyond improved task management.</p>
<p>First, political conversations about priority disappear. There’s no point in lobbying for your favorite project when the criteria for prioritization are explicit and the current order is visible to everyone. Energy that was spent on organizational maneuvering gets redirected toward actual work.</p>
<p>Second, people start thinking about their contributions differently. Instead of optimizing for individual productivity, they begin considering how their work fits into team objectives. This naturally leads to better collaboration and knowledge sharing.</p>
<p>Third, the team develops a shared sense of progress and momentum. When everyone can see important work getting completed in priority order, it creates a satisfying rhythm that isolated individual achievements can’t match.</p>
<h2 id="implementation-reality">Implementation Reality</h2>
<p>The biggest challenge isn’t creating the list—it’s maintaining the discipline to use it consistently. Teams often start strong but gradually drift back to multiple priority tracks when pressure increases or when compelling new opportunities arise.</p>
<p>I’ve learned to treat priority discipline like any other technical practice that requires ongoing attention: schedule regular review sessions to reorder the list, hold explicit discussions about what we’re choosing not to do, and consistently communicate why keeping a single-priority focus helps maintain development velocity.</p>
<p>The payoff: teams that ship more valuable work with less stress and confusion. When everyone understands what matters most and feels empowered to contribute effectively, both productivity and job satisfaction improve dramatically.</p>
<p>Most importantly, single-priority focus creates sustainable high performance rather than the boom-and-bust cycles that come from constantly shifting between competing urgent demands. Teams learn to work steadily toward important goals rather than reacting to whatever feels most pressing in the moment.</p>
 ]]></content:encoded><enclosure url="https://victoria.dev/posts/do-one-thing-mastering-prioritization-for-high-performing-teams/add-resources_hu8e96a7ddde968deaecbc2e8b8444b222_233270_640x0_resize_box_3.png" length="44050" type="image/jpg"/></item><item><title>The Descent Is Harder Than the Climb</title><link>https://victoria.dev/posts/the-descent-is-harder-than-the-climb/</link><pubDate>Sun, 02 Aug 2020 06:35:45 -0400</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/the-descent-is-harder-than-the-climb/</guid><description>Why sustaining success is harder than achieving it. Engineering leadership lessons on preparing teams for the challenges that come after reaching goals.</description><content:encoded><![CDATA[ <p>In 2017, I climbed Mt. Fuji in sneakers. This was not a deliberate choice to increase the challenge—it was the result of excellent research and poor judgment about what that research actually meant.</p>
<p>Everything I&rsquo;d read suggested that Mt. Fuji was the &ldquo;cakewalk of mountain climbing.&rdquo; Physically, the hardest portions amounted to scrambling over some big boulders. Most of the climb was no more taxing than hiking or climbing stairs. Japanese folks in their eighties made the journey for spiritual reasons. There were huts along the way for rest, food, and water. Based on this research, I concluded that sneakers would be perfectly adequate.</p>
<p>The ascent was everything I&rsquo;d been promised. I experienced sights I&rsquo;d never imagined—cities glowing through breaks in clouds from above, walking through paths of grey nothingness where the trail disappeared into cloud cover. Each station marker brought genuine pride and accomplishment. Even the pre-dawn summit queue with 5,000 other climbers, standing in freezing darkness for hours, felt manageable. We reached the summit before sunrise, and it remains one of the most beautiful moments I&rsquo;ve experienced.</p>
<p>Then came the descent. That&rsquo;s where I learned that all the research in the world about reaching goals doesn&rsquo;t prepare you for what comes after you achieve them.</p>
<h2 id="when-success-becomes-the-set-up-for-failure">When Success Becomes the Setup for Failure</h2>
<p>The descent from Mt. Fuji is essentially a loosely-packed dirt and gravel road on a steep decline. With proper hiking boots and trekking poles, it&rsquo;s probably manageable. In flat-soled street shoes, I fell constantly, and fell hard—every three steps, for hours. I tried to take larger steps; it didn&rsquo;t help. I tried to take smaller steps; that didn&rsquo;t help, either. I tried cunningly to find a way to surf-slide my way down the mountainside and nearly ended up with a mouthful of dirt. As if literally rubbing salt into my wounds, without the gaiters I hadn&rsquo;t brought, sand found its way into my shoes. It was without a doubt the most stupefyingly discouraging experience of my life.</p>
<p>As I picked myself up repeatedly, covered in dirt with scratched elbows, seasoned hikers passed me with ease. Many of them could have been my grandparents, using proper equipment and technique to descend at a steady pace while I struggled and stopped to pour tiny rocks out of my sneakers. The contrast was humbling and instructive.</p>
<p>This experience taught me something crucial about leadership that I&rsquo;ve applied countless times since: the skills and preparation that get you to success are often different from the skills required to maintain or scale that success. The descent is frequently harder than the climb, and most people don&rsquo;t prepare for it adequately.</p>
<h2 id="the-post-achievement-challenge">The Post-Achievement Challenge</h2>
<p>In business and team leadership, I&rsquo;ve watched this pattern repeat consistently. The energy, skills, and resources required to achieve a goal are usually well-understood and planned for. But the challenges that come after success—maintaining market position, scaling team culture, or managing the operational complexity of growth—often catch leaders unprepared.</p>
<p>I&rsquo;ve seen teams that executed brilliant product launches struggle with customer support and maintenance. Startups that successfully raised funding stumble when it comes to executing on their promises to investors. Engineering teams that built innovative solutions fail to create sustainable systems for maintaining and scaling those solutions.</p>
<p>The problem isn&rsquo;t lack of capability—it&rsquo;s that the descent requires different preparation and different skills than the ascent. What gets you to the summit (innovation, speed, breakthrough thinking) often isn&rsquo;t what gets you safely back to basecamp (consistency, processes, systematic execution).</p>
<h2 id="learning-from-those-whove-made-the-journey">Learning from Those Who&rsquo;ve Made the Journey</h2>
<p>Watching those experienced hikers pass me on Mt. Fuji was initially frustrating, but it became one of the most valuable parts of the experience. They had proper equipment, understood the terrain, and moved with confidence that came from experience. Most importantly, they had prepared specifically for the descent, not just the climb.</p>
<p>In leadership roles, I&rsquo;ve learned to actively seek out people who&rsquo;ve successfully navigated the &ldquo;descent&rdquo; phase of challenges I&rsquo;m facing. Entrepreneurs who&rsquo;ve managed hypergrowth. Product managers who&rsquo;ve maintained market leadership over multiple years. Engineering leaders who&rsquo;ve scaled teams from ten to fifty people, or CEOs who’ve scaled companies from fifty to five hundred.</p>
<p>These conversations can reveal patterns you may not have discovered on your own. Successful scaling requires different organizational structures than startup growth. Maintaining team culture during rapid hiring requires intentional systems that don&rsquo;t emerge naturally. Sustaining innovation while managing operational complexity demands new kinds of leadership skills.</p>
<p>People who&rsquo;ve successfully managed the descent often have hard-won wisdom about preparation and technique that isn&rsquo;t captured in most &ldquo;how to reach the summit&rdquo; advice.</p>
<h2 id="building-skills-before-you-need-them">Building Skills Before You Need Them</h2>
<p>The most effective leaders I know prepare for post-success challenges while they&rsquo;re still climbing toward their initial goals. They think systematically about what will be required to maintain and scale whatever they&rsquo;re building, not just achieve it.</p>
<p>This means building operational capabilities alongside product capabilities. Developing team management skills in individual contributors. Creating sustainable processes while you&rsquo;re still in startup mode. Planning for the maintenance and evolution of systems as part of their initial implementation.</p>
<p>It also means recognizing that the mindset and skills that drive breakthrough achievements—risk-taking, speed, creative problem-solving—need to be balanced with different capabilities like consistency, systematic thinking, and process optimization.</p>
<p>I&rsquo;ve learned to explicitly ask: &ldquo;What will success look like, and what challenges will that create?&rdquo; This question reveals preparation gaps that aren&rsquo;t obvious when you&rsquo;re focused entirely on reaching your goals.</p>
<h2 id="when-you-find-yourself-unprepared">When You Find Yourself Unprepared</h2>
<p>Despite best intentions, you&rsquo;ll sometimes find yourself in descent mode without proper preparation—leading a team through unexpected growth, managing a product that succeeded beyond projections, or scaling systems that weren&rsquo;t designed for current loads. The Mt. Fuji experience taught me how to navigate these situations effectively.</p>
<p>First, acknowledge the reality of your situation without wasting energy on regret about preparation gaps. You can&rsquo;t change what you didn&rsquo;t know or plan for previously, but you can adapt your approach based on current conditions. Take the time to solidify new goals in writing, then evaluate whether your efforts are serving them effectively.</p>
<p>Second, focus on learning from people who are managing similar challenges successfully. This isn&rsquo;t the time for pride or trying to figure everything out independently. The hikers who passed me weren&rsquo;t showing off—they had practical knowledge that could help. Conversations you have with others who came before you can save you from a lot of stumbles.</p>
<p>Third, lift your gaze. While the ascent phase requires day-to-day tactical thinking, the descent phase requires a strategic longer-term outlook. Implementing systems and culture that support continued success will require patience, persistence, and often a completely different pace than what got you to the summit. Expecting it to be as expedient as the climb leads to frustration and poor decision-making.</p>
<h2 id="finding-meaning-in-the-difficult-parts">Finding Meaning in the Difficult Parts</h2>
<p>Eventually, I reached the bottom of Mt. Fuji, exhausted and humbled but intact. At a tiny basecamp shop, I ate the most delicious bowl of ramen and the tastiest mountain-shaped sponge cake I&rsquo;ll likely ever have.</p>
<p>Even when you&rsquo;re unprepared and struggling, there&rsquo;s value in the journey itself. The descent taught me lessons about preparation, humility, and persistence that I&rsquo;ve applied to all sorts of challenges for years since.</p>
<h2 id="preparing-for-your-next-descent">Preparing for Your Next Descent</h2>
<p>There is a Japanese proverb: “A wise man will climb Mt Fuji once; a fool will climb Mt Fuji twice.” I suspect this wisdom is based entirely on the difficulty of the descent. But in leadership, you don&rsquo;t get to choose how many times you&rsquo;ll face descent challenges—they&rsquo;re inevitable parts of any significant journey.</p>
<p>The key is recognizing that achieving your goals is often just the beginning of a different kind of challenge. Success creates new problems that require different skills, different preparation, and different mindsets than what got you there initially.</p>
<p>Whether you&rsquo;re building teams, scaling products, or managing organizational growth, prepare for the descent while you&rsquo;re planning the climb. Study what happens after success. Learn from people who&rsquo;ve navigated similar transitions. Build operational capabilities alongside innovative ones.</p>
<p>Most importantly, remember that the descent is still part of the journey, not a failure of the ascent. The challenges that come with success are signs that you&rsquo;ve accomplished something meaningful. Navigate them with patience, preparation, and the understanding that getting back to basecamp safely can be an even more important achievement than reaching the summit.</p>
 ]]></content:encoded></item><item><title>SQLite in Production with WAL</title><link>https://victoria.dev/posts/sqlite-in-production-with-wal/</link><pubDate>Thu, 05 Mar 2020 10:14:43 -0500</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/sqlite-in-production-with-wal/</guid><description>SQLite with WAL mode in production: When simple beats complex. Engineering leaders' guide to choosing databases based on actual needs, not industry hype.</description><content:encoded>
&lt;img src="https://victoria.dev/posts/sqlite-in-production-with-wal/cover_hucd0e6312363c308744eec049709d6b2b_994778_640x0_resize_box_3.png" width="640" height="256"/><![CDATA[ <p><em>Update: read the <a href="https://news.ycombinator.com/item?id=27237919">HackerNews discussion</a>.</em></p>
<p>So you need a database. It&rsquo;s going to handle a few hundred users and mostly read operations. Time to set up a PostgreSQL cluster, debate connection pooling strategies, configure replication, and design backup procedures&hellip; right?</p>
<p>When I say, &ldquo;What about SQLite?&rdquo;, the response is usually some variation of &ldquo;That&rsquo;s not a real database.&rdquo;</p>
<p>This reaction reveals something important about how engineering teams make technology decisions. We often choose tools based on what sounds impressive rather than what solves our actual problems. SQLite represents an underappreciated truth in engineering leadership: sometimes the boring, simple solution is exactly what your team needs.</p>
<p><a href="https://sqlite.org/index.html">SQLite</a> (&ldquo;see-quell-lite&rdquo;) is a lightweight SQL database engine that&rsquo;s self-contained in a single file. It&rsquo;s a library, database, and data, all in one package. For certain applications, SQLite is a solid choice for a production database. It&rsquo;s lightweight, ultra-portable, and has no external dependencies.</p>
<h2 id="matching-tools-to-actual-requirements">Matching Tools to Actual Requirements</h2>
<p>As an engineering leader, one of your most important responsibilities is helping your team choose appropriate technology rather than impressive technology. SQLite excels in specific scenarios that are more common than most teams realize.</p>
<p>SQLite is best suited for production use in applications that:</p>
<ul>
<li>Desire fast and simple setup</li>
<li>Require high reliability in a small package</li>
<li>Have, and want to retain, a small footprint</li>
<li>Are read-heavy but not write-heavy</li>
<li>Don&rsquo;t need multiple user accounts or features like multiversion concurrency snapshots</li>
</ul>
<p>These criteria describe a significant percentage of web applications, internal tools, and even customer-facing products. But teams often dismiss SQLite because it doesn&rsquo;t match their mental model of what a &ldquo;serious&rdquo; database looks like.</p>
<p>Recognizing when your team&rsquo;s technology choices are driven by resume-driven development rather than problem-solving can save you oodles of time and budget wiggle room. Complex solutions carry hidden costs in deployment complexity, operational overhead, and cognitive load that simple solutions avoid entirely.</p>
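<p>To make &ldquo;fast and simple setup&rdquo; concrete, here&rsquo;s a minimal sketch using Python&rsquo;s built-in <code>sqlite3</code> module. There&rsquo;s no server to install or configure; the database filename is just a placeholder:</p>

```python
import sqlite3

# Opening a connection creates the database file if it doesn't exist.
conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)
conn.close()
```

<p>That&rsquo;s the entire deployment story: one file you can copy, back up, or delete.</p>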
<h2 id="understanding-the-technical-trade-offs">Understanding the Technical Trade-offs</h2>
<p>To guide these decisions effectively, it helps to understand the technical details well enough to evaluate trade-offs intelligently. In the case of SQLite, you can examine its performance characteristics to make this evaluation concrete:</p>
<h3 id="database-transaction-modes">Database Transaction Modes</h3>
<p>The POSIX system call <code>fsync()</code> commits buffered data (data held in the operating system cache) referred to by a specified file descriptor to permanent storage, or disk. This is relevant to understanding the difference between SQLite&rsquo;s two modes because <code>fsync()</code> blocks until the device reports that the transfer is complete.</p>
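<p>You can see this two-step journey from user space to disk directly in Python. This is a minimal illustration of the system calls involved, not SQLite&rsquo;s actual code:</p>

```python
import os

with open("journal.tmp", "wb") as f:
    f.write(b"original page contents")  # data sits in a user-space buffer
    f.flush()               # user-space buffer -> operating system cache
    os.fsync(f.fileno())    # OS cache -> disk; blocks until the device
                            # reports the transfer is complete
os.remove("journal.tmp")
```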
<p>SQLite uses <a href="https://sqlite.org/atomiccommit.html">atomic commits</a> to batch database changes into single transactions, enabling apparent simultaneous writing of multiple operations. This is accomplished through one of two modes: rollback journal or write-ahead log (WAL).</p>
<h3 id="rollback-journal-mode">Rollback Journal Mode</h3>
<p>A <a href="https://www.sqlite.org/lockingv3.html#rollback">rollback journal</a> is essentially a back-up file created by SQLite before write changes occur on a database file. It has the advantage of providing high reliability by helping SQLite restore the database to its original state in case a write operation is compromised during the disk-writing process.</p>
<p>Assuming a cold cache, SQLite first needs to read the relevant pages from a database file before it can write to it. Information is read out into the operating system cache, then transferred into user space. SQLite obtains a reserved lock on the database file, preventing other processes from writing to the database. At this point, other processes may still read from the database.</p>
<p>SQLite creates a separate file, the rollback journal, with the original content of the pages that will be changed. Initially existing in the cache, the rollback journal is written to persistent disk storage with <code>fsync()</code> to enable SQLite to restore the database should its next operations be compromised.</p>
<p>SQLite then obtains an exclusive lock preventing other processes from reading or writing, and writes the page changes to the database file in cache. Since writing to disk is slower than interaction with the cache, writing to disk doesn&rsquo;t occur immediately. The rollback journal continues to exist until changes are safely written to disk with a second <code>fsync()</code>. From a user-space process point of view, the change to the disk (the COMMIT, or end of the transaction) happens instantaneously once the rollback journal is deleted - hence, atomic commits. However, the two <code>fsync()</code> operations required to complete the COMMIT make this option, from a transactional standpoint, slower than SQLite&rsquo;s lesser-known WAL mode.</p>
<h3 id="write-ahead-logging-wal">Write-ahead logging (WAL)</h3>
<p>While the rollback journal method uses a separate file to preserve the original database state, the <a href="https://www.sqlite.org/wal.html">WAL method</a> uses a separate WAL file to record the changes instead. Rather than a COMMIT depending on writing changes to disk, a COMMIT in WAL mode occurs when a record of one or more commits is appended to the WAL. This has the advantage that a COMMIT doesn&rsquo;t require blocking read or write operations on the database file, so more transactions can happen concurrently.</p>
<p>WAL mode introduces the concept of the checkpoint, which is when the WAL file is synced to persistent storage before all its transactions are transferred to the database file. You can optionally specify when this occurs, but SQLite provides reasonable defaults. The checkpoint is the WAL version of the atomic commit.</p>
<p>In WAL mode, write transactions are performed faster than in the traditional rollback journal mode. Each transaction involves writing the changes only once to the WAL file instead of twice - to the rollback journal, and then to disk - before the COMMIT signals that the transaction is over.</p>
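<p>Enabling WAL mode takes a single pragma. Here&rsquo;s a sketch with Python&rsquo;s built-in <code>sqlite3</code> module (the filename is a placeholder); the journal mode is stored in the database file itself, so it only needs to be set once:</p>

```python
import sqlite3

conn = sqlite3.connect("app-wal.db")

# Switch from the default rollback journal to write-ahead logging.
# The pragma returns the journal mode now in effect.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal

# Optional: control how many WAL pages accumulate before an automatic
# checkpoint transfers them into the database file (1000 is the default).
conn.execute("PRAGMA wal_autocheckpoint=1000")
conn.close()
```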
<p>For teams handling moderate write loads, WAL mode often provides the performance characteristics they actually need without the operational complexity of distributed databases.</p>
<h2 id="the-performance-reality">The Performance Reality</h2>
<p>Benchmarks tell a compelling story about SQLite&rsquo;s practical capabilities. On modest hardware—the smallest EC2 instance with no provisioned IOPS—SQLite with WAL mode handles 400 write transactions per second and thousands of reads. For many applications, this represents more capacity than they need.</p>
<p>These numbers matter because they provide concrete data for technology discussions. Instead of theoretical conversations about &ldquo;what if we need to scale,&rdquo; you can evaluate whether 400 writes per second actually meets your requirements. Often, it does—with significant room for growth.</p>
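<p>You can sanity-check those claims against your own workload in a few lines of Python. This is a rough sketch rather than a rigorous benchmark, and absolute numbers will vary with hardware and durability settings:</p>

```python
import os
import sqlite3
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "bench.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")

n = 200
start = time.perf_counter()
for _ in range(n):
    # Each commit is one write transaction.
    conn.execute("INSERT INTO t (payload) VALUES (?)", ("x" * 100,))
    conn.commit()
elapsed = time.perf_counter() - start

print(f"{n / elapsed:.0f} write transactions/sec")
conn.close()
```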
<p>More importantly, SQLite eliminates entire categories of operational complexity: connection pooling, database server maintenance, backup procedures, replication configuration, and deployment coordination. The operational overhead you don&rsquo;t have to manage often provides more value than the theoretical scalability you might need someday.</p>
<h2 id="making-strategic-technology-decisions">Making Strategic Technology Decisions</h2>
<p>Engineering teams often equate complexity with sophistication and assume that simple solutions won&rsquo;t scale or aren&rsquo;t &ldquo;enterprise-ready&rdquo; without considering the actual requirements of the enterprise. The SQLite decision exemplifies a broader principle in engineering leadership: optimizing for actual constraints rather than imaginary ones. This requires understanding both the technical capabilities of your options and the real requirements of your systems.</p>
<p>This means asking your teams to articulate specific performance requirements, operational constraints, and growth projections rather than making technology choices based on industry trends or resume building. It means evaluating the total cost of ownership including deployment complexity, operational overhead, and team cognitive load.</p>
<p>Most importantly, it means recognizing that the best technology choice is often the one that solves your current problems effectively while remaining simple enough to understand, maintain, and evolve as requirements change.</p>
<h2 id="building-a-culture-of-appropriate-technology">Building a Culture of Appropriate Technology</h2>
<p>Teams that consistently make good technology choices develop systematic approaches to evaluation rather than relying on instinct or industry hype. They start with requirements, evaluate options based on total cost of ownership, and choose solutions that match their actual needs rather than their aspirational ones.</p>
<p>This culture emerges when leaders model technical decision-making that prioritizes problem-solving over impressiveness. When you advocate for SQLite over PostgreSQL because it better matches your workload, you&rsquo;re teaching your team to think critically about technology trade-offs.</p>
<p>The long-term impact is teams that build sustainable systems they can actually maintain and evolve. Simple solutions that solve real problems create more value than complex solutions that solve theoretical ones.</p>
<p>For medium-sized, read-heavy applications, SQLite with WAL mode represents exactly this kind of appropriate technology choice. It provides perfectly adequate capability in a perfectly compact package—which is often exactly what your application needs.</p>
 ]]></content:encoded><enclosure url="https://victoria.dev/posts/sqlite-in-production-with-wal/cover_hucd0e6312363c308744eec049709d6b2b_994778_640x0_resize_box_3.png" length="77170" type="image/jpg"/></item><item><title>17 Minutes to 16 Seconds: a 60x Performance Improvement from… Python?!</title><link>https://victoria.dev/posts/17-minutes-to-16-seconds-a-60x-performance-improvement-from-python/</link><pubDate>Fri, 28 Feb 2020 09:31:02 -0500</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/17-minutes-to-16-seconds-a-60x-performance-improvement-from-python/</guid><description>60x Python performance gains through smart multithreading. Engineering leaders' guide to identifying real bottlenecks and optimizing what actually matters.</description><content:encoded>
&lt;img src="https://victoria.dev/posts/17-minutes-to-16-seconds-a-60x-performance-improvement-from-python/cover_hufc3f293689518f785bfb54ed57068ead_360041_640x0_resize_box_3.png" width="640" height="256"/><![CDATA[ <p>Engineering teams will spend weeks optimizing database queries that run in milliseconds while ignoring network requests that take hundreds of milliseconds. They’ll debate the performance implications of different sorting algorithms while their application spends seventeen minutes on network latency to process a few hundred requests. This misallocation of optimization effort reveals a common leadership challenge: helping teams identify and focus on the bottlenecks that actually matter.</p>
<p>When I developed <a href="https://github.com/victoriadrake/hydra-link-checker">Hydra</a>, a multithreaded link checker written in Python, the performance requirements were clear. It needed to run as part of CI/CD processes, which meant speed was essential for developer productivity. Nobody wants to wait seventeen minutes to learn whether their build succeeded—that’s long enough to make coffee, check Twitter, question your career choices, and wonder if the process crashed.</p>
<p>The project became a case study in systematic performance optimization and the leadership decisions that guide technical implementation. Unlike many Python site crawlers that rely on external dependencies like BeautifulSoup, Hydra uses only standard libraries. This constraint required thinking carefully about how to achieve optimal performance within Python’s limitations.</p>
<h2 id="understanding-the-performance-landscape">Understanding the Performance Landscape</h2>
<p>As an engineering leader, one of your most important responsibilities is helping your team understand where performance problems actually occur versus where they assume they occur. Most developers have an intuitive sense that network operations are slower than CPU operations, but the actual magnitude of these differences is staggering.</p>
<p>Here are approximate timings for tasks performed on a typical PC:</p>
<table>
<thead>
<tr>
<th></th>
<th>Task</th>
<th>Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>CPU</td>
<td>execute typical instruction</td>
<td>1/1,000,000,000 sec = 1 nanosec</td>
</tr>
<tr>
<td>CPU</td>
<td>fetch from L1 cache memory</td>
<td>0.5 nanosec</td>
</tr>
<tr>
<td>CPU</td>
<td>branch misprediction</td>
<td>5 nanosec</td>
</tr>
<tr>
<td>CPU</td>
<td>fetch from L2 cache memory</td>
<td>7 nanosec</td>
</tr>
<tr>
<td>RAM</td>
<td>Mutex lock/unlock</td>
<td>25 nanosec</td>
</tr>
<tr>
<td>RAM</td>
<td>fetch from main memory</td>
<td>100 nanosec</td>
</tr>
<tr>
<td>Network</td>
<td>send 2K bytes over 1Gbps network</td>
<td>20,000 nanosec</td>
</tr>
<tr>
<td>RAM</td>
<td>read 1MB sequentially from memory</td>
<td>250,000 nanosec</td>
</tr>
<tr>
<td>Disk</td>
<td>fetch from new disk location (seek)</td>
<td>8,000,000 nanosec   (8ms)</td>
</tr>
<tr>
<td>Disk</td>
<td>read 1MB sequentially from disk</td>
<td>20,000,000 nanosec  (20ms)</td>
</tr>
<tr>
<td>Network</td>
<td>send packet US to Europe and back</td>
<td>150,000,000 nanosec (150ms)</td>
</tr>
</tbody>
</table>
<p>Peter Norvig first published these numbers in <a href="http://norvig.com/21-days.html#answers">Teach Yourself Programming in Ten Years</a>. While hardware continues to evolve, the relative relationships remain humbling for anyone who’s ever spent time optimizing the wrong thing.</p>
<p>Notice that sending a simple packet over the Internet is over a million times slower than fetching from RAM. These aren’t small performance differences—they’re fundamental constraints that should guide every optimization decision you make.</p>
<p>For Hydra, parsing response data and assembling results happens on the CPU and is relatively fast. The overwhelming bottleneck—by over six orders of magnitude—is network latency. Any optimization effort that didn’t address network I/O would miss the point.</p>
<h2 id="working-within-pythons-constraints">Working Within Python’s Constraints</h2>
<p>Python presents an interesting challenge for performance-critical applications. The Global Interpreter Lock (GIL) prevents multiple threads from executing Python bytecodes simultaneously—each thread must wait for the GIL to be released by the currently executing thread. This eliminates race conditions but also prevents true parallel execution of CPU-bound tasks.</p>
<p>For many engineering teams, this limitation becomes a reason to dismiss Python entirely for performance work. But effective technical leadership involves understanding how to work within constraints rather than avoiding tools with limitations.</p>
<p>The key insight is that Python’s GIL limitation doesn’t apply uniformly. While CPU-bound tasks suffer from the GIL, I/O-bound tasks can benefit from concurrent execution because the GIL is released during I/O operations. For Hydra’s use case—fetching web pages over the network—multithreading in Python can provide significant performance improvements despite the GIL.</p>
<p>This distinction matters for strategic technical decisions. Instead of automatically reaching for Go or Rust when performance requirements emerge, understanding Python’s actual constraints can enable better technology choices based on specific workload characteristics.</p>
<h2 id="choosing-the-right-concurrency-model">Choosing the Right Concurrency Model</h2>
<p>Python provides multiple approaches to parallel execution, each suited for different types of bottlenecks. Making the right choice requires understanding both technical trade-offs and your application’s specific performance characteristics.</p>
<h3 id="multiple-processes">Multiple Processes</h3>
<p>Python’s <a href="https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor"><code>ProcessPoolExecutor</code></a> uses worker subprocesses to bypass the GIL entirely. This approach maximizes parallelization for CPU-bound tasks by utilizing multiple processor cores effectively.</p>
<p>For compute-heavy operations—mathematical calculations, data processing, algorithm execution—multiple processes provide genuine parallel execution. However, this carries overhead costs in memory usage and inter-process communication that may not be justified for I/O-bound workloads.</p>
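<p>As a sketch of the CPU-bound case (illustrative only; Hydra&rsquo;s workload is I/O-bound and takes the other path):</p>

```python
from concurrent.futures import ProcessPoolExecutor

def sum_of_squares(n):
    # Pure-Python arithmetic holds the GIL, so threads wouldn't help here;
    # separate processes each get their own interpreter and run in parallel
    return sum(i * i for i in range(n))

def run_parallel(inputs, workers=4):
    # Each input is pickled to a worker subprocess, computed, and the
    # result pickled back
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(sum_of_squares, inputs))

if __name__ == "__main__":
    print(run_parallel([100_000] * 4))
```

<p>The pickling round trip is the overhead cost mentioned above: it only pays off when each task does substantial computation.</p>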
<h3 id="multiple-threads">Multiple Threads</h3>
<p>Python’s <a href="https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor"><code>ThreadPoolExecutor</code></a> uses a pool of threads that can execute I/O operations concurrently. While threads can’t execute Python code in parallel due to the GIL, they can perform I/O operations concurrently because the GIL is released during system calls.</p>
<p>For I/O-bound applications—web scraping, API calls, file operations—threading provides excellent performance improvements with lower overhead than multiprocessing.</p>
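<p>You can see the effect in a self-contained sketch that uses <code>time.sleep</code> to stand in for network latency; like a blocking socket read, <code>sleep</code> releases the GIL:</p>

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_fetch(url):
    # Simulated network round trip; the GIL is released while sleeping
    time.sleep(0.1)
    return url, 200

urls = [f"https://example.com/{i}" for i in range(10)]

start = time.perf_counter()
serial = [fake_fetch(u) for u in urls]           # ~1.0s: ten 0.1s waits in a row
serial_time = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    threaded = list(pool.map(fake_fetch, urls))  # ~0.1s: all ten waits overlap
threaded_time = time.perf_counter() - start

print(f"serial {serial_time:.2f}s vs threaded {threaded_time:.2f}s")
```

<p>The threads never execute Python bytecode in parallel, yet the total wall-clock time drops by roughly the worker count, because the waiting overlaps.</p>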
<h2 id="implementation-strategy">Implementation Strategy</h2>
<p>Here’s how Hydra uses <code>ThreadPoolExecutor</code> to achieve concurrent link checking:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-py" data-lang="py"><span style="display:flex;"><span><span style="color:#75715e"># Create the Checker class</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">class</span> <span style="color:#a6e22e">Checker</span>:
</span></span><span style="display:flex;"><span>    <span style="color:#75715e"># Queue of links to be checked</span>
</span></span><span style="display:flex;"><span>    TO_PROCESS <span style="color:#f92672">=</span> Queue()
</span></span><span style="display:flex;"><span>    <span style="color:#75715e"># Maximum workers to run</span>
</span></span><span style="display:flex;"><span>    THREADS <span style="color:#f92672">=</span> <span style="color:#ae81ff">100</span>
</span></span><span style="display:flex;"><span>    <span style="color:#75715e"># Maximum seconds to wait for HTTP response</span>
</span></span><span style="display:flex;"><span>    TIMEOUT <span style="color:#f92672">=</span> <span style="color:#ae81ff">60</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">def</span> __init__(self, url):
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">...</span>
</span></span><span style="display:flex;"><span>        <span style="color:#75715e"># Create the thread pool</span>
</span></span><span style="display:flex;"><span>        self<span style="color:#f92672">.</span>pool <span style="color:#f92672">=</span> futures<span style="color:#f92672">.</span>ThreadPoolExecutor(max_workers<span style="color:#f92672">=</span>self<span style="color:#f92672">.</span>THREADS)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">run</span>(self):
</span></span><span style="display:flex;"><span>    <span style="color:#75715e"># Run until the TO_PROCESS queue is empty</span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">while</span> <span style="color:#66d9ef">True</span>:
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">try</span>:
</span></span><span style="display:flex;"><span>            target_url <span style="color:#f92672">=</span> self<span style="color:#f92672">.</span>TO_PROCESS<span style="color:#f92672">.</span>get(block<span style="color:#f92672">=</span><span style="color:#66d9ef">True</span>, timeout<span style="color:#f92672">=</span><span style="color:#ae81ff">2</span>)
</span></span><span style="display:flex;"><span>            <span style="color:#75715e"># If we haven&#39;t already checked this link</span>
</span></span><span style="display:flex;"><span>            <span style="color:#66d9ef">if</span> target_url[<span style="color:#e6db74">&#34;url&#34;</span>] <span style="color:#f92672">not</span> <span style="color:#f92672">in</span> self<span style="color:#f92672">.</span>visited:
</span></span><span style="display:flex;"><span>                <span style="color:#75715e"># Mark it as visited</span>
</span></span><span style="display:flex;"><span>                self<span style="color:#f92672">.</span>visited<span style="color:#f92672">.</span>add(target_url[<span style="color:#e6db74">&#34;url&#34;</span>])
</span></span><span style="display:flex;"><span>                <span style="color:#75715e"># Submit the link to the pool</span>
</span></span><span style="display:flex;"><span>                job <span style="color:#f92672">=</span> self<span style="color:#f92672">.</span>pool<span style="color:#f92672">.</span>submit(self<span style="color:#f92672">.</span>load_url, target_url, self<span style="color:#f92672">.</span>TIMEOUT)
</span></span><span style="display:flex;"><span>                job<span style="color:#f92672">.</span>add_done_callback(self<span style="color:#f92672">.</span>handle_future)
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">except</span> Empty:
</span></span><span style="display:flex;"><span>            <span style="color:#66d9ef">return</span>
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">except</span> <span style="color:#a6e22e">Exception</span> <span style="color:#66d9ef">as</span> e:
</span></span><span style="display:flex;"><span>            print(e)
</span></span></code></pre></div><p>The implementation reflects several engineering leadership principles. The thread pool size (100 workers) was determined through profiling and testing rather than guesswork. The timeout mechanism prevents slow requests from blocking overall progress. The callback pattern enables efficient result processing without blocking the main execution thread.</p>
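<p>The callback itself isn&rsquo;t shown above. Here&rsquo;s a hypothetical sketch of the pattern with stubbed-out names (<code>load_url</code> here is a stand-in, not Hydra&rsquo;s real fetch logic):</p>

```python
from concurrent.futures import ThreadPoolExecutor

def load_url(target, timeout):
    # Stand-in for the real HTTP fetch
    return {"url": target, "status": 200}

results = []

def handle_future(future):
    # Invoked as soon as its job completes, so result processing happens
    # without the main thread blocking on any single request
    if future.exception() is None:
        results.append(future.result())

pool = ThreadPoolExecutor(max_workers=4)
for url in ["https://example.com/a", "https://example.com/b"]:
    job = pool.submit(load_url, url, 60)
    job.add_done_callback(handle_future)
pool.shutdown(wait=True)  # wait for outstanding jobs and their callbacks
print(len(results))  # 2
```

<p>One caveat: done-callbacks run in worker threads, so anything they touch must be safe for concurrent access.</p>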
<h2 id="measuring-real-impact">Measuring Real Impact</h2>
<p>Performance optimization discussions often remain theoretical without concrete measurements. For Hydra, the improvement was dramatic. Here’s a comparison between the run times for checking my website with a prototype single-thread program and using Hydra:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-text" data-lang="text"><span style="display:flex;"><span>time python3 slow-link-check.py https://victoria.dev
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>real    17m34.084s
</span></span><span style="display:flex;"><span>user    11m40.761s
</span></span><span style="display:flex;"><span>sys     0m5.436s
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>time python3 hydra.py https://victoria.dev
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>real    0m15.729s
</span></span><span style="display:flex;"><span>user    0m11.071s
</span></span><span style="display:flex;"><span>sys     0m2.526s
</span></span></code></pre></div><p>The single-threaded implementation took over seventeen minutes. The multithreaded version completed in under sixteen seconds. That’s a performance improvement of more than 60x.</p>
<p>These aren’t marginal gains from micro-optimizations. They represent fundamental improvements in application efficiency that users immediately notice. While specific timings vary based on site size and network conditions, the order-of-magnitude improvement demonstrates the value of addressing actual bottlenecks systematically.</p>
<h2 id="leadership-lessons-in-performance-optimization">Leadership Lessons in Performance Optimization</h2>
<p>The Hydra project illustrates several principles that engineering leaders can apply across different technologies and applications.</p>
<p><strong>Focus on actual bottlenecks, not theoretical ones.</strong> Teams often optimize the wrong things because they focus on code that feels slow to write rather than code that’s actually slow to execute. Teaching teams to measure and identify real performance constraints prevents wasted optimization effort.</p>
<p><strong>Understand your tools’ limitations and strengths.</strong> Python’s GIL is a constraint, but it doesn’t preclude high-performance applications in the right contexts. Effective technical leadership involves understanding how to work within technological constraints rather than avoiding tools with limitations.</p>
<p><strong>Make optimization decisions based on requirements.</strong> Hydra needed to run quickly in CI/CD environments, which justified the development effort for custom multithreading. But this level of effort isn’t required in every Python application. Understanding your specific requirements helps allocate development efforts appropriately.</p>
<p><strong>Measure improvement, don’t assume it.</strong> Performance optimization can introduce complexity and maintenance overhead. Concrete measurements ensure that optimization efforts provide sufficient value to justify their costs.</p>
<h2 id="building-performance-conscious-teams">Building Performance-Conscious Teams</h2>
<p>The most effective engineering teams develop systematic approaches to performance rather than relying on intuition or premature optimization. This requires creating culture and processes that encourage measurement, analysis, and strategic optimization decisions.</p>
<p>This means teaching teams to profile applications before optimizing them, helping them understand the performance characteristics of their technology choices, and ensuring that optimization efforts align with actual user requirements rather than theoretical concerns.</p>
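<p>Python ships the tooling for this in the standard library; a minimal profiling run, here against a deliberately slow quadratic string-builder, looks like:</p>

```python
import cProfile
import io
import pstats

def build_report(n):
    # Quadratic string concatenation: the kind of hotspot a profile surfaces
    s = ""
    for i in range(n):
        s += str(i)
    return s

profiler = cProfile.Profile()
profiler.enable()
build_report(10_000)
profiler.disable()

# Print the five most expensive calls by cumulative time
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

<p>Reading the top of that report before touching any code is the cheapest way to confirm you&rsquo;re optimizing the constraint that actually matters.</p>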
<p>Most importantly, it means recognizing that performance optimization is a technical leadership skill that involves strategic thinking about trade-offs, constraints, and business requirements—not just implementation knowledge.</p>
<h2 id="the-real-lesson">The Real Lesson</h2>
<p>Hydra’s performance gain from 17 minutes to 16 seconds teaches a lesson that applies far beyond Python: measure first, optimize second, and always focus on the constraint that’s actually limiting your system. Whether you’re debugging performance bottlenecks or organizational inefficiencies, the biggest wins come from addressing the right problem rather than optimizing the wrong one exceptionally well.</p>
<p>The next time your team debates whether to rewrite everything in Go for performance, remember Hydra&rsquo;s 60x improvement using standard Python libraries. Sometimes the most effective optimization is the one you can implement this week rather than the solution you&rsquo;ll build next quarter… or the quarter after that.</p>
 ]]></content:encoded><enclosure url="https://victoria.dev/posts/17-minutes-to-16-seconds-a-60x-performance-improvement-from-python/cover_hufc3f293689518f785bfb54ed57068ead_360041_640x0_resize_box_3.png" length="47917" type="image/jpg"/></item><item><title>From 17 Minutes to 8 Seconds: Strategic Performance Optimization for Engineering Teams</title><link>https://victoria.dev/posts/from-17-minutes-to-8-seconds-strategic-performance-optimization-for-engineering-teams/</link><pubDate>Tue, 25 Feb 2020 12:50:29 -0500</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/from-17-minutes-to-8-seconds-strategic-performance-optimization-for-engineering-teams/</guid><description>How engineering leaders can drive organizational impact by identifying and breaking critical performance bottlenecks in CI/CD pipelines and development workflows.</description><content:encoded>
&lt;img src="https://victoria.dev/posts/from-17-minutes-to-8-seconds-strategic-performance-optimization-for-engineering-teams/cover_hucd0e6312363c308744eec049709d6b2b_396830_640x0_resize_box_3.png" width="640" height="256"/><![CDATA[ <p>Leading engineering teams means constantly balancing technical excellence with organizational needs. I found myself facing a perfect example of this challenge when helping out the Open Web Application Security Project (OWASP). When I joined the core team for OWASP&rsquo;s Web Security Testing Guide, I found a critical infrastructure problem that was silently undermining both our security mission and our ability to ship quality work efficiently.</p>
<p>OWASP is a big organization with an even bigger website to match. The site serves hundreds of thousands of visitors with cybersecurity resources that security professionals worldwide depend on. But beneath this successful exterior, we had a problem that most engineering leaders will recognize: broken processes that no one had time to fix, creating cascading inefficiencies across our entire development workflow.</p>
<p>OWASP.org lacked any centralized quality assurance processes and was riddled with broken links. Customers don’t like broken links; attackers really do. These weren&rsquo;t just user experience issues—they represented real security vulnerabilities that could enable attacks like broken link hijacking and subdomain takeovers. Here we were, an organization dedicated to web security, with our own infrastructure exposing the exact vulnerabilities we taught others to prevent.</p>
<h2 id="when-infrastructure-problems-become-leadership-problems">When Infrastructure Problems Become Leadership Problems</h2>
<p>The broken link problem at OWASP had all the hallmarks of technical debt that had become organizational debt: volunteers avoided updating content because they knew links might break, and quality suffered because manual checking was impractical. Our CI/CD pipeline had a glaring gap where automated link validation should have been.</p>
<p>The underlying issue was both technical and strategic. We needed a solution that could integrate into our development workflow, scale with our volunteer contributor model, and actually get adopted by teams who were already stretched thin. This meant thinking beyond just building a tool; I needed to design a solution that addressed the human and process challenges alongside the technical ones.</p>
<h2 id="strategic-requirements-beyond-just-make-it-work">Strategic Requirements Beyond Just &ldquo;Make It Work&rdquo;</h2>
<p>When I proposed building an automated link checking solution, the requirements went far beyond technical functionality. As engineering leaders, we know that tools succeed or fail based on adoption, maintainability, and organizational fit. Our solution needed to:</p>
<ul>
<li>Integrate seamlessly into existing CI/CD workflows without disrupting volunteer contributors</li>
<li>Provide actionable reports that non-technical content maintainers could understand and act on</li>
<li>Run efficiently enough to avoid becoming a bottleneck in our deployment process</li>
<li>Scale with OWASP&rsquo;s distributed, volunteer-driven development model</li>
</ul>
<p>The technical challenge was, essentially, to build a web crawler. The leadership challenge was ensuring it would actually solve our organizational problem rather than just creating another tool that sits unused.</p>
<p>This required making strategic decisions about language choice, architecture, and performance that balanced multiple constraints: team familiarity (Python was the common denominator), performance requirements (CI/CD integration demanded speed), and long-term maintainability (volunteers needed to be able to contribute to the codebase).</p>
<h2 id="understanding-the-real-cost-of-performance-bottlenecks">Understanding the Real Cost of Performance Bottlenecks</h2>
<p>As engineering leaders, we need to think about performance in terms of organizational impact, not just technical metrics. The latency numbers that every developer should know tell a story about where bottlenecks hide and how they compound:</p>
<table>
<thead>
<tr>
<th>Type</th>
<th>Task</th>
<th>Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>CPU</td>
<td>execute typical instruction</td>
<td>1/1,000,000,000 sec = 1 nanosec</td>
</tr>
<tr>
<td>CPU</td>
<td>fetch from L1 cache memory</td>
<td>0.5 nanosec</td>
</tr>
<tr>
<td>CPU</td>
<td>branch misprediction</td>
<td>5 nanosec</td>
</tr>
<tr>
<td>CPU</td>
<td>fetch from L2 cache memory</td>
<td>7 nanosec</td>
</tr>
<tr>
<td>RAM</td>
<td>Mutex lock/unlock</td>
<td>25 nanosec</td>
</tr>
<tr>
<td>RAM</td>
<td>fetch from main memory</td>
<td>100 nanosec</td>
</tr>
<tr>
<td>RAM</td>
<td>read 1MB sequentially from memory</td>
<td>250,000 nanosec</td>
</tr>
<tr>
<td>Disk</td>
<td>fetch from new disk location (seek)</td>
<td>8,000,000 nanosec   (8ms)</td>
</tr>
<tr>
<td>Disk</td>
<td>read 1MB sequentially from disk</td>
<td>20,000,000 nanosec  (20ms)</td>
</tr>
<tr>
<td>Network</td>
<td>send packet US to Europe and back</td>
<td>150,000,000 nanosec (150ms)</td>
</tr>
</tbody>
</table>
<p>Peter Norvig first published these numbers some years ago in <a href="http://norvig.com/21-days.html#answers">Teach Yourself Programming in Ten Years</a>. While technology changes over the decades, the order-of-magnitude differences between these numbers remain as devastatingly accurate as ever.</p>
<p>These numbers reveal something critical for engineering leaders to know: network operations are over a million times slower than memory operations. In our link checker, every HTTP request was a network operation, meaning we were dealing with the slowest possible operation for a process that needed to run fast and efficiently in CI/CD.</p>
<p>A single-thread crawler workflow creates an inherent bottleneck:</p>
<ol>
<li>Fetch HTML from a page (network-bound operation)</li>
<li>Parse links from the HTML content</li>
<li>Validate each link by making HTTP requests (more network-bound operations)</li>
<li>Track visited links to avoid duplicate work</li>
<li>Repeat for every page found</li>
</ol>
<figure class="screenshot"><img src="/posts/from-17-minutes-to-8-seconds-strategic-performance-optimization-for-engineering-teams/execution_flow.png"
    alt="A flow chart showing program execution">
</figure>

<p>Mapping out the execution flow makes the issue easy to see: the process was fundamentally serial, with network latency dominating every step. For a site like OWASP.org with over 12,000 links, this meant potential runtime measured in hours, not minutes.</p>
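<p>That serial workflow can be sketched in a few lines of standard-library Python (a simplified illustration, not OWASP&rsquo;s production crawler; the injectable <code>fetch</code> parameter is just there to make the sketch testable, and a real crawler would also restrict itself to one domain):</p>

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collects href values from anchor tags (step 2 of the workflow)."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href" and v)

def default_fetch(url):
    # Step 1, and the bottleneck: one blocking network round trip per page
    return urlopen(url, timeout=10).read().decode("utf-8", "ignore")

def crawl(start_url, fetch=default_fetch):
    visited, queue = set(), [start_url]
    while queue:
        url = queue.pop()
        if url in visited:          # step 4: skip already-checked links
            continue
        visited.add(url)
        parser = LinkParser()
        parser.feed(fetch(url))     # steps 1-2: fetch, then parse
        queue.extend(urljoin(url, link) for link in parser.links)  # step 5
    return visited
```

<p>Every iteration of that loop waits on the network before the next one can start, which is exactly the serial constraint the concurrent design removes.</p>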
<p>Bottlenecks like this cascade through entire organizations, hurting developer productivity, deployment confidence, and the ability to ship quality software consistently. For OWASP, that meant our ability to deliver on our security mission. Checking these links serially would have guaranteed exactly that kind of bottleneck.</p>
<h2 id="how-bottlenecks-cascade-through-engineering-organizations">How Bottlenecks Cascade Through Engineering Organizations</h2>
<p>How long would it have taken to check all 12,000 links on OWASP.org with a single-thread web crawler? We can make a rough estimate:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-text" data-lang="text"><span style="display:flex;"><span>      150 milliseconds per network request
</span></span><span style="display:flex;"><span> x 12,000 links on OWASP.org
</span></span><span style="display:flex;"><span>---------
</span></span><span style="display:flex;"><span>1,800,000 milliseconds (30 minutes minimum)
</span></span></code></pre></div><p>A whole half hour, just for the network tasks. In the real world it would likely be much slower than that, since web pages are frequently much larger than one packet.</p>
<p>When your CI/CD pipeline includes a (very conservative minimum) 30-minute bottleneck, the impact extends far beyond technical metrics. Several things happen:</p>
<p>First, your feedback loops become painfully long. Contributors push changes and then wait more than half an hour to learn if they&rsquo;ve broken anything. This delays iteration, reduces deployment confidence, and ultimately makes your team more conservative about shipping improvements.</p>
<p>Second, to add insult to injury, the financial impact compounds quickly. In serverless environments like AWS Lambda, compute time directly translates to cost. A process that takes 30 minutes instead of seconds doesn&rsquo;t just waste time—it multiplies your infrastructure costs dramatically.</p>
<figure><img src="/posts/from-17-minutes-to-8-seconds-strategic-performance-optimization-for-engineering-teams/lambda-chart.png"
    alt="Chart showing Total Lambda compute cost by function execution"><figcaption>
      <p>Source: <a href="https://serverless.com/blog/understanding-and-controlling-aws-lambda-costs/">Understanding and Controlling AWS Lambda Costs</a></p>
    </figcaption>
</figure>

<p>But the hidden cost is team productivity. When your deployment pipeline has unpredictable bottlenecks, teams start working around them. They try to batch changes into huge PRs instead of making small incremental (and easier to merge) improvements. They skip running full test suites locally. They become hesitant to refactor or make structural improvements that might require multiple deployment cycles to validate.</p>
<p>Identifying and resolving bottlenecks can make the difference between teams that stay stuck fixing bugs and teams that ship new features fast.</p>
<h2 id="making-strategic-technology-decisions-under-constraints">Making Strategic Technology Decisions Under Constraints</h2>
<p>This is where engineering leadership gets interesting: balancing competing constraints while making decisions that your team can actually execute on. I had to choose between Python (a comfortable language choice for everyone in the OWASP group) and Go (which offered better concurrency primitives and performance characteristics).</p>
<p>The decision matrix looked like this:</p>
<ul>
<li><strong>Team familiarity</strong>: Python had broad adoption across OWASP contributors</li>
<li><strong>Performance requirements</strong>: Go&rsquo;s goroutines made concurrent programming more straightforward</li>
<li><strong>Maintainability</strong>: We needed something contributors could debug and extend</li>
<li><strong>Long-term scalability</strong>: The solution needed to handle growing content without constant optimization</li>
</ul>
<p>I chose to prototype the link checker in both languages. I built a multithreaded Python version that I dubbed <a href="https://github.com/victoriadrake/hydra-link-checker">Hydra</a>, and a Go version that took advantage of goroutines. This gave us concrete data to inform the decision rather than relying on assumptions. This approach—building multiple solutions to validate architectural choices—is something I&rsquo;ve found invaluable for critical infrastructure decisions.</p>
<h2 id="designing-solutions-that-scale-with-your-team">Designing Solutions That Scale With Your Team</h2>
<p>The good news is that once you identify a bottleneck, you can resolve it. Whether it&rsquo;s scaling work efficiently across your team, code reviews, incident response, or in our case, link validation, the principle is the same: address the slowest operation.</p>
<p>Think of our single-thread web crawler as if it were one person handling all the work sequentially. The work gets done, but one person doesn&rsquo;t scale well to thousands of requests. Working in serial, each request has to wait for the previous one to complete, creating an artificial constraint where we&rsquo;re limited by the slowest individual operation.</p>
<p>Thankfully, link validation is an embarrassingly parallel problem. Each link can be checked independently, which means we could distribute the work across many concurrent workers, like having several people split up the tasks to finish faster. For our I/O-bound workload, the right tool for this is multithreading.</p>
<p>By designing for concurrency from the start and building a multithreaded link checker, we&rsquo;d have a solution that could scale with different deployment environments, handle varying load patterns, and remain responsive even as OWASP&rsquo;s content grew.</p>
<p>To illustrate, here are some snippets from the Go implementation. They incorporate some architectural insights that are relevant for any engineering leader designing concurrent systems.</p>
<h3 id="1-safe-concurrent-access">1. Safe Concurrent Access</h3>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-go" data-lang="go"><span style="display:flex;"><span><span style="color:#66d9ef">type</span> <span style="color:#a6e22e">Checker</span> <span style="color:#66d9ef">struct</span> {
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">startDomain</span>             <span style="color:#f92672">**</span><span style="color:#66d9ef">string</span><span style="color:#f92672">**</span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">brokenLinks</span>             []<span style="color:#a6e22e">Result</span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">visitedLinks</span>            <span style="color:#66d9ef">map</span>[<span style="color:#66d9ef">string</span>]<span style="color:#66d9ef">bool</span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">workerCount</span>, <span style="color:#a6e22e">maxWorkers</span> <span style="color:#66d9ef">int</span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">sync</span>.<span style="color:#a6e22e">Mutex</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><p>The <code>sync.Mutex</code> ensures our shared state remains consistent across goroutines, while the <code>visitedLinks</code> map uses O(1) lookup time to avoid creating new bottlenecks as our dataset grows.</p>
<blockquote>
<p>When optimizing one constraint like network latency, make sure you&rsquo;re not inadvertently creating new constraints elsewhere—like O(n) lookup times that degrade performance as your data grows.</p>
</blockquote>
<h3 id="2-throttling">2. Throttling</h3>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-go" data-lang="go"><span style="display:flex;"><span><span style="color:#66d9ef">for</span> <span style="color:#a6e22e">i</span> <span style="color:#f92672">:=</span> <span style="color:#66d9ef">range</span> <span style="color:#a6e22e">toProcess</span> {
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">wg</span>.<span style="color:#a6e22e">Add</span>(<span style="color:#ae81ff">1</span>)
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">checker</span>.<span style="color:#a6e22e">addWorker</span>()
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">go</span> <span style="color:#a6e22e">worker</span>(<span style="color:#a6e22e">i</span>, <span style="color:#f92672">&amp;</span><span style="color:#a6e22e">checker</span>, <span style="color:#f92672">&amp;</span><span style="color:#a6e22e">wg</span>, <span style="color:#a6e22e">toProcess</span>)
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">if</span> <span style="color:#a6e22e">checker</span>.<span style="color:#a6e22e">workerCount</span> &gt; <span style="color:#a6e22e">checker</span>.<span style="color:#a6e22e">maxWorkers</span> {
</span></span><span style="display:flex;"><span>        <span style="color:#a6e22e">time</span>.<span style="color:#a6e22e">Sleep</span>(<span style="color:#ae81ff">1</span> <span style="color:#f92672">*</span> <span style="color:#a6e22e">time</span>.<span style="color:#a6e22e">Second</span>) <span style="color:#75715e">// throttle down
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>    }
</span></span><span style="display:flex;"><span>}
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">wg</span>.<span style="color:#a6e22e">Wait</span>()
</span></span></code></pre></div><p>Even when you can parallelize work, you need to respect system boundaries. Too many concurrent HTTP requests could overwhelm target servers or trigger rate limiting, so we built in backpressure to ensure our optimization doesn&rsquo;t create problems for others. This is an effective way to balance performance with being a good network citizen.</p>
<h2 id="measuring-impact-the-results-that-matter-for-engineering-teams">Measuring Impact: The Results That Matter for Engineering Teams</h2>
<p>To obtain some concrete data, I compared the numbers between three implementations: a prototype single-thread Python program, the multithreaded Hydra version, and an implementation written in Go. The performance data from our three implementations tells a story about strategic technology choices and their organizational impact. Here&rsquo;s a comparison run against my website with its few hundred links:</p>
<h3 id="single-threaded-python-prototype">Single-Threaded Python Prototype</h3>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-text" data-lang="text"><span style="display:flex;"><span>time python3 slow-link-check.py https://victoria.dev
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>real 17m34.084s
</span></span><span style="display:flex;"><span>user 11m40.761s
</span></span><span style="display:flex;"><span>sys     0m5.436s
</span></span></code></pre></div><p>Seventeen minutes for a site much smaller than OWASP.org meant our original approach would have been completely unusable in a CI/CD context.</p>
<h3 id="hydra-multithreaded-python-version">Hydra: Multithreaded Python Version</h3>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-text" data-lang="text"><span style="display:flex;"><span>time python3 hydra.py https://victoria.dev
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>real 1m13.358s
</span></span><span style="display:flex;"><span>user 0m13.161s
</span></span><span style="display:flex;"><span>sys     0m2.826s
</span></span></code></pre></div><p>The concurrency improvements brought us down to just over a minute—a roughly 14x improvement that made CI/CD integration viable.</p>
<h3 id="go-implementation">Go Implementation</h3>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-text" data-lang="text"><span style="display:flex;"><span>time ./go-link-check --url=https://victoria.dev
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>real 0m7.926s
</span></span><span style="display:flex;"><span>user 0m9.044s
</span></span><span style="display:flex;"><span>sys     0m0.932s
</span></span></code></pre></div><p>Eight seconds. This performance improvement fundamentally changed how teams could interact with the tool. With this level of efficiency, link checking could become part of every deployment without friction. Contributors wouldn&rsquo;t think twice about running it locally. Instead of being a barrier, link checking would be invisible infrastructure.</p>
<p>As fun as it is to simply enjoy the speedups, we can directly relate these results to everything we&rsquo;ve discussed so far. Consider taking a process that used to soak up seventeen minutes and turning it into an eight-second affair instead. Not only does that give developers a much shorter and more efficient feedback loop, it gives teams the ability to develop faster while costing less. To drive the point home: a process that runs in seventeen-and-a-half minutes instead of eight seconds also costs over a hundred and thirty times more to run.</p>
<p>These numbers represent more than technical metrics. They show how strategic performance optimization can transform a tool from something teams avoid to something they rely on.</p>
<h2 id="the-leadership-framework-turning-performance-wins-into-organizational-impact">The Leadership Framework: Turning Performance Wins Into Organizational Impact</h2>
<p>The 130x performance improvement we achieved demonstrates a leadership approach to identifying and breaking bottlenecks that affects entire engineering organizations.</p>
<p>When engineering leaders see a 17-minute process become an 8-second process, we should be asking: what other critical workflows are creating similar friction? Where else are teams working around inefficient processes instead of addressing them? How many small compounding delays are preventing our organization from shipping quality software consistently?</p>
<p>The OWASP link checker became a case study for our broader infrastructure strategy. We learned that volunteer contributors were more likely to maintain content quality when the feedback loop was immediate. We discovered that CI/CD performance directly influenced how teams approached incremental improvements versus risky big-batch changes. Most importantly, we proved that strategic performance optimization could transform organizational behavior.</p>
<p>Start with understanding the human and organizational impact, design solutions that respect team constraints, and measure success by adoption and workflow improvement. When you can turn a deployment blocker into invisible infrastructure, you&rsquo;re optimizing both code and organizational dynamics by removing friction that allows your entire team to focus on delivering value rather than fighting with tools.</p>
 ]]></content:encoded><enclosure url="https://victoria.dev/posts/from-17-minutes-to-8-seconds-strategic-performance-optimization-for-engineering-teams/cover_hucd0e6312363c308744eec049709d6b2b_396830_640x0_resize_box_3.png" length="21370" type="image/jpg"/></item><item><title>How Engineering Leaders Build Security Culture Through Architecture Decisions</title><link>https://victoria.dev/posts/how-engineering-leaders-build-security-culture-through-architecture-decisions/</link><pubDate>Mon, 30 Sep 2019 08:03:12 -0400</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/how-engineering-leaders-build-security-culture-through-architecture-decisions/</guid><description>How engineering leaders can transform security from an afterthought into a strategic advantage through architectural decisions that build security-conscious teams.</description><content:encoded>
&lt;img src="https://victoria.dev/posts/how-engineering-leaders-build-security-culture-through-architecture-decisions/cover_hu73e92637aebb1682b96cf0460f9cc452_2063427_640x0_resize_box_3.png" width="640" height="443"/><![CDATA[ <p>Leading engineering teams means constantly balancing several goals: speed to market, feature development, technical debt, and security. When I&rsquo;ve seen teams struggle with security, it&rsquo;s rarely because they lack technical knowledge. The real challenge is creating an organizational culture where security decisions are prioritized, even under pressure.</p>
<p>The &ldquo;we can do security later&rdquo; mindset creates what I call security debt—technical decisions that make it exponentially harder to secure applications as they scale. Unlike other forms of technical debt, security debt compounds in immediately dangerous ways. A rushed architectural decision made to meet a deadline can become a persistent vulnerability that affects every feature built on top of it.</p>
<p>As engineering leaders, we have a unique opportunity to shape how our teams think about security. The architectural decisions we make and the frameworks we establish don&rsquo;t just affect our current codebase—they define the security culture that will carry our teams through future challenges.</p>
<blockquote>
<p>I&rsquo;ve found that the most effective approach isn&rsquo;t to mandate security practices after the fact, but to build security thinking into the fundamental architectural decisions that guide daily development work.</p>
</blockquote>
<p>When teams understand why certain architectural patterns prevent entire classes of vulnerabilities, they start making secure choices naturally.</p>
<h2 id="the-leadership-challenge-building-security-culture-through-architecture">The Leadership Challenge: Building Security Culture Through Architecture</h2>
<p>The difference between teams that build secure applications and those that struggle with security incidents comes down to how engineering leaders approach architectural decision-making. Security-conscious teams don&rsquo;t just follow security checklists—they&rsquo;ve internalized security principles that guide their architectural choices.</p>
<p>This cultural shift happens when engineering leaders consistently demonstrate how security considerations influence technical decisions. When your team sees you weighing security implications during architecture reviews, evaluating third-party libraries through a security lens, and making trade-offs that prioritize long-term security over short-term convenience, they learn to apply the same thinking to their own work.</p>
<p>The framework I&rsquo;ve developed focuses on three architectural principles that, when consistently applied, create a foundation for security-conscious engineering culture:</p>
<ol>
<li><strong>Strategic Separation</strong>: Designing systems that isolate different types of data and functionality</li>
<li><strong>Intentional Configuration</strong>: Making deliberate choices about system defaults and access patterns</li>
<li><strong>Controlled Access</strong>: Building authorization thinking into system design from the start</li>
</ol>
<p>These are both technical guidelines and leadership tools for building teams whose decisions promote built-in security by default.</p>
<h2 id="strategic-separation-teaching-teams-to-think-in-security-boundaries">Strategic Separation: Teaching Teams to Think in Security Boundaries</h2>
<p>The most effective security-conscious engineering teams I&rsquo;ve worked with share a common trait: they instinctively think in terms of security boundaries. This isn&rsquo;t something that happens overnight—it&rsquo;s a cultural shift that engineering leaders must deliberately cultivate through architectural decisions and team education.</p>
<p>When I talk about strategic separation, I mean designing systems that isolate different types of data and functionality based on their security requirements and organizational impact. The goal isn&rsquo;t just to prevent specific vulnerabilities, but to create architectural patterns that make it obvious to your team when they&rsquo;re crossing security boundaries.</p>
<p>Consider a common scenario that exposes how teams think about security:</p>
<p>Your team is building a user profile feature that includes photo uploads. The natural instinct is to store user photos alongside other application assets. After all, they&rsquo;re both images. But this decision reveals whether your team thinks in terms of security boundaries.</p>
<p>A security-conscious team immediately recognizes that user-uploaded content and application assets have fundamentally different security requirements. Application assets are controlled, vetted, and part of your deployment process. User uploads are untrusted input that could contain malicious content or exploit path traversal vulnerabilities to access sensitive configuration files.</p>
<p>The architectural decision here can prevent path traversal attacks, but it also establishes a pattern that helps your team understand the security implications of data boundary decisions.</p>
<blockquote>
<p>When you consistently demonstrate that different types of data require different security approaches, your team starts applying this thinking to database design, API endpoints, and service architecture.</p>
</blockquote>
<p>This is where engineering leadership becomes crucial. The technical solution is straightforward: separate user-uploaded content from application assets using different storage systems, domains, or security contexts. But the leadership challenge is helping your team understand why this separation matters and how to apply the same thinking to future architectural decisions.</p>
<p>I&rsquo;ve found that the most effective approach is to make security boundaries visible in your architecture discussions. When reviewing designs, ask questions like: &ldquo;What happens if this data is compromised or malicious?&rdquo; and &ldquo;How would we contain an attack that starts here?&rdquo; These questions help teams internalize security thinking rather than just following rules.</p>
<p>The goal is creating teams that instinctively separate concerns based on security requirements, not just functional requirements. When your team starts proposing separated architectures without being prompted, you know the culture shift is working.</p>
<h2 id="intentional-configuration-building-security-conscious-deployment-culture">Intentional Configuration: Building Security-Conscious Deployment Culture</h2>
<p>Security misconfiguration represents one of the most persistent challenges in engineering leadership because it reveals gaps in team processes and organizational culture. The problem isn&rsquo;t that engineers don&rsquo;t understand security—it&rsquo;s that deployment processes often prioritize speed over security verification.</p>
<p>I&rsquo;ve seen engineering teams that had excellent security knowledge but still suffered issues because their deployment culture didn&rsquo;t include security configuration validation. The issue compounds when teams are under pressure to ship features quickly, making it easy to rationalize skipping security configuration reviews.</p>
<p>The solution isn&rsquo;t just better checklists or automated scanners, though those help. The real challenge is building a deployment culture where security configuration becomes as automatic as running tests. This requires engineering leaders to demonstrate that security configuration is a fundamental part of professional software deployment, not an optional extra.</p>
<p>When I work with teams on configuration security, I focus on three organizational patterns that prevent security misconfiguration:</p>
<ol>
<li><strong>Configuration as Code</strong>: Teams that treat configuration with the same rigor as application code naturally apply security thinking to deployment settings. When configuration changes require code review, security considerations become part of the discussion.</li>
<li><strong>Default-Secure Patterns</strong>: Rather than relying on engineers to remember security settings, establish organizational patterns where secure configuration is the default. This might mean custom deployment templates, infrastructure as code patterns, or automated validation that catches insecure defaults.</li>
<li><strong>Security Configuration Reviews</strong>: Just as you wouldn&rsquo;t deploy code without reviewing it, security configuration should be part of your regular architecture review process. This creates opportunities for knowledge sharing and continuous improvement.</li>
</ol>
<p>Security configuration problems are usually process problems disguised as technical problems.</p>
<blockquote>
<p>When teams consistently deploy with insecure configurations, it&rsquo;s often because their deployment process doesn&rsquo;t include security validation points, not because they lack security knowledge.</p>
</blockquote>
<p>Engineering leaders can transform this by making security configuration visible in deployment workflows. When your team sees you reviewing security settings during deployment reviews, asking questions about default configurations, and prioritizing security hardening alongside feature development, they learn to apply the same standards to their own work.</p>
<p>The goal is creating teams that instinctively question defaults and validate security configurations, not just when prompted by checklists, but because secure configuration has become part of their professional identity as engineers.</p>
<h2 id="controlled-access-designing-authorization-into-team-thinking">Controlled Access: Designing Authorization into Team Thinking</h2>
<p>Access control failures represent a particularly insidious class of security vulnerabilities because they&rsquo;re often invisible until it&rsquo;s too late. Unlike other security issues that can be caught by automated tools, access control problems require human understanding of business logic and user relationships. This makes them a perfect example of how security culture directly impacts security outcomes.</p>
<p>The challenge for engineering leaders is that beyond being a technical problem, access control is a design thinking problem. Teams that build secure access controls don&rsquo;t just implement authorization checks; they also think systematically about user relationships, privilege boundaries, and failure modes during the design phase.</p>
<blockquote>
<p>I&rsquo;ve observed that teams struggling with access control issues often share a common pattern: they build features first and add authorization as an afterthought.</p>
</blockquote>
<p>This approach creates security debt that compounds over time, making it harder to reason about who should have access to what functionality.</p>
<p>The solution requires a shift in how teams approach feature development. Instead of thinking about authorization as something you add to features, security-conscious teams think about authorization as a fundamental constraint that shapes feature design.</p>
<p>Consider the difference between these two approaches when building an admin moderation feature:</p>
<p><strong>Traditional Approach</strong>: Build the moderation interface, then add permission checks to prevent unauthorized access.</p>
<p><strong>Security-First Approach</strong>: Design the moderation feature as a completely separate system with its own authentication context, making unauthorized access architecturally impossible.</p>
<p>The second approach requires more upfront planning, but it creates systems that are secure by design rather than secure by careful (and slower, and more costly) implementation. More importantly, it teaches teams to think about authorization as a design constraint, not just a technical requirement.</p>
<p>This shift in thinking has organizational implications beyond just security. When teams consistently design features with authorization constraints in mind, they develop better intuition about user workflows, system boundaries, and API design. The security thinking improves overall system design.</p>
<blockquote>
<p>The goal is creating teams that instinctively design authorization into features rather than retrofitting it after implementation.</p>
</blockquote>
<p>As engineering leaders, we can foster this thinking by making authorization design visible in architecture discussions. When reviewing feature proposals, ask questions like: &ldquo;Who needs access to this?&rdquo; and &ldquo;How would we prevent privilege escalation?&rdquo; These questions help teams internalize authorization thinking as part of their design process.</p>
<h2 id="from-architecture-to-culture-the-leadership-impact">From Architecture to Culture: The Leadership Impact</h2>
<p>The three architectural principles I&rsquo;ve outlined—strategic separation, intentional configuration, and controlled access—represent more than just technical best practices. They&rsquo;re tools for building engineering cultures that make security decisions instinctively rather than reactively.</p>
<p>The transformation happens when engineering leaders consistently demonstrate that security considerations are integral to professional software development. When your team sees you making architectural decisions that prioritize security boundaries, questioning configuration defaults, and designing authorization into features from the start, they learn to apply the same thinking to their own work.</p>
<blockquote>
<p>This cultural shift has organizational benefits that extend far beyond security. Teams that think systematically about security boundaries also design better APIs, create more maintainable systems, and build more robust software overall. This security thinking improves engineering decision-making across the board.</p>
</blockquote>
<p>Security thinking isn&rsquo;t something you can successfully mandate through policies or checklists, though I&rsquo;ve seen many organizations try. It emerges from the accumulated architectural decisions your team makes and the frameworks they internalize for thinking about system design. When security considerations become part of how your team naturally approaches technical problems, you&rsquo;ve created something much more valuable than just secure applications—you&rsquo;ve built a team that can adapt to new security challenges as they (constantly) emerge.</p>
<p>As engineering leaders, our role isn&rsquo;t just to ensure our current systems are secure, but to build teams that will continue making security-conscious decisions as they face new challenges, technologies, and organizational pressures. The architectural principles we establish today define the security culture that will guide our teams through future unknowns.</p>
 ]]></content:encoded><enclosure url="https://victoria.dev/posts/how-engineering-leaders-build-security-culture-through-architecture-decisions/cover_hu73e92637aebb1682b96cf0460f9cc452_2063427_640x0_resize_box_3.png" length="255364" type="image/jpg"/></item><item><title>Building Data Protection Culture: Why Engineering Leaders Must Address the Human Side of Security</title><link>https://victoria.dev/posts/building-data-protection-culture-why-engineering-leaders-must-address-the-human-side-of-security/</link><pubDate>Mon, 09 Sep 2019 09:10:11 -0400</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/building-data-protection-culture-why-engineering-leaders-must-address-the-human-side-of-security/</guid><description>Prevent data breaches by building security-conscious teams. Engineering leaders' guide to creating culture where secure data handling becomes second nature.</description><content:encoded>
&lt;img src="https://victoria.dev/posts/building-data-protection-culture-why-engineering-leaders-must-address-the-human-side-of-security/cover_hu3d85b89eb9d0f9ebca0ddd94c26060d4_587474_640x0_resize_box_3.png" width="640" height="454"/><![CDATA[ <p>The most frustrating security incidents I&rsquo;ve dealt with as an engineering leader weren&rsquo;t caused by sophisticated attacks or zero-day vulnerabilities. They were caused by well-intentioned team members who accidentally exposed sensitive data through everyday tools and processes. A developer pasting API keys into a public Slack channel. A support engineer sharing database credentials through an unsecured text-sharing service. A product manager including real customer data in a publicly accessible report.</p>
<p>These incidents reveal a fundamental truth about data protection: it&rsquo;s not primarily a technical problem—it&rsquo;s an organizational one. The security of our applications depends as much on how our teams handle sensitive data in their daily workflows as it does on our encryption algorithms or access controls.</p>
<p>The challenge for engineering leaders is that traditional security approaches focus on technical controls while overlooking the human systems that determine how data actually flows through our organizations. When we treat data protection as purely a technical problem, we create a gap between our security policies and the reality of how our teams work.</p>
<blockquote>
<p>Teams with the strongest data protection practices don&rsquo;t rely on security tools alone. They have organizational cultures that make secure data handling feel natural and obvious.</p>
</blockquote>
<p>Building this kind of culture requires engineering leaders to think beyond technical solutions and address the underlying organizational patterns that lead to data exposure. The goal isn&rsquo;t just to prevent specific incidents, but to create teams that instinctively handle sensitive data securely, even under pressure.</p>
<h2 id="the-reality-of-data-exposure-its-happening-right-now">The Reality of Data Exposure: It&rsquo;s Happening Right Now</h2>
<p>Before diving into solutions, it&rsquo;s worth understanding just how pervasive data exposure has become. The reality is that sensitive data from organizations of all sizes is readily discoverable through simple search techniques. A quick search for <code>site:pastebin.com &quot;api_key&quot;</code> or <code>site:github.com &quot;password&quot;</code> reveals thousands of exposed credentials, database connections, and API keys from companies around the world.</p>
<p>This isn&rsquo;t theoretical—it&rsquo;s happening to teams just like yours, right now. The <a href="https://www.exploit-db.com/google-hacking-database">Google Hacking Database</a> catalogs thousands of search queries that can expose sensitive data, and security researchers regularly discover new leaked credentials on platforms like Pastebin, GitHub, and even internal Slack channels that have been accidentally made public.</p>
<figure class="screenshot"><img src="/posts/building-data-protection-culture-why-engineering-leaders-must-address-the-human-side-of-security/pastebin_apikey.png"
    alt="A screenshot of exposed api key in Google search"><figcaption>
      <p>API keys exposed through public paste services—a common data exposure pattern.</p>
    </figcaption>
</figure>

<p>The scale of this problem reveals something important: data exposure isn&rsquo;t just a technical failure, it&rsquo;s a systematic organizational problem. When thousands of developers across hundreds of companies make the same types of mistakes, it suggests that our industry-wide approach to data protection is fundamentally flawed.</p>
<h2 id="the-leadership-challenge-beyond-technical-solutions">The Leadership Challenge: Beyond Technical Solutions</h2>
<p>Most engineering leaders approach data protection through technical controls: better encryption, more restrictive access policies, automated scanning tools. These controls are important, but they don&rsquo;t address the root cause of most data exposure incidents.</p>
<p>The real challenge is organizational. When team members expose sensitive data, it&rsquo;s usually because they&rsquo;re working around limitations in their tools or processes. They&rsquo;re not necessarily being careless—they&rsquo;re trying to get their work done efficiently within the constraints of their environment.</p>
<p>Consider these common scenarios:</p>
<p><strong>The Developer&rsquo;s Dilemma</strong>: A developer needs to share a database connection string with a colleague to debug a production issue. The &ldquo;secure&rdquo; process involves filing a ticket, waiting for approval, and scheduling a meeting. The expedient process involves pasting it into Slack. Under deadline pressure, which do you think happens more often?</p>
<p><strong>The Support Engineer&rsquo;s Challenge</strong>: A support engineer needs to share customer data with the product team to investigate a bug. The secure process requires anonymizing the data, which takes time they don&rsquo;t have. The expedient process involves copying real data into a shared document.</p>
<p><strong>The Product Manager&rsquo;s Bind</strong>: A product manager needs to create test data for a demo. The secure process involves generating fake data that matches production patterns. The expedient process involves copying a subset of real customer data.</p>
<p>In each case, the data exposure isn&rsquo;t caused by malicious intent or even carelessness—it&rsquo;s caused by organizational friction between security requirements and operational reality.</p>
<blockquote>
<p>The most effective approach to data protection isn&rsquo;t to increase security friction, but to reduce the friction around secure practices while making insecure practices more obviously problematic.</p>
</blockquote>
<p>This is where engineering leadership becomes crucial. Technical solutions alone can&rsquo;t bridge the gap between security policies and operational needs. That requires organizational changes that make secure data handling the easiest and most obvious way to work.</p>
<h2 id="building-organizational-capabilities-for-data-protection">Building Organizational Capabilities for Data Protection</h2>
<p>The teams I&rsquo;ve worked with that have the strongest data protection practices share three organizational capabilities:</p>
<h3 id="1-secure-by-default-tooling">1. Secure-by-Default Tooling</h3>
<p>Instead of relying on people to remember security practices, these teams build security into their daily tools. This might mean:</p>
<ul>
<li><strong>Internal paste services</strong> that automatically expire and require authentication instead of relying on external tools</li>
<li><strong>Automated credential management</strong> through tools like AWS Secrets Manager or HashiCorp Vault that make secure credential sharing easier than insecure sharing</li>
<li><strong>Data anonymization tools</strong> that make it trivial to generate realistic test data without using production data</li>
</ul>
<p>People will use the most convenient option available. If secure practices are more convenient than insecure practices, people will choose secure practices naturally.</p>
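<p>As a sketch of the anonymization point, even a few lines of shell can lower the friction of producing shareable test data. Everything here (the filenames, the column layout, and the single-placeholder strategy) is illustrative rather than a recommendation of a specific tool:</p>

```shell
# Toy demonstration: mask anything that looks like an email address so a
# customer export can be shared as test data. File names and layout are
# made-up examples.
printf 'id,email\n1,alice@corp.example\n2,bob@corp.example\n' > customers.csv
sed -E 's/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/user@example.com/g' \
  customers.csv > customers_anon.csv
cat customers_anon.csv
```

<p>Real anonymization needs to preserve referential integrity and cover many more data types, but a script this small is often enough to make the secure path the convenient one.</p>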
<h3 id="2-visible-security-boundaries">2. Visible Security Boundaries</h3>
<p>Teams with strong data protection practices make security boundaries obvious in their workflows. This includes:</p>
<ul>
<li><strong>Clear data classification</strong> that helps team members understand what constitutes sensitive data</li>
<li><strong>Workflow integration</strong> that flags when someone is about to share sensitive data through insecure channels</li>
<li><strong>Regular security boundary discussions</strong> during architectural reviews and design meetings</li>
</ul>
<p>When security boundaries are visible and well-understood, team members can make informed decisions about data handling without needing to become security experts.</p>
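<p>A minimal sketch of the workflow-integration idea: a Git pre-commit hook that refuses commits containing likely credentials. The patterns below are examples only; dedicated scanners such as gitleaks or git-secrets cover far more cases and are a better long-term choice:</p>

```shell
# Write an example pre-commit hook that blocks staged changes which look
# like credentials. Install it by copying into a repository's .git/hooks/.
cat > pre-commit <<'EOF'
#!/bin/sh
# Reject the commit if staged changes contain an AWS-style access key ID
# or a PEM private key header. Extend the pattern for your environment.
if git diff --cached | grep -qE 'AKIA[0-9A-Z]{16}|-----BEGIN .*PRIVATE KEY-----'; then
  echo "Possible secret in staged changes; commit blocked." >&2
  exit 1
fi
EOF
chmod +x pre-commit
```

<p>The point is the feedback loop, not the pattern list: the warning arrives at the moment of the mistake, before the data leaves the developer&rsquo;s machine.</p>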
<h3 id="3-treating-security-mistakes-as-learning-opportunities">3. Treating Security Mistakes as Learning Opportunities</h3>
<p>Perhaps most importantly, these teams create environments where people feel encouraged to report security mistakes or near-misses. This cultural element is often overlooked, but it&rsquo;s crucial for continuous improvement.</p>
<p>Teams that punish security mistakes create incentives for people to hide problems rather than fix them. Teams that treat security mistakes as learning opportunities create incentives for people to surface issues early and help improve processes.</p>
<h2 id="the-engineering-leaders-role-creating-sustainable-change">The Engineering Leader&rsquo;s Role: Creating Sustainable Change</h2>
<p>Building these organizational capabilities requires engineering leaders to approach data protection as a change management challenge, not just a technical one. This means:</p>
<ol>
<li><strong>Making the Case for Investment</strong>: Security tooling and process improvements often require upfront investment that may not have immediate visible returns. Engineering leaders need to advocate for this investment by helping stakeholders understand the organizational costs of data exposure incidents.</li>
<li><strong>Modeling Secure Behavior</strong>: When engineering leaders consistently demonstrate secure data handling practices in their own work, it signals to the team that these practices are valued and expected.</li>
<li><strong>Addressing Process Gaps</strong>: When team members work around security processes, it&rsquo;s often because those processes don&rsquo;t meet their operational needs. Engineering leaders need to identify and address these gaps rather than simply enforcing compliance.</li>
<li><strong>Celebrating Security Improvements</strong>: Teams that recognize and celebrate security improvements create cultures where security work is valued rather than seen as overhead.</li>
</ol>
<p>The goal isn&rsquo;t to eliminate all possibility of data exposure—that&rsquo;s impractical.</p>
<blockquote>
<p>The goal is to build organizational capabilities that make data exposure increasingly unlikely and ensure that when incidents do occur, they&rsquo;re caught and resolved quickly.</p>
</blockquote>
<h2 id="from-reactive-to-proactive-building-long-term-security-culture">From Reactive to Proactive: Building Long-Term Security Culture</h2>
<p>The most successful engineering leaders I&rsquo;ve worked with approach data protection as an ongoing organizational capability rather than a one-time project. This means:</p>
<ol>
<li><strong>Regular Security Culture Assessment</strong>: Periodically evaluating whether your team&rsquo;s tools and processes support secure data handling or create friction that encourages workarounds.</li>
<li><strong>Continuous Tool Investment</strong>: Investing in tools and processes that make secure data handling easier and more convenient than insecure alternatives.</li>
<li><strong>Cross-Functional Security Discussions</strong>: Including security considerations in product planning, design reviews, and operational discussions rather than treating security as a separate concern.</li>
<li><strong>Security Skill Development</strong>: Helping team members develop the knowledge and judgment needed to make good security decisions in novel situations.</li>
</ol>
<p>The teams that excel at data protection have built organizational cultures where security is valued as part of the development lifecycle. This cultural shift doesn&rsquo;t happen overnight, but it creates sustainable security practices that adapt to new challenges and technologies.</p>
<p>As engineering leaders, our role is to build teams that will continue making secure choices as they face new tools, processes, and organizational pressures. The organizational capabilities we build today define how our teams will handle tomorrow&rsquo;s security challenges.</p>
<p>When data protection becomes embedded in your team&rsquo;s culture rather than imposed through policies, you&rsquo;ve created something much more valuable than just better security—you&rsquo;ve built a team that can adapt to new security challenges while maintaining the operational efficiency needed to build great products.</p>
 ]]></content:encoded><enclosure url="https://victoria.dev/posts/building-data-protection-culture-why-engineering-leaders-must-address-the-human-side-of-security/cover_hu3d85b89eb9d0f9ebca0ddd94c26060d4_587474_640x0_resize_box_3.png" length="144703" type="image/jpg"/></item><item><title>On Doing Great Things</title><link>https://victoria.dev/posts/on-doing-great-things/</link><pubDate>Fri, 08 Mar 2019 18:36:15 -0500</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/on-doing-great-things/</guid><description>Lessons from Grace Hopper on doing meaningful work in tech. Why focusing on contribution over recognition creates lasting impact in engineering leadership.</description><content:encoded>
&lt;img src="https://victoria.dev/posts/on-doing-great-things/grace-murray-hopper-park_hu2bbff2aa9b6171d9361d3e244239fc60_166728_640x0_resize_q75_box.jpeg" width="640" height="480"/><![CDATA[ <p>It&rsquo;s International Women&rsquo;s Day, and I&rsquo;m thinking about Grace Hopper.</p>
<p><a href="https://en.m.wikipedia.org/wiki/Grace_Hopper">Grace Hopper</a> was an amazing lady who did great things. She envisioned and helped create programming languages that translate English terms into machine code. She persevered in her intention to join the US Navy from the time she was rejected at 34 years old, to being sworn in to the US Navy Reserve three years later, to retiring with the rank of commander at age 60&hellip; then was recalled (twice) and promoted to the rank of captain at the age of 67. She advocated for distributed networks and developed computer testing standards we use today, among other achievements too numerous to list here.</p>
<p>By my read, throughout her life, she kept her focus on her work. She did great things because she could do them, and felt some duty to do them. Her work speaks for itself.</p>
<p>I recently came across a sizeable rock denoting a rather small, quiet park. It looks like this:</p>
<p><img src="grace-murray-hopper-park.jpeg#center" alt="Signage on a rock denoting Grace Murray Hopper Park"></p>
<p>When I first saw this park, I thought it in no way did this great lady justice. But upon some reflection, its lack of assumption and grandeur grew on me. And today, it drew to the forefront something that&rsquo;s been on my mind.</p>
<p>I try and contribute regularly to the wide world of technology, usually through building things, writing, and mentorship. I sometimes get asked to participate in female-focused tech events. I hear things like, &ldquo;too few developers are women,&rdquo; or &ldquo;we need more women in blockchain,&rdquo; or &ldquo;we need more female coders.&rdquo;</p>
<p>For some time I haven&rsquo;t been sure how to respond, because while my answer isn&rsquo;t &ldquo;yes,&rdquo; it&rsquo;s not exactly &ldquo;no,&rdquo; either. It&rsquo;s really, &ldquo;no, because&hellip;&rdquo; and it&rsquo;s because I&rsquo;m afraid. I&rsquo;m afraid of misrepresenting myself, my values, and my goals.</p>
<p>Discrimination and racism are real things. They exist in the minds and attitudes of a very small percentage of very loud people, as they always will. These people aren&rsquo;t, however, the majority. They are small.</p>
<p>I think that on the infrequent occasions when we encounter these people, we should do our best to lead by example. We should have open minds, tell our stories, listen to theirs. Try and learn something. That&rsquo;s all.</p>
<p>When I present myself, I don&rsquo;t point out that I&rsquo;m a woman. I don&rsquo;t align myself with &ldquo;women in tech&rdquo; or seek to represent them. I don&rsquo;t go to women-only meetings or support organizations that discriminate against men, or anyone at all. It&rsquo;s not because I&rsquo;m insecure as a woman, or ashamed that I&rsquo;m a woman, or some other inflammatory adjective that lately shows up in conjunction with being female. It&rsquo;s because I&rsquo;ve no reason to point out my gender, any more than needing to point out that my hair is black, or that I&rsquo;m short. It&rsquo;s obvious and simultaneously irrelevant.</p>
<p>When I identify with a group, I talk about the go-getters who wake up at 0500 every day and go work out—no matter the weather, or whether they feel like it. I tell stories about the people I&rsquo;ve met in different countries around the world, who left home, struck out on their own, and had an adventure, because they saw value in the experience. I identify with people who constantly build things, try things, design and make things, and then share those things with the world, because they love to do so. This is how I see myself. This is what matters to me.</p>
<p>Like the unassuming park named after an amazing woman, when truly great things are done, they are done relatively quietly. Not done for the fanfare of announcing them to the world, but for the love of the thing itself. So go do great things, please. The world still needs them.</p>
 ]]></content:encoded><enclosure url="https://victoria.dev/posts/on-doing-great-things/grace-murray-hopper-park_hu2bbff2aa9b6171d9361d3e244239fc60_166728_640x0_resize_q75_box.jpeg" length="81873" type="image/jpg"/></item><item><title>Building Code Quality Culture Through Commit Standards</title><link>https://victoria.dev/posts/building-code-quality-culture-through-commit-standards/</link><pubDate>Mon, 06 Aug 2018 08:54:56 -0400</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/building-code-quality-culture-through-commit-standards/</guid><description>Transform your team's engineering culture with structured Git commit standards. Learn how clear commit messages drive code quality and team collaboration.</description><content:encoded>
&lt;img src="https://victoria.dev/posts/building-code-quality-culture-through-commit-standards/cover_git-commit-art_hu3c5d7a1ac69b5f3a44f90a6688078cc1_89530_640x0_resize_box_3.png" width="640" height="320"/><![CDATA[ <p>When I first started leading engineering teams, I thought high quality code was about efficient algorithms and architecture. I was wrong. The biggest indicator of a team&rsquo;s engineering maturity shows up in their commit history.</p>
<p>A clean commit history reveals a team that thinks about maintainability, communicates context effectively, and takes pride in their craft. Messy commits signal the opposite: rushed work, poor communication habits, and a culture that prioritizes shipping over sustainability (guaranteed to make the <a href="/posts/the-descent-is-harder-than-the-climb/">descent harder than the climb</a>). As an engineering leader, establishing commit standards builds the foundation for everything else you want to achieve.</p>
<h2 id="the-cost-of-poor-commit-culture">The Cost of Poor Commit Culture</h2>
<p>I&rsquo;ve seen this pattern most often in the post-startup phase. Issues that should take a 30-minute investigation stretch into hours because of useless commit messages like &ldquo;fix stuff,&rdquo; &ldquo;updates,&rdquo; or &ldquo;refactor&rdquo;—messages that make it impossible to understand the intent behind each change. One of the least thoughtful comments I&rsquo;ve heard on the subject went along the lines of, &ldquo;Capable engineers should just be able to read the code and understand the change; we don&rsquo;t need good commit messages.&rdquo; Right. Have fun explaining to the director why a straightforward bug fix requires days of reading code because the team allows lazy commits.</p>
<p>It&rsquo;s arguable that the real cost isn&rsquo;t even the lost time—it&rsquo;s the erosion of trust. Teams with lazy commits start questioning each other&rsquo;s work quality, code reviews become adversarial, and velocity plummets as a result.</p>
<p>Useful commit standards create a culture with:</p>
<ul>
<li><strong>Context preservation</strong> - Future team members (including your future self) can understand not just what changed, but why</li>
<li><strong>Accountability</strong> - Engineers take ownership of their changes and think through the impact</li>
<li><strong>Knowledge transfer</strong> - Institutional knowledge doesn&rsquo;t walk out the door when someone leaves</li>
<li><strong>Debugging efficiency</strong> - When things break, you can quickly trace the source and reasoning</li>
</ul>
<p>Poor commit habits compound over time. What starts as a small productivity tax becomes a massive technical debt that slows down everything your team tries to accomplish.</p>
<h2 id="making-standards-stick-the-template-approach">Making Standards Stick: The Template Approach</h2>
<p>The biggest challenge with commit standards isn&rsquo;t defining them—it&rsquo;s getting your team to actually follow them consistently. I&rsquo;ve seen too many teams create detailed commit guidelines that gather dust in a README file while engineers continue writing &ldquo;fixed stuff&rdquo; messages.</p>
<p>The solution is to make good practices easier than bad ones. Instead of expecting people to remember complex guidelines under deadline pressure, embed the standards directly into the workflow.</p>
<p>Here&rsquo;s the team commit template I&rsquo;ve successfully rolled out across multiple organizations:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-text" data-lang="text"><span style="display:flex;"><span>## If applied, this commit will...
</span></span><span style="display:flex;"><span>## [Add/Fix/Remove/Update/Refactor/Document] [issue #id] [summary]
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>## Why is it necessary? (Bug fix, feature, improvements?)
</span></span><span style="display:flex;"><span>-
</span></span><span style="display:flex;"><span>## How does the change address the issue?
</span></span><span style="display:flex;"><span>-
</span></span><span style="display:flex;"><span>## What side effects does this change have?
</span></span><span style="display:flex;"><span>-
</span></span></code></pre></div><p>To implement this across your team, save the template to <code>~/.gitmessage</code> and make configuring it part of onboarding:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-sh" data-lang="sh"><span style="display:flex;"><span>git config --global commit.template ~/.gitmessage
</span></span></code></pre></div><p>The template serves multiple purposes beyond just formatting. It forces engineers to think through the &ldquo;why&rdquo; behind their changes, which often reveals edge cases or better approaches before the code ever reaches review. I&rsquo;ve watched junior engineers discover design flaws simply by trying to articulate their commit message.</p>
<p>More importantly, it creates consistency without feeling like micromanagement. Engineers appreciate having a framework with a short feedback loop rather than being told their commit messages are &ldquo;wrong&rdquo; after the fact.</p>
<h3 id="connecting-work-to-business-impact">Connecting Work to Business Impact</h3>
<p>One pattern I&rsquo;ve noticed across high-performing teams is how they connect individual commits to larger business objectives. Consistently linking commits to issue numbers creates traceability from business requirements all the way down to implementation details.</p>
<p>When commits reference issues consistently, several things happen:</p>
<ul>
<li><strong>Product managers can track feature progress</strong> without constantly asking for updates</li>
<li><strong>Support teams can quickly identify which changes might relate to customer issues</strong></li>
<li><strong>Security audits become straightforward</strong> when you need to trace the history of sensitive code</li>
<li><strong>Technical debt discussions become data-driven</strong> when you can quantify how much time is spent on maintenance vs. features</li>
</ul>
<p>Teams can use this traceability to make compelling cases for technical investments. When every bug fix commit links back to customer-reported issues, the cost of poor code quality becomes visible to leadership in a way that resonates.</p>
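<p>The traceability is concrete: with consistent references, every commit touching an issue is one search away. A scratch-repository demonstration (the issue number is hypothetical):</p>

```shell
# Demonstrate tracing all commits that reference a (hypothetical) issue
# number, using a throwaway repository.
cd "$(mktemp -d)" && git init -q
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "Add #123 login rate limiting"
git commit -q --allow-empty -m "Update docs"
git log --oneline --grep='#123'
```

<p>In a real repository you would run just the final <code>git log</code> line; the same search works across years of history.</p>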
<h3 id="removing-friction-through-tooling">Removing Friction Through Tooling</h3>
<p>The most effective way to improve team habits is to make the desired behavior the easiest behavior. Beyond templates, consider how your development environment can reinforce good practices.</p>
<p>For teams using VS Code, I recommend setting up workspace configurations that include spell check and line wrapping for commit messages. This prevents the common problem of commit messages that are impossible to read in terminal displays.</p>
<p>More importantly, consider integrating commit quality into your CI/CD pipeline. Tools like <code>commitlint</code> can automatically validate commit message format, while pre-commit hooks can catch obvious issues before they reach the remote repository.</p>
<p>The goal is to provide immediate feedback when standards aren&rsquo;t met, rather than discovering problems during code review when fixing them is more disruptive to workflow.</p>
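<p>If adopting commitlint is too heavy a lift at first, even a few lines of shell in a <code>commit-msg</code> hook give that immediate feedback. This sketch enforces only the template&rsquo;s leading verb (adjust the list to your own standard) and is shown in a throwaway repository so the behavior is visible:</p>

```shell
# Create a scratch repo and install a commit-msg hook that enforces the
# template's leading verb. A rejected message fails fast, before review.
cd "$(mktemp -d)" && git init -q
git config user.email dev@example.com && git config user.name dev
cat > .git/hooks/commit-msg <<'EOF'
#!/bin/sh
head -n 1 "$1" | grep -qE '^(Add|Fix|Remove|Update|Refactor|Document) ' || {
  echo "Message must start with Add/Fix/Remove/Update/Refactor/Document." >&2
  exit 1
}
EOF
chmod +x .git/hooks/commit-msg
git commit -q --allow-empty -m "Fix #42 broken login redirect"
git commit --allow-empty -m "stuff" 2>/dev/null || echo "rejected as expected"
```

<p>The second commit never lands: the hook rejects it before it enters history, which is exactly the short feedback loop engineers appreciate.</p>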
<h2 id="teaching-atomic-commits-through-code-review">Teaching Atomic Commits Through Code Review</h2>
<p>One of the most valuable lessons I learned as an engineering manager is that teaching atomic commits—one logical change per commit—dramatically improves code review quality and team collaboration.</p>
<p>When engineers make atomic commits, several things happen naturally:</p>
<ul>
<li><strong>Code reviews become faster</strong> because each commit tells a clear story</li>
<li><strong>Debugging becomes surgical</strong> because you can isolate exactly which logical change introduced a problem</li>
<li><strong>Feature rollbacks become safe</strong> when you can revert a specific piece of functionality without touching unrelated code</li>
<li><strong>Knowledge transfer improves</strong> because the commit history becomes a tutorial of how the system evolved</li>
</ul>
<p>The challenge is that atomic commits require more upfront thinking. Engineers need to plan their approach before writing code, which feels slower initially but pays massive dividends in team velocity over time.</p>
<p>I&rsquo;ve found the most effective way to teach this is through code review feedback that focuses on commit structure, not just code quality. When I see a pull request with one massive commit containing three different features, I ask the engineer to break it down and explain the reasoning for each piece.</p>
<h3 id="setting-team-expectations-for-commit-cleanup">Setting Team Expectations for Commit Cleanup</h3>
<p>Here&rsquo;s where leadership philosophy matters more than technical mechanics. Some teams insist on pristine linear history, while others prefer to preserve the full context of how work actually happened, including false starts and iterations.</p>
<p>I&rsquo;ve found the most success with a middle path: require clean, atomic commits for the main branch, but allow messy work-in-progress commits on feature branches. This gives engineers the freedom to commit frequently while working (which improves backup and collaboration) while ensuring the permanent history tells a clear story.</p>
<p>The key is establishing this expectation early and consistently. During code reviews, I focus on commit structure as much as code quality. A well-structured commit history often indicates clear thinking about the problem space.</p>
<p>For teams new to this practice, I recommend starting with simple squash merges:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-sh" data-lang="sh"><span style="display:flex;"><span>git reset --soft origin/master
</span></span><span style="display:flex;"><span>git commit
</span></span></code></pre></div><p>This approach takes multiple messy commits and combines them into one clean commit before merging to main. It&rsquo;s forgiving for engineers still learning atomic commit habits while maintaining clean project history.</p>
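<p>To make the mechanics concrete, here is the same squash performed end-to-end in a throwaway repository. A saved base commit stands in for <code>origin/master</code>:</p>

```shell
# Three messy work-in-progress commits become one clean commit. $base
# stands in for origin/master.
cd "$(mktemp -d)" && git init -q
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "Add initial project"
base=$(git rev-parse HEAD)
for n in 1 2 3; do echo "$n" >> work.txt; git add work.txt; git commit -qm "wip $n"; done
git reset --soft "$base"   # keep the file changes, drop the wip commits
git commit -qm "Add work.txt, combining three wip commits"
git log --oneline          # two commits remain: the base and the squash
```

<p>Because <code>--soft</code> moves only the branch pointer and leaves the index intact, no work is lost; only the noisy intermediate history disappears.</p>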
<h3 id="building-confidence-through-practice">Building Confidence Through Practice</h3>
<p>One concern I often hear from engineering managers is that requiring clean commits will slow down their team. In my experience, the opposite is true—but only after an initial learning period where engineers build confidence with git operations.</p>
<p>The most effective approach I&rsquo;ve found is pairing experienced engineers with those still learning git hygiene. When someone sees a colleague quickly reorganize commits using interactive rebase, it demystifies the process and builds confidence.</p>
<p>For selective commit cleanup, I teach this pattern:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-sh" data-lang="sh"><span style="display:flex;"><span>git reset --soft HEAD~5
</span></span><span style="display:flex;"><span>git commit -m <span style="color:#e6db74">&#34;New message for the combined commit&#34;</span>
</span></span></code></pre></div><p>This approach lets engineers combine the last few commits while preserving earlier work that was already well-structured. It&rsquo;s particularly useful for cleaning up the &ldquo;fix typo&rdquo; and &ldquo;address code review feedback&rdquo; commits that naturally accumulate during development.</p>
<p>The key is making this feel like a normal part of the development process, not a burdensome extra step. I&rsquo;ve found success by incorporating commit cleanup time into sprint planning and explicitly discussing it during retrospectives.</p>
<h3 id="when-to-invest-in-advanced-git-skills">When to Invest in Advanced Git Skills</h3>
<p>Interactive rebase is where engineering teams often get stuck. It&rsquo;s powerful enough to completely reorganize commit history, but complex enough that many engineers avoid it entirely. As a leader, you need to decide whether this level of git sophistication is worth the investment for your team.</p>
<p>I&rsquo;ve found that teams working on critical infrastructure or open source projects benefit significantly from advanced git skills. The ability to craft a well-structured commit history pays dividends when you&rsquo;re debugging production issues or when external contributors need to understand your codebase.</p>
<p>For most product teams, however, I recommend focusing on simpler patterns that achieve 80% of the benefit with 20% of the complexity. Interactive rebase can be intimidating, and I&rsquo;d rather have consistent, good-enough commits than inconsistent attempts at perfection.</p>
<p>That said, having at least one team member comfortable with complex git operations is valuable. They become the &ldquo;git expert&rdquo; who can help others when commits get tangled, and they can teach advanced techniques during pair programming sessions.</p>
<p>The key is matching your git standards to your team&rsquo;s maturity and project needs. A startup moving fast might prioritize different things than a team maintaining financial systems.</p>
<h2 id="encouraging-experimentation-through-safety-nets">Encouraging Experimentation Through Safety Nets</h2>
<p>One of the biggest barriers to adopting better git practices is fear of making mistakes. Engineers worry that attempting to clean up their commits will result in lost work or broken history. Git stash becomes invaluable as both a technical tool and a confidence builder.</p>
<p>I encourage teams to use <code>git stash</code> liberally when learning new git techniques. It creates a safety net that makes experimentation feel safe:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-sh" data-lang="sh"><span style="display:flex;"><span>git stash  <span style="color:#75715e"># Save current work</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Try some git operations</span>
</span></span><span style="display:flex;"><span>git stash pop  <span style="color:#75715e"># Restore work if needed</span>
</span></span></code></pre></div><p>This pattern is particularly useful when teaching engineers to clean up commits before submitting pull requests. They can stash their current work, experiment with interactive rebase or commit squashing, and easily recover if something goes wrong.</p>
<p>Beyond the technical benefits, stash encourages a more exploratory mindset around git. Engineers who feel comfortable experimenting with different approaches often develop better intuition for structuring their commits in the first place.</p>
<p>I&rsquo;ve also found that teams with good stash habits tend to have fewer &ldquo;work in progress&rdquo; commits cluttering their history. When engineers know they can easily save and restore work, they&rsquo;re more likely to commit only when they&rsquo;ve reached a logical checkpoint.</p>
<h2 id="creating-accountability-through-release-markers">Creating Accountability Through Release Markers</h2>
<p>Tags serve a purpose beyond marking releases—they create natural checkpoints for reflecting on code quality and team practices. When teams establish a regular tagging cadence, it forces conversations about what constitutes a release-worthy state.</p>
<p>I&rsquo;ve found that teams with good tagging habits naturally develop better commit discipline. Knowing that commits will be part of a tagged release creates a sense of permanence that encourages more thoughtful commit messages and cleaner history.</p>
<p>The process of creating a release tag often reveals quality issues that might otherwise slip through:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-sh" data-lang="sh"><span style="display:flex;"><span>git tag -a v1.2.0 -m <span style="color:#e6db74">&#34;Release: Enhanced user authentication&#34;</span>
</span></span><span style="display:flex;"><span>git push --follow-tags
</span></span></code></pre></div><p>When someone has to write a release message that summarizes the changes since the last tag, poorly structured commits become obvious. This creates a feedback loop that naturally improves commit quality over time.</p>
<p>Tags also enable powerful debugging workflows. When production issues arise, being able to quickly identify which release introduced a problem can dramatically reduce time to resolution. This capability becomes especially valuable as teams scale and the commit volume increases.</p>
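<p>As a sketch of that debugging workflow, comparing two tags immediately narrows down which commits a release introduced. The tag names and messages here are illustrative:</p>

```shell
# In a throwaway repository: tag two releases, then list exactly what
# changed between them, the first question in any regression hunt.
cd "$(mktemp -d)" && git init -q
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "Add baseline feature set"
git tag -a v1.1.0 -m "Release: baseline"
git commit -q --allow-empty -m "Update session handling"
git tag -a v1.2.0 -m "Release: session changes"
git log --oneline v1.1.0..v1.2.0
```

<p>With well-structured commits, that one range query often points directly at the suspect change.</p>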
<p>More importantly, tags create opportunities for celebration. Teams that regularly tag releases can look back at their progress and feel genuine accomplishment. This positive reinforcement helps sustain good commit habits even when deadlines pressure teams to cut corners.</p>
<h2 id="building-lasting-culture-change">Building Lasting Culture Change</h2>
<p>Establishing commit standards is ultimately about building a culture that values craftsmanship and communication. The technical practices matter, but the underlying mindset matters more.</p>
<p>The most successful transformations I&rsquo;ve led started with making the case for why commit quality matters to the team&rsquo;s goals. When engineers understand that better commits lead to faster debugging, easier code reviews, and more effective knowledge transfer, they become invested in improvement rather than resistant to new rules.</p>
<p>Implementation should be gradual and supportive rather than punitive. Start with commit message templates and basic guidelines. Celebrate improvements publicly during retrospectives. Use code review as a teaching opportunity rather than a barrier mechanism.</p>
<p>Most importantly, lead by example. When team members see you taking time to craft thoughtful commit messages and clean up your own commit history, it signals that these practices are genuinely valued rather than just bureaucratic overhead.</p>
<p>The payoff extends far beyond git hygiene. Teams that develop discipline around commit quality often improve in other areas too: code review thoroughness, documentation habits, and general attention to craft. These practices compound over time to create engineering cultures that can scale effectively and maintain high quality even under pressure.</p>
<p>Building this kind of culture takes patience and consistency, but the investment pays dividends in team velocity, code quality, and job satisfaction for years to come.</p>
 ]]></content:encoded><enclosure url="https://victoria.dev/posts/building-code-quality-culture-through-commit-standards/cover_git-commit-art_hu3c5d7a1ac69b5f3a44f90a6688078cc1_89530_640x0_resize_box_3.png" length="50382" type="image/jpg"/></item><item><title>Building High-Performance Engineering Teams Through Feedback Loops</title><link>https://victoria.dev/posts/building-high-performance-engineering-teams-through-feedback-loops/</link><pubDate>Mon, 02 Jul 2018 10:08:41 -0400</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/building-high-performance-engineering-teams-through-feedback-loops/</guid><description>How engineering leaders can implement structured feedback systems to accelerate team learning, improve code quality, and build sustainable development practices.</description><content:encoded>
&lt;img src="https://victoria.dev/posts/building-high-performance-engineering-teams-through-feedback-loops/cover_feedback-pbjreview_hu8e37b70b3728865778c1f4fe4adf7fff_53513_640x0_resize_box_3.png" width="640" height="288"/><![CDATA[ <p>The highest-performing engineering teams share one critical characteristic: they&rsquo;ve mastered rapid feedback loops. While many organizations talk about continuous improvement, few implement the systematic feedback mechanisms that make it possible.</p>
<p>The difference between teams that ship quality software consistently and those that struggle with technical debt and missed deadlines comes down to how quickly they can observe, learn, and adjust their approach. As an engineering leader, your job isn&rsquo;t to be the source of all feedback—it&rsquo;s to build systems that enable your team to continuously improve themselves.</p>
<h2 id="the-engineering-leadership-ooda-loop">The Engineering Leadership OODA Loop</h2>
<p>United States Air Force Colonel John Boyd developed the concept of the <a href="https://en.wikipedia.org/wiki/OODA_loop">OODA loop</a>, OODA being an acronym for <strong>observe, orient, decide, act</strong>. While originally designed for military strategy, this framework translates perfectly to engineering team leadership:</p>
<ul>
<li><strong>Observe:</strong> Gather data about team performance, code quality, delivery metrics, and team dynamics</li>
<li><strong>Orient:</strong> Analyze this information in the context of team goals, organizational constraints, and previous experience</li>
<li><strong>Decide:</strong> Choose specific interventions or changes to improve team performance</li>
<li><strong>Act:</strong> Implement these changes and measure their impact</li>
</ul>
<p>The power of the OODA loop for engineering leaders is in its emphasis on speed. Teams that can observe problems, orient around solutions, decide on actions, and act quickly will consistently outperform teams with slower feedback cycles. I&rsquo;ve seen engineering teams transform their delivery speed and quality by implementing systematic OODA loops at multiple levels: individual developer growth, code review processes, sprint retrospectives, and quarterly team health assessments.</p>
<h2 id="high-performance-team-feedback-systems">High-Performance Team Feedback Systems</h2>
<p>The most effective engineering teams I&rsquo;ve led implement feedback loops at multiple time scales. Here&rsquo;s what a comprehensive feedback system looks like:</p>
<p><strong>Daily feedback (hours):</strong></p>
<ul>
<li>Morning standup with updates on blockers and progress</li>
<li>Real-time pair programming and code review</li>
<li>Continuous integration feedback from automated tests</li>
<li>End-of-day team sync on tomorrow&rsquo;s priorities</li>
</ul>
<p><strong>Weekly feedback (days):</strong></p>
<ul>
<li>Sprint planning and backlog refinement</li>
<li>Code quality metrics review</li>
<li>Technical debt assessment</li>
<li>Team velocity and burndown analysis</li>
</ul>
<p><strong>Monthly feedback (weeks):</strong></p>
<ul>
<li>Sprint retrospectives with actions for improvements</li>
<li>Team health and satisfaction surveys</li>
<li>Architecture and technical direction discussions</li>
<li>Individual growth and career development conversations</li>
</ul>
<p><strong>Quarterly feedback (months):</strong></p>
<ul>
<li>Team performance against organizational goals</li>
<li>Process effectiveness and tooling evaluation</li>
<li>Long-term technical strategy adjustments</li>
<li>Team composition and skill gap analysis</li>
</ul>
<p>Each feedback loop serves a different purpose and operates at a different time scale. Your job as a leader is to ensure all these loops are functioning and feeding information up and down the hierarchy.</p>
<h2 id="building-team-feedback-culture">Building Team Feedback Culture</h2>
<p>Implementing effective feedback loops requires intentional leadership and a systematic approach. Here&rsquo;s the framework I use to build high-performance engineering teams:</p>
<ol>
<li>Define clear, measurable team objectives</li>
<li>Create transparent planning and prioritization processes</li>
<li>Implement automation that provides rapid feedback</li>
<li>Build code review culture that accelerates learning</li>
<li>Set up regular process retrospectives</li>
<li>Close the loop: act on feedback systematically</li>
</ol>
<p>Each of these components reinforces the others, creating a self-improving system where the team becomes increasingly effective at identifying problems and implementing solutions.</p>
<h3 id="1-define-clear-measurable-team-objectives">1. Define Clear, Measurable Team Objectives</h3>
<p>Effective feedback loops require clear success criteria. Without concrete objectives, your team will struggle to know whether their improvements are actually working. As an engineering leader, you need to translate business goals into specific, measurable engineering outcomes.</p>
<ul>
<li><strong>Technical objectives:</strong> Delivery commitments with specific scope and timelines, quality metrics (bug rates, test coverage, performance benchmarks), and technical debt reduction goals with measurable impact</li>
<li><strong>Process objectives:</strong> Sprint velocity and predictability targets, code review turnaround time improvements, and deployment frequency and reliability goals</li>
<li><strong>Team health objectives:</strong> Individual skill development milestones, team satisfaction and engagement metrics, and knowledge sharing and documentation goals</li>
</ul>
<p>Make these objectives visible and regularly review progress using dashboards, team ceremonies, and one-on-one conversations. Treat objectives as hypotheses to test, not contracts to fulfill at all costs. When feedback indicates an objective is no longer relevant or achievable, adjust it.</p>
<h3 id="2-create-transparent-planning-and-prioritization-processes">2. Create Transparent Planning and Prioritization Processes</h3>
<p>High-performance teams excel at breaking down complex objectives into manageable, measurable work streams. This decomposition serves two purposes: it makes work achievable and it creates multiple feedback points where the team can course-correct.</p>
<ul>
<li><strong>Epic level (quarterly goals):</strong> Large initiatives that deliver significant business value, typically spanning 2-3 sprints. Example: &ldquo;Implement real-time collaboration features&rdquo;</li>
<li><strong>Story level (sprint goals):</strong> Deliverable features that can be completed within a sprint. Example: &ldquo;Users can see live cursor positions of other editors&rdquo;</li>
<li><strong>Task level (daily progress):</strong> Specific implementation work that can be completed in 1-2 days. Example: &ldquo;Implement WebSocket connection handling for cursor events&rdquo;</li>
</ul>
<p>Create feedback loops at each level:</p>
<ul>
<li><strong>Daily standups</strong> surface task-level blockers and progress</li>
<li><strong>Sprint reviews</strong> evaluate story completion and quality</li>
<li><strong>Quarterly planning</strong> assesses epic success and organizational alignment</li>
</ul>
<p>Teams perform best when planning is collaborative, transparent, and regularly revisited. Use tools like story mapping sessions, planning poker, and retrospective-driven backlog refinement to ensure the whole team understands and contributes to prioritization decisions. Treat plans as living documents that adjust quickly when feedback indicates priorities should shift.</p>
<h3 id="3-implement-automation-that-provides-rapid-feedback">3. Implement Automation That Provides Rapid Feedback</h3>
<p>Automation is critical for high-performance teams because it accelerates feedback loops and eliminates sources of inconsistency and error. Automation creates systems that provide immediate, reliable information about code quality and system health.</p>
<ul>
<li><strong>Immediate feedback (seconds to minutes):</strong> Pre-commit hooks that run tests, IDE integrations, and linting tools that enforce consistent standards</li>
<li><strong>Short-term feedback (minutes to hours):</strong> Continuous integration pipelines, automated security scanning, performance regression testing, and automated deployment to staging environments</li>
<li><strong>Medium-term feedback (hours to days):</strong> Automated monitoring and alerting for production systems, code quality metrics tracking, and performance monitoring alerts</li>
</ul>
<p>The key principle is &ldquo;shift left&rdquo;: catch problems as early as possible in the development cycle when they&rsquo;re cheaper and easier to fix. Start by documenting manual processes that the team repeats regularly, then prioritize automation based on frequency of use and the consequences of human error. The automation itself becomes a team learning exercise and creates shared ownership of the development process.</p>
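<p>To make the &ldquo;immediate feedback&rdquo; tier concrete, here is a minimal sketch of a git pre-commit hook that blocks a commit when a check fails. Everything in it is illustrative: the throwaway repo exists only so the example runs standalone, and the TODO grep stands in for your team&rsquo;s real linter and test suite.</p>

```shell
# Throwaway demo: install a pre-commit hook and watch it block a bad commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email you@example.com
git config user.name "You"
echo "# demo" > README.md
git add README.md
git commit -q -m "initial commit"

# The hook: reject the commit if any staged change introduces a TODO.
# Replace the grep with your real lint/test entry point.
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
if git diff --cached | grep -q '^+.*TODO'; then
    echo "pre-commit: staged changes contain TODO, commit blocked" >&2
    exit 1
fi
EOF
chmod +x .git/hooks/pre-commit

echo "TODO: fix me" > app.py
git add app.py
git commit -q -m "wip" || echo "commit was blocked, as intended"
```

<p>In a real repository you would keep only the hook itself, or better, manage it with a tool like pre-commit or husky so the checks are versioned and shared across the team.</p>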
<h3 id="4-build-code-review-culture-that-accelerates-learning">4. Build Code Review Culture That Accelerates Learning</h3>
<p>Code review is one of the most powerful feedback mechanisms available to engineering teams, but only when implemented as a learning and collaboration tool. High-performance teams use code review to accelerate knowledge transfer, maintain quality standards, and continuously improve their collective skills.</p>
<ul>
<li><strong>Establish clear expectations:</strong> Code review is required for all changes, should focus on code quality (not personal preferences), and both author and reviewer are responsible for the final quality</li>
<li><strong>Optimize for speed and quality:</strong> Target 24-hour turnaround time for initial review feedback, use automated tools to catch style issues, and provide specific, actionable feedback with examples</li>
<li><strong>Make reviews educational:</strong> Encourage questions and explanations in review comments, share alternative approaches and best practices, and rotate reviewers to spread knowledge across the team</li>
<li><strong>Measure and improve:</strong> Track review turnaround time and iteration cycles, monitor review feedback patterns to identify training opportunities, and regularly discuss review process effectiveness in retrospectives</li>
</ul>
<p>Create a culture where developers look forward to code review because they know they&rsquo;ll learn something and improve the overall codebase quality. When done well, code review becomes one of your most effective tools for maintaining technical standards and building team expertise.</p>
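<p>If you want a first-cut turnaround number before wiring up PR-level analytics, plain git history can approximate it: for each merge commit on the mainline, measure the gap between the merged branch&rsquo;s last commit and the merge itself. A rough sketch, with a throwaway repo included so it runs standalone (only the final loop is the measurement you&rsquo;d run on a real repo):</p>

```shell
set -e
# Throwaway repo with one feature branch merged via --no-ff, so the
# measurement below has a merge commit to report on.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email you@example.com
git config user.name "You"
echo a > f.txt; git add f.txt; git commit -q -m "init"
git checkout -q -b feature
echo b >> f.txt; git commit -q -am "feature work"
git checkout -q -   # back to the default branch
git merge -q --no-ff -m "merge feature" feature

# The measurement: for each merge on the mainline, hours between the
# merged branch's last commit (second parent of the merge) and the merge.
git log --merges --first-parent --format='%ct %P' HEAD |
while read -r merge_time parent1 parent2; do
    tip_time=$(git show -s --format=%ct "$parent2")
    echo "merge landed $(( (merge_time - tip_time) / 3600 )) hour(s) after the last branch commit"
done
```

<p>Note the limits of this proxy: squash-and-merge workflows produce no merge commits at all, so it only works where merges are preserved, and it measures time-to-merge rather than time-to-first-review-comment.</p>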
<h4 id="team-code-review-standards">Team Code Review Standards</h4>
<p>Here&rsquo;s the code review checklist I use with engineering teams. Adapt it collaboratively with your team to ensure buy-in and relevance to your specific context:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-markdown" data-lang="markdown"><span style="display:flex;"><span># Team Code Review Standards
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="font-weight:bold">**Functionality &amp; Requirements**</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">- [ ]</span> Implementation matches acceptance criteria and specifications
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">- [ ]</span> Edge cases and error conditions are properly handled
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">- [ ]</span> Changes are complete and don&#39;t break existing functionality
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">- [ ]</span> Performance impact has been considered and tested
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="font-weight:bold">**Code Quality &amp; Maintainability**</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">- [ ]</span> Code is readable and well-structured
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">- [ ]</span> Variable and function names clearly express intent
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">- [ ]</span> Complex logic is documented with comments
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">- [ ]</span> Code follows team style guidelines and patterns
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">- [ ]</span> No duplicate code or overly complex functions
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="font-weight:bold">**Testing &amp; Reliability**</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">- [ ]</span> Appropriate tests are included and pass
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">- [ ]</span> Test coverage meets team standards
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">- [ ]</span> Manual testing has been performed where applicable
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">- [ ]</span> Changes don&#39;t introduce security vulnerabilities
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="font-weight:bold">**Team Collaboration**</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">- [ ]</span> Pull request description clearly explains the change
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">- [ ]</span> Related documentation has been updated
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">- [ ]</span> Breaking changes are clearly communicated
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">- [ ]</span> Knowledge sharing opportunities have been identified
</span></span></code></pre></div><p>Make this checklist a living document that evolves based on team retrospectives and lessons learned. Regularly review and update the standards based on what issues you&rsquo;re catching (or missing) in production.</p>
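<p>One low-friction way to keep the checklist in front of everyone is to commit an abbreviated version as a pull request template: GitHub pre-fills <code>.github/pull_request_template.md</code> into every new PR description, and GitLab has an equivalent under <code>.gitlab/merge_request_templates/</code>. A minimal sketch (the checklist items here are a trimmed example; use your team&rsquo;s own):</p>

```shell
# Create a pull request template so the review checklist appears in every
# new PR description automatically (GitHub convention; commit the file).
mkdir -p .github
cat > .github/pull_request_template.md <<'EOF'
## Summary

<!-- What does this change do, and why? -->

## Checklist

- [ ] Implementation matches acceptance criteria
- [ ] Edge cases and error conditions are handled
- [ ] Appropriate tests are included and pass
- [ ] Related documentation has been updated
- [ ] Breaking changes are clearly communicated
EOF
```

<p>Once the file is committed, every newly opened pull request starts from the checklist instead of a blank description.</p>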
<h3 id="5-set-up-regular-process-retrospectives">5. Set Up Regular Process Retrospectives</h3>
<p>Process retrospectives are where teams close the feedback loop by systematically improving how they work. High-performance teams treat retrospectives as their most important ceremony because that&rsquo;s where all other improvements originate. The most effective retrospectives happen at multiple cadences:</p>
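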
<ul>
<li><strong>Sprint retrospectives (every 1-2 weeks):</strong> Focus on immediate process improvements and team dynamics using formats like &ldquo;Start, Stop, Continue&rdquo;</li>
<li><strong>Quarterly team health reviews:</strong> Deeper dive into team effectiveness, skill development, and strategic alignment with quantitative analysis of delivery metrics</li>
<li><strong>Post-incident reviews:</strong> Blameless analysis of production issues focusing on system improvements rather than individual accountability</li>
<li><strong>Process optimization sessions:</strong> Dedicated time to review and improve specific workflows like deployment, testing, or code review processes</li>
</ul>
<h4 id="effective-retrospective-framework">Effective Retrospective Framework</h4>
<p>Here are the key questions I use to guide productive team retrospectives:</p>
<ul>
<li><strong>Team performance review:</strong> How did we perform against our objectives? What factors contributed to successes? What blockers slowed us down? How effectively did our processes support our goals?</li>
<li><strong>Process effectiveness analysis:</strong> Which practices are working well? What processes are creating friction or waste? Where are we spending time on work that doesn&rsquo;t create value? What automation would have the biggest impact?</li>
<li><strong>Team health and growth:</strong> How well are we collaborating and communicating? What skills or knowledge gaps are limiting our effectiveness? Are team members feeling challenged and supported in their growth?</li>
<li><strong>Forward-looking improvements:</strong> What are the top 2-3 experiments we want to try next period? How will we measure success of these changes? What obstacles do we anticipate and how can we prepare for them?</li>
</ul>
<p>Make retrospectives action-oriented. Every retrospective should end with specific commitments about what the team will try differently, who will own those changes, and how success will be measured.</p>
<h3 id="6-close-the-loop-act-on-feedback-systematically">6. Close the Loop: Act on Feedback Systematically</h3>
<p>The most critical step in building high-performance teams is ensuring that feedback actually drives change. Many teams collect feedback but fail to systematically implement improvements. This is where engineering leadership makes the biggest difference.</p>
<ul>
<li><strong>Make changes visible and trackable:</strong> Document all process experiments and improvements in a shared space, track metrics before and after implementing changes, and celebrate successful improvements publicly to reinforce the feedback culture</li>
<li><strong>Create accountability for implementation:</strong> Assign owners for each improvement initiative, set specific timelines and success criteria, and review progress on improvements in regular team meetings</li>
<li><strong>Build improvement into regular workflow:</strong> Allocate dedicated time for process improvement work, include improvement tasks in sprint planning, and make process improvement a regular topic in one-on-one conversations</li>
<li><strong>Scale successful practices:</strong> Share effective improvements with other teams in the organization, document successful patterns for future reference, and build successful practices into onboarding for new team members</li>
</ul>
<p>Create a self-reinforcing cycle where the team becomes increasingly effective at identifying problems, implementing solutions, and measuring results. Teams that master this cycle become engines of continuous improvement that consistently outperform their peers.</p>
<p>Building high-performance engineering teams through feedback loops requires patience, consistency, and commitment from leadership. The investment pays enormous dividends in team velocity, code quality, job satisfaction, and organizational impact.</p>
 ]]></content:encoded><enclosure url="https://victoria.dev/posts/building-high-performance-engineering-teams-through-feedback-loops/cover_feedback-pbjreview_hu8e37b70b3728865778c1f4fe4adf7fff_53513_640x0_resize_box_3.png" length="28792" type="image/png"/></item><item><title>How to Replace a String with sed in Current and Recursive Subdirectories</title><link>https://victoria.dev/posts/how-to-replace-a-string-with-sed-in-current-and-recursive-subdirectories/</link><pubDate>Sat, 06 May 2017 20:04:53 +0800</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/how-to-replace-a-string-with-sed-in-current-and-recursive-subdirectories/</guid><description>Master the sed command-line tool to find and replace text across multiple files. Learn practical regex patterns for efficient bulk code refactoring in Linux/Unix.</description><content:encoded>
&lt;img src="https://victoria.dev/posts/how-to-replace-a-string-with-sed-in-current-and-recursive-subdirectories/cover_sed_hu0a982320f6b8be4e2c17737e58dbed29_189790_640x0_resize_box_3.png" width="640" height="320"/><![CDATA[ <p>I’ve probably run some variation of “find and replace across multiple files” thousands of times in my career. It’s one of those operations that seems straightforward until you’re staring at a codebase with 500,000 lines spread across 2,000 files, and you need to rename a function that’s used everywhere. Get it wrong, and you’re looking at hours of manual cleanup—or worse, subtle bugs that only surface in production.</p>
<p>Here&rsquo;s the approach I use, why some methods work better than others, and some tips that can save you from that sinking feeling when you realize you just broke prod.</p>
<h2 id="current-directory-only">Current Directory Only</h2>
<p>You can use sed by itself to make changes to files in the current directory, ignoring subdirectories.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-sh" data-lang="sh"><span style="display:flex;"><span>.
</span></span><span style="display:flex;"><span>├── index.html        <span style="color:#75715e"># Change this file</span>
</span></span><span style="display:flex;"><span>└── blog
</span></span><span style="display:flex;"><span>    ├── list.html     <span style="color:#75715e"># Don&#39;t change</span>
</span></span><span style="display:flex;"><span>    └── single.html   <span style="color:#75715e"># these files</span>
</span></span></code></pre></div><p>To replace all occurrences of “foo” with “bar” in files within the current directory:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-sh" data-lang="sh"><span style="display:flex;"><span>sed -i -- <span style="color:#e6db74">&#39;s/foo/bar/g&#39;</span> *
</span></span></code></pre></div><p>Here’s what each component of the command does:</p>
<ul>
<li><code>-i</code> will change the original, and stands for “in-place.”</li>
<li><code>s</code> is for substitute, so we can find and replace.</li>
<li><code>foo</code> is the string we’ll be taking away,</li>
<li><code>bar</code> is the string we’ll use instead today.</li>
<li><code>g</code> as in “global” means “all occurrences, please.”</li>
<li><code>*</code> denotes all file types. (No more rhymes. What a tease.)</li>
</ul>
<p>You can limit the operation to one file type, such as Python files, by using a matching pattern:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-sh" data-lang="sh"><span style="display:flex;"><span>sed -i -- <span style="color:#e6db74">&#39;s/foo/bar/g&#39;</span> *.py
</span></span></code></pre></div><h2 id="the-performant-recursive-pattern">The Performant Recursive Pattern</h2>
<p>Here’s a performant command for making changes in the current directory and all subdirectories:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>find . -type f -name <span style="color:#e6db74">&#34;*.py&#34;</span> -exec sed -i <span style="color:#e6db74">&#39;s/old_function_name/new_function_name/g&#39;</span> <span style="color:#f92672">{}</span> +
</span></span></code></pre></div><p>Let me break this down because each piece matters more than you might think:</p>
<ul>
<li><code>find .</code> starts from the current directory</li>
<li><code>-type f</code> only matches files (not directories)</li>
<li><code>-name &quot;*.py&quot;</code> filters to Python files (adjust the pattern for your needs)</li>
<li><code>-exec sed -i 's/old/new/g' {} +</code> runs sed on batches of files</li>
</ul>
<p>That <code>+</code> at the end instead of <code>\;</code> is crucial for performance. It batches multiple files into each sed call instead of spawning a new process for every single file. When you’re dealing with thousands of files, this can be the difference between a 5-second operation and a 5-minute one.</p>
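<p>You can verify the batching behavior yourself in a scratch directory by counting how many times the command actually runs:</p>

```shell
# Scratch demo of -exec batching: count command invocations for 3 files.
dir=$(mktemp -d)
touch "$dir"/a.py "$dir"/b.py "$dir"/c.py

# With \; the command runs once per file, so this prints 3
find "$dir" -type f -name "*.py" -exec echo invoked \; | wc -l

# With + all files are batched into a single invocation, so this prints 1
find "$dir" -type f -name "*.py" -exec echo invoked {} + | wc -l
```

<p>With enough files, <code>+</code> will still split the work into several invocations to respect the system&rsquo;s argument-length limit, but each batch covers thousands of files rather than one.</p>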
<h2 id="the-safer-version-i-actually-use">The Safer Version I Actually Use</h2>
<p>But in the real world, it might not be best to run that command as-is. Here&rsquo;s a more accidentally-had-decaf-proof version:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e"># First, see what we&#39;re dealing with</span>
</span></span><span style="display:flex;"><span>find . -type f -name <span style="color:#e6db74">&#34;*.py&#34;</span> -exec grep -l <span style="color:#e6db74">&#34;old_function_name&#34;</span> <span style="color:#f92672">{}</span> +
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Test on a single file first</span>
</span></span><span style="display:flex;"><span>find . -type f -name <span style="color:#e6db74">&#34;*.py&#34;</span> -exec grep -l <span style="color:#e6db74">&#34;old_function_name&#34;</span> <span style="color:#f92672">{}</span> + | head -1 | xargs sed -i.bak <span style="color:#e6db74">&#39;s/old_function_name/new_function_name/g&#39;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># If that looks good, run on everything</span>
</span></span><span style="display:flex;"><span>find . -type f -name <span style="color:#e6db74">&#34;*.py&#34;</span> -exec sed -i.bak <span style="color:#e6db74">&#39;s/old_function_name/new_function_name/g&#39;</span> <span style="color:#f92672">{}</span> +
</span></span></code></pre></div><p>That <code>.bak</code> extension creates backup files automatically. Yes, you should be using version control, but I’ve seen too many scenarios where someone needed to quickly revert a change and of course they hadn&rsquo;t started with a clean working tree.</p>
<p>The backup files are easy to clean up later:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>find . -name <span style="color:#e6db74">&#34;*.bak&#34;</span> -delete
</span></span></code></pre></div><h2 id="when-gnu-sed-vs-bsd-sed-actually-matters">When GNU sed vs BSD sed Actually Matters</h2>
<p>Here’s something you run into when you switch from Linux to macOS: sed behaves differently. BSD sed (default on macOS) requires an argument to <code>-i</code>, even if it’s empty:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e"># Linux (GNU sed)</span>
</span></span><span style="display:flex;"><span>sed -i <span style="color:#e6db74">&#39;s/old/new/g&#39;</span> file.txt
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># macOS (BSD sed) - this breaks</span>
</span></span><span style="display:flex;"><span>sed -i <span style="color:#e6db74">&#39;s/old/new/g&#39;</span> file.txt
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># macOS (BSD sed) - this works</span>
</span></span><span style="display:flex;"><span>sed -i <span style="color:#e6db74">&#39;&#39;</span> <span style="color:#e6db74">&#39;s/old/new/g&#39;</span> file.txt
</span></span><span style="display:flex;"><span><span style="color:#75715e"># or with backup</span>
</span></span><span style="display:flex;"><span>sed -i <span style="color:#e6db74">&#39;.bak&#39;</span> <span style="color:#e6db74">&#39;s/old/new/g&#39;</span> file.txt
</span></span></code></pre></div><p>You can also write portable versions:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e"># Portable approach</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">if</span> sed --version 2&gt;/dev/null | grep -q GNU; <span style="color:#66d9ef">then</span>
</span></span><span style="display:flex;"><span>    find . -type f -name <span style="color:#e6db74">&#34;*.py&#34;</span> -exec sed -i <span style="color:#e6db74">&#39;s/old/new/g&#39;</span> <span style="color:#f92672">{}</span> +
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">else</span>
</span></span><span style="display:flex;"><span>    find . -type f -name <span style="color:#e6db74">&#34;*.py&#34;</span> -exec sed -i <span style="color:#e6db74">&#39;&#39;</span> <span style="color:#e6db74">&#39;s/old/new/g&#39;</span> <span style="color:#f92672">{}</span> +
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">fi</span>
</span></span></code></pre></div><p>Or use the backup approach everywhere since it works on both:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>find . -type f -name <span style="color:#e6db74">&#34;*.py&#34;</span> -exec sed -i.bak <span style="color:#e6db74">&#39;s/old/new/g&#39;</span> <span style="color:#f92672">{}</span> +
</span></span></code></pre></div><h2 id="handling-special-characters-without-losing-your-mind">Handling Special Characters Without Losing Your Mind</h2>
<p>When your search string contains slashes, quotes, or regex metacharacters, things get interesting.</p>
<p>Instead of fighting with escaping, change the delimiter:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e"># Instead of this nightmare</span>
</span></span><span style="display:flex;"><span>sed -i <span style="color:#e6db74">&#39;s/https:\/\/old\.domain\.com\/api/https:\/\/new\.domain\.com\/api/g&#39;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Use this</span>
</span></span><span style="display:flex;"><span>sed -i <span style="color:#e6db74">&#39;s|https://old.domain.com/api|https://new.domain.com/api|g&#39;</span>
</span></span></code></pre></div><p>You can use almost any character as the delimiter. I usually go with <code>|</code> for URLs and <code>#</code> for file paths or when I’m dealing with email addresses (it&rsquo;s easier to differentiate from a lowercase L).</p>
<p>For really complex patterns, sometimes it’s easier to put the sed script in a file:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e"># In replace.sed</span>
</span></span><span style="display:flex;"><span>s|https://old.domain.com/api|https://new.domain.com/api|g
</span></span><span style="display:flex;"><span>s/DEBUG <span style="color:#f92672">=</span> True/DEBUG <span style="color:#f92672">=</span> False/g
</span></span><span style="display:flex;"><span>s/old_secret_key/new_secret_key/g
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Use it</span>
</span></span><span style="display:flex;"><span>find . -type f -name <span style="color:#e6db74">&#34;*.py&#34;</span> -exec sed -i.bak -f replace.sed <span style="color:#f92672">{}</span> +
</span></span></code></pre></div><p>This approach is also great for complex replacements that you’ll need to run multiple times or document for your team.</p>
<h2 id="performance-considerations-that-actually-matter">Performance Considerations That Actually Matter</h2>
<p>When you’re dealing with large codebases, performance starts to matter. Seemingly simple find-and-replace operations could take 20+ minutes on large repositories when done inefficiently.</p>
<p>The biggest performance killer is usually file selection. Don’t do this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e"># Slow—processes every file then filters</span>
</span></span><span style="display:flex;"><span>find . -type f -exec grep -l <span style="color:#e6db74">&#34;old_string&#34;</span> <span style="color:#f92672">{}</span> + | xargs sed -i <span style="color:#e6db74">&#39;s/old/new/g&#39;</span>
</span></span></code></pre></div><p>Do this instead:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e"># Fast—filters files first</span>
</span></span><span style="display:flex;"><span>find . -type f -name <span style="color:#e6db74">&#34;*.py&#34;</span> -exec sed -i <span style="color:#e6db74">&#39;s/old/new/g&#39;</span> <span style="color:#f92672">{}</span> +
</span></span></code></pre></div><p>If you need to be more selective about which files to process, use multiple find conditions:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e"># Only process Python files that aren&#39;t in virtual environments or build directories</span>
</span></span><span style="display:flex;"><span>find . -type f -name <span style="color:#e6db74">&#34;*.py&#34;</span> ! -path <span style="color:#e6db74">&#34;./venv/*&#34;</span> ! -path <span style="color:#e6db74">&#34;./build/*&#34;</span> ! -path <span style="color:#e6db74">&#34;./.git/*&#34;</span> -exec sed -i.bak <span style="color:#e6db74">&#39;s/old/new/g&#39;</span> <span style="color:#f92672">{}</span> +
</span></span></code></pre></div><h2 id="when-sed-isnt-the-right-tool">When sed Isn’t the Right Tool</h2>
<p>It&rsquo;s tempting to force sed to do things it’s not great at. Here’s when I reach for other tools:</p>
<p><strong>For complex transformations</strong>: Use a proper scripting language. A 50-line sed script could be 10 lines of Python and infinitely more readable.</p>
<p><strong>For structured data</strong>: If you’re modifying JSON, YAML, or XML, use tools that understand the format. sed doesn’t know about string escaping or nested structures.</p>
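<p>For example, to change a URL inside a JSON config I&rsquo;d reach for <code>jq</code> instead of sed. A sketch that assumes <code>jq</code> is installed, with a key layout invented for illustration:</p>

```shell
# Sample JSON config (illustrative structure)
printf '{"api": {"url": "https://old.domain.com/api"}}\n' > config.json

# jq understands the structure, so quoting and escaping are handled for you
jq '.api.url = "https://new.domain.com/api"' config.json > config.json.tmp \
  && mv config.json.tmp config.json
```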
<p><strong>For very large files</strong>: sed itself streams line by line, but <code>-i</code> rewrites the whole file to a temporary copy before swapping it into place. For multi-gigabyte files, consider streaming the output to a new file instead of editing in place, or reach for awk when the transformation needs more logic than a substitution.</p>
<p><strong>For interactive replacements</strong>: Use your editor’s project-wide search and replace, or tools like <code>rg</code> (ripgrep) with interactive replacement.</p>
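<p>A middle ground I use a lot: let grep pick the files and sed do the editing. Shown here with plain GNU grep so it works without extra installs; <code>rg -l</code> is a faster drop-in (the demo files are made up):</p>

```shell
# Demo tree (illustrative)
mkdir -p demo
printf 'old_string here\n' > demo/a.py
printf 'no match\n' > demo/b.py

# grep -rl lists only the files that actually contain the pattern,
# so sed never rewrites (or re-dates) files with no matches
grep -rl 'old_string' demo --include='*.py' | xargs sed -i 's/old_string/new_string/g'
```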
<h2 id="the-nuclear-option-parallel-processing">The Nuclear Option: Parallel Processing</h2>
<p>If you&rsquo;re dealing with truly massive codebases (millions of lines), you might need to get aggressive about performance:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e"># Find all target files first</span>
</span></span><span style="display:flex;"><span>find . -type f -name <span style="color:#e6db74">&#34;*.py&#34;</span> ! -path <span style="color:#e6db74">&#34;./venv/*&#34;</span> &gt; /tmp/files_to_process
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Process them in parallel</span>
</span></span><span style="display:flex;"><span>cat /tmp/files_to_process | xargs -n <span style="color:#ae81ff">50</span> -P <span style="color:#ae81ff">8</span> sed -i.bak <span style="color:#e6db74">&#39;s/old/new/g&#39;</span>
</span></span></code></pre></div><p>That <code>-P 8</code> runs up to 8 sed processes in parallel, and <code>-n 50</code> processes 50 files per batch. Adjust based on your CPU cores and I/O capacity.</p>
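<p>Rather than hardcoding the worker count, you can size the pool from the machine. A sketch (demo files are made up; <code>nproc</code> is GNU coreutils, with a fallback in case it&rsquo;s missing):</p>

```shell
# Demo files and file list (illustrative)
mkdir -p pdemo
for i in 1 2 3; do printf 'old text\n' > "pdemo/f$i.txt"; done
find pdemo -type f -name '*.txt' > /tmp/files_to_process

# One worker per CPU core, falling back to 4 if nproc isn't available
workers=$(nproc 2>/dev/null || echo 4)
xargs -n 50 -P "$workers" sed -i.bak 's/old/new/g' < /tmp/files_to_process
```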
<h2 id="testing-before-you-commit">Testing Before You Commit</h2>
<p>Here’s a thorough testing workflow for large replacements:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e"># 1. Count lines containing old_string before</span>
</span></span><span style="display:flex;"><span>find . -type f -name <span style="color:#e6db74">&#34;*.py&#34;</span> -exec grep -c <span style="color:#e6db74">&#34;old_string&#34;</span> <span style="color:#f92672">{}</span> + | awk -F: <span style="color:#e6db74">&#39;{sum+=$2} END {print sum}&#39;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># 2. Run replacement with backups</span>
</span></span><span style="display:flex;"><span>find . -type f -name <span style="color:#e6db74">&#34;*.py&#34;</span> -exec sed -i.bak <span style="color:#e6db74">&#39;s/old_string/new_string/g&#39;</span> <span style="color:#f92672">{}</span> +
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># 3. Count lines still containing old_string (should be 0)</span>
</span></span><span style="display:flex;"><span>find . -type f -name <span style="color:#e6db74">&#34;*.py&#34;</span> -exec grep -c <span style="color:#e6db74">&#34;old_string&#34;</span> <span style="color:#f92672">{}</span> + | awk -F: <span style="color:#e6db74">&#39;{sum+=$2} END {print sum}&#39;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># 4. Spot check a few files</span>
</span></span><span style="display:flex;"><span>find . -name <span style="color:#e6db74">&#34;*.bak&#34;</span> | head -5 | <span style="color:#66d9ef">while</span> read -r backup; <span style="color:#66d9ef">do</span>
</span></span><span style="display:flex;"><span>    original<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;</span><span style="color:#e6db74">${</span>backup%.bak<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span>
</span></span><span style="display:flex;"><span>    echo <span style="color:#e6db74">&#34;=== </span>$original<span style="color:#e6db74"> ===&#34;</span>
</span></span><span style="display:flex;"><span>    diff <span style="color:#e6db74">&#34;</span>$backup<span style="color:#e6db74">&#34;</span> <span style="color:#e6db74">&#34;</span>$original<span style="color:#e6db74">&#34;</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">done</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># 5. Run tests</span>
</span></span><span style="display:flex;"><span>make test  <span style="color:#75715e"># or whatever your test command is</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># 6. If everything looks good, clean up backups</span>
</span></span><span style="display:flex;"><span>find . -name <span style="color:#e6db74">&#34;*.bak&#34;</span> -delete
</span></span></code></pre></div><h2 id="using-sed-in-real-world-scenarios">Using sed in Real-World Scenarios</h2>
<p><strong>API endpoint migration</strong>: Moving from v1 to v2 API endpoints meant updating hundreds of URL references across multiple repositories. The key was being selective about file types and using exact matches to avoid accidentally changing documentation or comments that mentioned the old API.</p>
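<p>One trick that helped with the &ldquo;exact matches&rdquo; part: GNU sed supports <code>\b</code> word boundaries, which keep near-miss strings intact. A sketch with invented endpoints:</p>

```shell
# Demo source file (illustrative endpoints)
printf 'call("/api/v1/users")\n# see also /api/v10/legacy\n' > client.py

# \b stops /api/v1 from also matching inside /api/v10
sed -i 's|/api/v1\b|/api/v2|g' client.py
```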
<p><strong>Database migrations</strong>: After a database refactor for a Django application, sed came in handy for making changes to complex Django migration files. I used different sed patterns for different contexts—from Python to raw SQL—because the replacement patterns were slightly different in each case.</p>
<p><strong>Configuration key updates</strong>: When our configuration format changed, I needed to update key names across config files, code references, and documentation. This one required multiple passes with different patterns because the same logical key appeared in different syntactic contexts.</p>
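<p>As a sketch of what those multiple passes look like (the key names here are invented for illustration):</p>

```shell
# The same logical key appears in two syntactic contexts (illustrative)
printf 'timeout_secs: 30\n' > app.yml
printf 'TIMEOUT_SECS = 30\n' > settings.py

# One pass per context, each anchored to that context's syntax
sed -i 's/^timeout_secs:/request_timeout_secs:/' app.yml
sed -i 's/^TIMEOUT_SECS/REQUEST_TIMEOUT_SECS/' settings.py
```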
<h2 id="the-debugging-workflow-that-saves-time">The Debugging Workflow That Saves Time</h2>
<p>When a sed operation goes wrong (and it will), here’s how I debug:</p>
<ol>
<li>
<p><strong>Check what files were actually modified</strong>:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>find . -name <span style="color:#e6db74">&#34;*.bak&#34;</span> -exec sh -c <span style="color:#e6db74">&#39;diff -q &#34;$1&#34; &#34;${1%.bak}&#34;&#39;</span> _ <span style="color:#f92672">{}</span> <span style="color:#ae81ff">\;</span> | head -10
</span></span></code></pre></div></li>
<li>
<p><strong>Look for unintended matches</strong>:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>find . -name <span style="color:#e6db74">&#34;*.bak&#34;</span> -exec sh -c <span style="color:#e6db74">&#39;diff &#34;$1&#34; &#34;${1%.bak}&#34;&#39;</span> _ <span style="color:#f92672">{}</span> <span style="color:#ae81ff">\;</span> | grep <span style="color:#e6db74">&#34;^&lt;&#34;</span> | sort | uniq -c | sort -nr
</span></span></code></pre></div></li>
<li>
<p><strong>Restore and try a more specific pattern</strong>:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>find . -name <span style="color:#e6db74">&#34;*.bak&#34;</span> -exec sh -c <span style="color:#e6db74">&#39;mv &#34;$1&#34; &#34;${1%.bak}&#34;&#39;</span> _ <span style="color:#f92672">{}</span> <span style="color:#ae81ff">\;</span>
</span></span></code></pre></div></li>
</ol>
<p>The pattern of creating backups, testing the results, and having a quick rollback strategy will save you countless hours. It’s especially important when you’re working on shared codebases where a mistake affects your entire team.</p>
<p>While sed operations might seem like they&rsquo;re just for simple text processing, they can help with critical steps in deployments, migrations, and refactoring efforts that affect real systems and real users. Taking the time to do them safely and efficiently pays dividends when you’re not scrambling to fix broken builds or track down subtle bugs that only show up in production.</p>
<p>If you found some value in this post, there&rsquo;s more! I write about high-output development processes and building maintainable systems in the AI age. You can get my posts in your inbox by subscribing below.</p>
<a href="https://medium.com/@victoriadotdev/subscribe" target="_blank" rel="noopener noreferrer" class="subscribe-button">
    <span class="subscribe-icon">📧</span>
    <span class="subscribe-text">Subscribe</span>
</a>

 ]]></content:encoded><enclosure url="https://victoria.dev/posts/how-to-replace-a-string-with-sed-in-current-and-recursive-subdirectories/cover_sed_hu0a982320f6b8be4e2c17737e58dbed29_189790_640x0_resize_box_3.png" length="87287" type="image/jpg"/></item><item><title>Victoria Drake's Blog</title><link>https://victoria.dev/posts/</link><pubDate>Sun, 01 Jan 2017 16:39:29 +0700</pubDate><author>hello@victoria.dev (Victoria Drake)</author><guid>https://victoria.dev/posts/</guid><description/><content:encoded></content:encoded></item></channel></rss>