<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Dmitry Khorev on Medium]]></title>
        <description><![CDATA[Stories by Dmitry Khorev on Medium]]></description>
        <link>https://medium.com/@dkhorev?source=rss-eb0dbd32d4c4------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*hY8gugJiaQM3ZHmDlKPfbw.jpeg</url>
            <title>Stories by Dmitry Khorev on Medium</title>
            <link>https://medium.com/@dkhorev?source=rss-eb0dbd32d4c4------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Wed, 08 Apr 2026 12:35:02 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@dkhorev/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[AI Coding Assistants During Interviews: The Ethics Nobody Wants to Talk About]]></title>
            <link>https://levelup.gitconnected.com/ai-coding-assistants-during-interviews-the-ethics-nobody-wants-to-talk-about-ce68b7220073?source=rss-eb0dbd32d4c4------2</link>
            <guid isPermaLink="false">https://medium.com/p/ce68b7220073</guid>
            <category><![CDATA[apps]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[coding-interviews]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[technology]]></category>
            <dc:creator><![CDATA[Dmitry Khorev]]></dc:creator>
            <pubDate>Wed, 18 Mar 2026 15:12:29 GMT</pubDate>
            <atom:updated>2026-03-18T15:12:29.397Z</atom:updated>
            <content:encoded><![CDATA[<p><em>I built one of these tools. Here’s the honest debate I’ve been having with myself.</em></p><p><strong>TL;DR</strong> — Using AI during coding interviews is the most polarizing topic in tech hiring right now. I’ve heard every argument on both sides — and I’ve had to wrestle with them more than most, because I actually built a tool that does exactly this. This article isn’t a defense or an attack. It’s the real debate, with the strongest arguments from each side, and where I’ve landed after sitting with it for a long time.</p><p>A few months ago, I published an article about how I made a desktop app invisible to screen sharing. The technical content did well, but I knew what the real conversation would be about. Not the rendering tricks or the Electron APIs — the ethics. The inevitable questions: is this helping people lie their way into jobs? Will it get unqualified devs hired and ruin teams?</p><p>Fair questions. All of them.</p><p>I teased in an earlier article that the ethics conversation deserved its own space. So here it is. Not a hot take. Not a defensive rant from someone protecting their project. A genuine attempt to lay out both sides of this debate with the seriousness they deserve — and then tell you exactly where I stand.</p><p>Because I think the worst thing anyone can do with a topic like this is be wishy-washy about it.</p><h3>The Elephant in the Room: I Have Skin in This Game</h3><p>Before anything else, let me be transparent about something.</p><p>I built <a href="https://getezzi.com/">Ezzi</a>. It’s an open-source desktop application that provides AI-powered assistance during technical interviews through an invisible overlay. I designed it. I wrote a lot of the code. I put it out there under an MIT license for anyone to use.</p><p>So when I write about the ethics of AI coding assistants in interviews, I’m not a neutral observer. 
I’m someone who actively contributed to the thing people are debating about.</p><p>I say this upfront because I think intellectual honesty matters — especially on a topic like this. You should know my bias going in. And I should be willing to name it.</p><p>That said, having skin in the game also means I’ve spent more time thinking about these implications than most commenters dropping hot takes on Twitter. I’ve been turning this question over in my head since I started building the thing — most often at 2 AM, staring at my commit history.</p><figure><img alt="The internal debate that comes with building something controversial." src="https://cdn-images-1.medium.com/max/1024/1*zrwlhQ0LKVZMQQ5SOzquUg.png" /><figcaption>The internal debate that comes with building something controversial.</figcaption></figure><p>Let me walk you through that debate.</p><h3>The Case Against: Why Using AI in Interviews Is Cheating</h3><p>I’m going to start with the strongest arguments against using tools like <a href="https://github.com/GetEzzi/ezzi-app">Ezzi</a> in interviews. And I mean the genuinely strong ones — not the strawmen that are easy to dismiss.</p><h4>It’s Fundamentally Deceptive</h4><p>This is the hardest argument to counter, and I want to give it the respect it deserves.</p><p>When you sit down for a coding interview, there’s an implicit agreement. The interviewer believes they’re evaluating <em>your</em> abilities — your problem-solving, your coding skills, and your ability to think under pressure. When you use an AI assistant they can’t see, you’re breaking that agreement.</p><p>It doesn’t matter if the tool is just “helping you think.” It doesn’t matter if you “already knew the answer.” The interviewer didn’t consent to evaluating you plus an AI. They consented to evaluating you.</p><p>This is a real argument. 
Deception is deception, regardless of whether you think the system being deceived is fair.</p><h4>It Undermines Meritocracy</h4><p>Tech has always prided itself — sometimes naively — on being a meritocracy. The idea that anyone with the skills can get the job, regardless of their background, pedigree, or connections.</p><p>If AI interview tools become widespread, you introduce a new variable: who has access to the best tools, who knows they exist, and who’s comfortable using them. That’s not meritocracy. That’s a different kind of advantage that has nothing to do with engineering ability.</p><p>The dev who grinds LeetCode for three months and solves the problem on their own gets the same result as the dev who opens an overlay and copies the AI’s approach. Except one of them actually demonstrated the skill.</p><h4>It Hurts Candidates Who Don’t Use Tools</h4><p>This one keeps me up at night more than the others.</p><p>If AI-assisted interviewing becomes normalized — even quietly — you create a prisoners’ dilemma. Candidates who refuse to use these tools on principle are now competing against candidates who do. The “honest” candidates are actively disadvantaged.</p><p>That’s not a hypothetical. That’s game theory. And the equilibrium it reaches is one where everyone feels pressured to use the tools just to keep up, regardless of their personal ethics.</p><h4>It Could Lead to Genuinely Unqualified Hires</h4><p>Let’s not dance around this one. If someone who can’t actually code uses AI to pass a technical interview and gets hired, that’s bad for everyone involved — the team that carries the weight, the company that made a costly mis-hire, and the person themselves, because they’ll be exposed eventually.</p><p>The argument isn’t that this happens every time. The argument is that it <em>can</em> happen, and when it does, the consequences are real. Teams carry dead weight. Projects suffer. 
And the person who got hired under false pretenses lives in constant anxiety.</p><h4>The Slippery Slope Is Real</h4><p>Today it’s coding interviews. Tomorrow it’s design reviews. Next week it’s architecture discussions in your actual job. If we normalize hiding AI assistance, where does it stop?</p><p>There’s a legitimate concern that accepting AI cheating in interviews sends a cultural signal: it’s okay to misrepresent your abilities as long as you get the outcome you want.</p><figure><img alt="The core tension: where does preparation end and deception begin?" src="https://cdn-images-1.medium.com/max/1024/1*0Xo0PiOHBQSeq8lSHl8OUw.png" /><figcaption>The core tension: where does preparation end and deception begin?</figcaption></figure><p>I’ve laid these out as strongly as I can because I think they deserve it. If you read those arguments and thought “yeah, that’s pretty convincing” — good. They should be. They represent real concerns from real people who care about the integrity of the hiring process.</p><p>Now let me show you the other side.</p><h3>The Case For: Why the System Is Already Broken</h3><h4>The Interview Doesn’t Measure What It Claims To</h4><p>Let’s start with the foundational crack in the whole argument: coding interviews, as they exist today, are terrible at predicting job performance.</p><p>This isn’t my opinion. Google’s own internal research found that their interview scores had almost no correlation with on-the-job performance. They published this. They admitted it publicly.</p><p>The average coding interview asks you to solve algorithmic puzzles under time pressure, on a shared screen, while a stranger watches you type. When was the last time your actual job looked like that?</p><p>At work, you Google things and read documentation. You ask colleagues, use Copilot or ChatGPT, and take breaks when you need them. 
You think about a problem overnight and come back with a solution in the morning.</p><p>The interview strips away every single tool and process that makes you effective at your actual job, then judges you on the result. It’s like evaluating a carpenter by making them build a cabinet with no power tools and one hand tied behind their back, then concluding they “lack craftsmanship.”</p><h4>Companies Already Use AI on Their Side</h4><p>Here’s the double standard nobody wants to acknowledge.</p><p>Companies use AI to screen your resume before a human ever sees it and to generate interview questions. Some even analyze your facial expressions during video calls, and their code graders were built with AI assistance.</p><p>But you — the candidate — are expected to show up with nothing but your brain and your typing speed.</p><p>The power asymmetry is staggering. The company brings every tool in its arsenal to evaluate you, while insisting you face the evaluation naked. And then we call it “fair”.</p><figure><img alt="The tools companies use to evaluate you vs. the tools you’re allowed to use. Notice the imbalance." src="https://cdn-images-1.medium.com/max/1024/1*ocFlcr6ZIlOnoG42umqA_g.png" /><figcaption>The tools companies use to evaluate you vs. the tools you’re allowed to use. Notice the imbalance.</figcaption></figure><h4>Memorizing LeetCode Is Already “Cheating” — We Just Don’t Call It That</h4><p>Let me ask you something. A candidate spends three months grinding LeetCode. They encounter a problem in their interview that they’ve already solved before — maybe multiple times. They reproduce the solution from memory.</p><p>Is that cheating?</p><p>Most people would say no. That’s preparation. That’s hard work. That’s the game.</p><p>But what exactly did that demonstrate? Not problem-solving ability — they already knew the answer. Not the ability to think under pressure — they were just recalling a pattern. 
Not engineering skill — LeetCode problems have almost nothing to do with building real software.</p><p>What it demonstrated is that they had the time, resources, and awareness to grind a specific platform for months. That’s a socioeconomic filter, not a skills filter.</p><p>A candidate who works a full-time job and has kids at home doesn’t have three months to grind LeetCode. They have evenings, maybe weekends, and whatever energy is left after everything else.</p><p>We’ve already accepted a version of “cheating” — we just drew the line in a place that advantages certain people and pretends that’s merit.</p><h4>Preparation Has Always Been a Spectrum</h4><p>Think about this spectrum for a moment:</p><ul><li>Reading a book about algorithms before an interview. Nobody objects to this.</li><li>Practicing on LeetCode with hundreds of problems, including ones known to appear at specific companies. Still considered fine.</li><li>Doing mock interviews with a friend who works at the target company and can hint at what topics will come up. Happens constantly.</li><li>Paying for a coaching service that teaches you company-specific patterns. This costs $200–500/hour, and nobody bats an eye.</li><li>Using ChatGPT to explain a concept you don’t understand while prepping the night before. Broadly accepted in 2026.</li><li>Using an AI tool during the actual interview to assist with problem-solving. Suddenly this is where everyone draws the line?</li></ul><p>Where’s the line? 
Everyone draws it somewhere different, and everyone thinks their line is the obvious one.</p><h4>The Job Doesn’t Require What the Interview Tests</h4><p>This is the argument I keep coming back to.</p><p>If I hire a dev to build features, fix bugs, and maintain a codebase — and in that actual job, they’ll have access to AI tools, documentation, their team, and the internet — then what am I actually learning by testing them without any of those things?</p><p>I’m testing their ability to perform in an artificial environment that doesn’t exist anywhere in professional software development.</p><p>You know what would be a better interview? Give someone a real task. Let them use whatever tools they want. Judge the output. That’s closer to what they’ll actually do on the job.</p><p>But companies don’t do this because it’s harder to standardize, harder to scale, and harder to evaluate. So they stick with the format they have, even though they know it doesn’t work, and then act shocked when people optimize around it.</p><figure><img alt="Left: how you actually work. Right: how they test you. Spot the disconnect." src="https://cdn-images-1.medium.com/max/1024/1*v_MN5UzusLcvG1--_cwL5A.png" /><figcaption>Left: how you actually work. Right: how they test you. Spot the disconnect.</figcaption></figure><h3>The Arguments Nobody Makes (But Should)</h3><p>Before I tell you where I land, there are a few points that usually get lost in this debate.</p><h4>The “Unqualified Hire” Problem Is Overstated</h4><p>People love the scenario where someone who can’t code at all uses AI to get hired and then gets exposed. And sure, that can happen.</p><p>But here’s the thing — it also happens without AI. People lie on resumes, get referrals from friends and coast through interviews, or freeze and bomb them despite being excellent engineers. The signal-to-noise ratio of interviews has always been terrible.</p><p>The question isn’t whether AI tools introduce the possibility of bad hires. 
It’s whether they make it meaningfully worse than the already-broken system.</p><p>And if a company’s entire quality gate is a single technical interview that can be defeated by a tool, the problem isn’t the tool. It’s the gate.</p><h4>Companies Could Fix This Tomorrow</h4><p>If companies are genuinely worried about AI-assisted interviews, they have options.</p><p>They could do take-home projects with follow-up discussions. They could do pair programming sessions focused on collaboration, not performance. They could use trial periods or contract-to-hire arrangements. They could ask candidates to explain existing code instead of writing new code. They could evaluate portfolios, open-source contributions, and real work.</p><p>Most companies don’t do these things because they’re expensive and time-consuming. They’d rather keep the cheap, broken format and blame candidates for optimizing around it.</p><p>That tells you everything you need to know about how seriously companies take interview integrity.</p><h4>The AI Genie Is Out of the Bottle</h4><p>This is the pragmatic argument, and it’s powerful even if it’s uncomfortable.</p><p>AI coding assistants exist. They’re getting better. They’re not going away. Within five years, every developer will use AI tools as naturally as they use an IDE today.</p><p>Building a hiring process that depends on candidates not having access to AI is like building a hiring process in 2010 that depends on candidates not having access to Google. It’s a losing bet.</p><p>The companies that adapt their interviews to account for AI will hire better. The companies that try to police it will waste energy on an unwinnable arms race while their interview process becomes even less predictive than it already is.</p><h3>Where I Land</h3><p>Okay. I’ve presented both sides as honestly as I can. 
Here’s my actual position.</p><p>I think the current coding interview system is broken beyond repair, and AI tools are a symptom of that brokenness, not the cause.</p><p>I think using AI during an interview, in its current form, <em>is</em> deceptive. I won’t pretend otherwise. The interviewer expects to evaluate you without assistance, and using a hidden tool violates that expectation. That’s a real ethical issue, and I don’t dismiss it.</p><p>But I also think the system those interviews exist in is itself deeply unfair, and has been for years. It advantages people with time, money, and access. It tests skills that don’t predict job performance. And it creates artificial conditions that don’t exist anywhere in actual software engineering.</p><p>So here’s my specific stance: <strong>The moral weight of using AI in an interview is proportional to the fairness of the interview itself.</strong></p><p>If a company gives you a thoughtful, realistic assessment — a take-home project, a pair programming session, a system design discussion where they genuinely care about how you think — using AI to fake your way through that is wrong. Full stop. That company made an effort to evaluate you fairly, and you’re undermining it.</p><p>But if a company throws you into a 45-minute LeetCode grinder as an early elimination round — one that can knock you out before you ever get to show your system design thinking or how you work with a team? The moral calculus shifts. That interview wasn’t designed to evaluate you fairly. It was designed to be cheap and scalable. Optimizing around it — by any means — is a rational response to an irrational system.</p><p>I’m not saying it’s <em>right</em>. I’m saying the wrongness is shared.</p><figure><img alt="The responsibility isn’t one-sided. It never was." src="https://cdn-images-1.medium.com/max/1024/1*ebGO4Ql-yHNLgc3z9DycoQ.png" /><figcaption>The responsibility isn’t one-sided. 
It never was.</figcaption></figure><h4>Why I Built Ezzi Anyway</h4><p>The obvious question hangs over the whole project: if you see the ethical issues, why build it?</p><p>Because I think tools like Ezzi have a role that goes beyond the interview itself.</p><p>When I started working on <a href="https://github.com/GetEzzi/ezzi-app">Ezzi</a>, the primary use case in my head wasn’t “help people cheat.” It was “help people learn.” The AI-powered overlay can explain concepts, walk through approaches, and help someone understand <em>why</em> a solution works — not just <em>what</em> the solution is.</p><p>Used as a learning aid — during practice sessions, during LeetCode grinding, during mock interviews — it’s genuinely useful. It’s like having a patient tutor who can explain dynamic programming for the fifteenth time without getting frustrated.</p><p>Does it also work during real interviews? Yes. I designed the invisible overlay specifically to work in that context. And I’m not going to pretend I didn’t know what people would use it for.</p><p>But I don’t think the tool is the problem. The ethics live in the use, not the existence.</p><p>That’s my position, and I hold it knowing full well that some people will disagree — loudly.</p><h3>What Interviews Should Actually Look Like</h3><p>I don’t want to just criticize without offering something constructive. Here’s what I think good technical hiring looks like:</p><p><strong>Evaluate real work.</strong> Look at someone’s GitHub. Read their blog posts. Review a project they built. This tells you more about someone’s engineering ability than any whiteboard problem.</p><p><strong>Test collaboration, not isolation.</strong> Pair programming sessions where you work together on a problem. How do they communicate? How do they handle getting stuck? How do they respond to feedback? 
These are the skills that actually matter on the job.</p><p><strong>Let candidates use their tools.</strong> If a candidate will use AI tools on the job — and they will — let them use AI tools in the interview. Evaluate their ability to leverage those tools effectively. That’s a skill too.</p><p><strong>Use longer, realistic assessments.</strong> A paid take-home project that resembles actual work. A trial day where they work on a real (sanitized) problem. These are more expensive, yes. But they actually work.</p><p><strong>Focus on the conversation, not the code.</strong> Ask a candidate to explain their approach, their tradeoffs, their reasoning. That’s much harder to fake with AI than the code itself.</p><p>Some companies already do this. They’re the ones making the best hires. They’re also the ones who don’t need to worry about AI interview tools, because their process tests things AI can’t fake.</p><figure><img alt="This is what a good interview looks like. Collaborative, realistic and human." src="https://cdn-images-1.medium.com/max/1024/1*Sss8tfTgj8tqu6I6OgxeWw.png" /><figcaption>This is what a good interview looks like. Collaborative, realistic and human.</figcaption></figure><h3>The Uncomfortable Truth</h3><p>Here’s the thing nobody wants to say out loud.</p><p>The debate about AI in interviews isn’t really about AI. It’s about the fact that our industry built a hiring process that doesn’t work, made it the standard, and now doesn’t know what to do when technology exposes how fragile it is.</p><p>Candidates didn’t break the system. They’re just responding to the incentives the system created. When you make the hiring gate a performance that can be gamed, people will game it. With LeetCode grinding. With coaching services. With AI tools. With whatever the next thing is.</p><p>If we want to fix this, we need to stop blaming the people trying to get hired and start fixing the process that hires them.</p><p>That’s where the conversation should be. 
Not “should candidates be allowed to use AI?” but “why is our interview process so vulnerable to AI that it breaks?”</p><p>If you want to check out Ezzi yourself — whether as a learning aid, a practice tool, or just to see how the invisible overlay works under the hood — it’s open source and free: <a href="https://github.com/GetEzzi/ezzi-app">github.com/GetEzzi/ezzi-app</a></p><p><em>This is a debate that’s going to get louder, not quieter. I’d love to hear where you land — especially if you disagree with me. The comment section is open. Let’s have the conversation the industry keeps avoiding.</em></p><p>I hope this was helpful. Good luck, and happy engineering!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ce68b7220073" width="1" height="1" alt=""><hr><p><a href="https://levelup.gitconnected.com/ai-coding-assistants-during-interviews-the-ethics-nobody-wants-to-talk-about-ce68b7220073">AI Coding Assistants During Interviews: The Ethics Nobody Wants to Talk About</a> was originally published in <a href="https://levelup.gitconnected.com">Level Up Coding</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How I Made a Desktop App Invisible to Screen Sharing (Electron + OS-Level Tricks)]]></title>
            <link>https://levelup.gitconnected.com/how-i-made-a-desktop-app-invisible-to-screen-sharing-electron-os-level-tricks-5734513c1e67?source=rss-eb0dbd32d4c4------2</link>
            <guid isPermaLink="false">https://medium.com/p/5734513c1e67</guid>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[apps]]></category>
            <category><![CDATA[javascript]]></category>
            <dc:creator><![CDATA[Dmitry Khorev]]></dc:creator>
            <pubDate>Fri, 13 Mar 2026 14:33:05 GMT</pubDate>
            <atom:updated>2026-03-13T14:33:05.678Z</atom:updated>
            <content:encoded><![CDATA[<p><em>A technical deep-dive into OS-level window management, Electron APIs, and the platform-specific tricks that make an overlay truly invisible</em></p><p><strong>TL;DR </strong>— Building a desktop overlay that’s invisible to screen sharing requires going deeper than Electron’s surface-level APIs. You need to understand how each OS captures windows, use the right combination of BrowserWindow properties, handle platform-specific edge cases (especially macOS Zoom), and manage focus behavior so the overlay doesn’t leave traces. This article walks through the actual techniques I used building <a href="https://github.com/GetEzzi/ezzi-app">Ezzi</a>.</p><h3>The Problem</h3><p>Here’s the situation. You’re building a desktop overlay — an always-on-top transparent window that floats above other applications. Maybe it’s a productivity tool, a note-taking overlay, a teleprompter, or in my case, an interview assistant called <a href="https://github.com/GetEzzi/ezzi-app">Ezzi</a>.</p><p>The overlay needs to be visible to the user but completely invisible when they share their screen. Not “mostly hidden.” Not “kinda transparent.” Invisible. As in the person on the other end of the screen share sees absolutely nothing.</p><p>This sounds simple until you actually try to do it. And then you discover that “invisible to screen sharing” means different things on different operating systems, different video conferencing apps, and different capture methods.</p><figure><img alt="What You See" src="https://cdn-images-1.medium.com/max/1024/1*PxTdB1L82XLpCVThshpCGA.png" /><figcaption>What You See</figcaption></figure><figure><img alt="What They See" src="https://cdn-images-1.medium.com/max/1024/1*81yvX0cpwqCht733_ogWHw.png" /><figcaption>What They See</figcaption></figure><p>Let me walk you through how I solved this.</p><h3>How Screen Sharing Actually Works</h3><p>Before we can hide from screen capture, we need to understand how screen capture works. 
And it’s different on every platform.</p><h4>Windows: Three Capture APIs</h4><p>Windows has evolved its screen capture APIs over the years. Modern apps use one of three approaches.</p><p><strong>BitBlt (Legacy)</strong></p><p>The oldest method. BitBlt copies pixels from one device context to another. Many older screen recording tools still use this. It operates on the GDI layer and captures whatever’s rendered to the screen device context.</p><p><strong>Desktop Duplication API (DXGI)</strong></p><p>Introduced in Windows 8. This is what most modern screen sharing apps use — Zoom, Teams, Discord. It captures the final composited desktop output directly from the GPU, which means it sees everything the DWM (Desktop Window Manager) composites together.</p><p><strong>Windows Graphics Capture (WGC)</strong></p><p>The newest API, introduced in Windows 10. This is what UWP apps and newer tools use. It can capture specific windows or the entire screen, and it’s the most “aware” of window properties and system policies.</p><p>The key thing here: all three methods respect a Windows API called SetWindowDisplayAffinity. This is our way in.</p><h4>macOS: CGWindowListCreateImage and ScreenCaptureKit</h4><p>macOS screen capture is conceptually simpler but has its own complications.</p><p>The traditional approach uses CGWindowListCreateImage, which composites windows into a single image. Apps specify which windows to include using various option flags. Most screen sharing tools on macOS use this under the hood.</p><p>Apple introduced ScreenCaptureKit in macOS 12.3 as a more modern alternative. It provides filter-based capture where apps can include or exclude specific windows and applications. Think of it as a query-based approach to screen capture — you describe what you want, and the system delivers it.</p><p>On macOS, the equivalent mechanism is the window’s sharingType property. 
Setting it to NSWindowSharingNone tells the window server to exclude the window from capture operations.</p><p>Both APIs respect this property. But — and this is a significant “but” — some apps have historically used more aggressive capture methods that bypass it. More on that when we get to the Zoom problem.</p><figure><img alt="Diagram showing how Windows DWM and macOS WindowServer handle screen capture requests, with content-protected windows being excluded from the composited output" src="https://cdn-images-1.medium.com/max/1024/1*8y-gg8srdkitBNHAZZeDrg.png" /><figcaption>Diagram showing how Windows DWM and macOS WindowServer handle screen capture requests, with content-protected windows being excluded from the composited output</figcaption></figure><h3>The Electron Foundation</h3><p>Ezzi is built with Electron 37+ and React 19+. The overlay is an Electron BrowserWindow with a very specific set of properties. Let me walk through the ones that matter.</p><h4>The Base Window Configuration</h4><p>In Ezzi, the window config lives in a dedicated config file. Here’s what the base settings look like for the Live Interview mode:</p><pre>// electron/window-config/configs/LiveInterviewConfig.ts<br>export const LiveInterviewConfig: WindowConfig = {<br>  baseSettings: {<br>    width: 500,<br>    height: 420,<br>    alwaysOnTop: true,<br>    show: true,<br>    fullscreenable: false,<br>    focusable: true,<br>    enableLargerThanScreen: true,<br>    frame: false,<br>    hasShadow: false,<br>    transparent: true,<br>    skipTaskbar: true,<br>    titleBarStyle: &#39;hidden&#39;,<br>    backgroundColor: &#39;#00000000&#39;,<br>    type: &#39;panel&#39;,<br>    paintWhenInitiallyHidden: true,<br>    movable: true,<br>  },<br>  // ...behavior configs<br>};</pre><p>Let’s break down why each property matters.</p><p><strong>transparent: true</strong> with <strong>backgroundColor: ‘#00000000’</strong> makes the window fully transparent. The #00000000 is ARGB with zero alpha. 
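</p><p>As a quick sanity check on that format, here is a tiny standalone sketch (my own helper for illustration, not part of Ezzi) that decodes an ARGB hex string into its four channels:</p>

```typescript
// Decode an Electron-style ARGB hex color ("#AARRGGBB") into its channels.
// A zero alpha byte (a === 0) is what makes the background fully transparent.
function parseArgb(hex: string): { a: number; r: number; g: number; b: number } {
  const m = /^#([0-9a-fA-F]{8})$/.exec(hex);
  if (!m) {
    throw new Error(`expected #AARRGGBB, got ${hex}`);
  }
  const v = parseInt(m[1], 16);
  return {
    a: (v >>> 24) & 0xff, // alpha: 0x00 means fully transparent
    r: (v >>> 16) & 0xff,
    g: (v >>> 8) & 0xff,
    b: v & 0xff,
  };
}

console.log(parseArgb('#00000000').a); // 0, i.e. fully transparent
```

<p>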
Without this combo, you get a white or system-colored rectangle. Not very invisible.</p><p><strong>frame: false</strong> with <strong>titleBarStyle: ‘hidden’</strong> removes the native window frame entirely. No title bar, no close/minimize/maximize buttons.</p><p><strong>hasShadow: false</strong> is a macOS-specific detail. Windows get a subtle drop shadow by default. Even a transparent window can cast a shadow, and that shadow shows up in screen captures.</p><p><strong>type: ‘panel’</strong> is one of the less obvious but important settings. On macOS, this creates a panel-style window that behaves differently from a regular window. Panels don’t appear in the Dock’s window list and have different focus behavior — they can float above other apps without stealing the active state.</p><p><strong>skipTaskbar: true</strong> hides the window from the Windows taskbar. You don’t want a mysterious app icon appearing while someone’s watching your screen.</p><p><strong>enableLargerThanScreen: true</strong> lets the window extend beyond the screen edges. This is useful when moving the overlay around — you don’t want it snapping back when it partially goes off-screen.</p><p>The window creation in the main process then merges these settings with platform-specific options:</p><pre>// electron/main.ts<br>const windowsSpecificOptions =<br>  process.platform === &#39;win32&#39; &amp;&amp; platformConfigForCreation.win32<br>    ? { thickFrame: platformConfigForCreation.win32.thickFrame }<br>    : {};<br><br>const baseWindowSettings: Electron.BrowserWindowConstructorOptions = {<br>  ...windowConfig.baseSettings,<br>  ...windowsSpecificOptions,<br>  x: state.currentX,<br>  y: 50,<br>  webPreferences: {<br>    nodeIntegration: false,<br>    contextIsolation: true,<br>    preload: isDev<br>      ? 
path.join(__dirname, &#39;../dist-electron/preload.js&#39;)<br>      : path.join(__dirname, &#39;preload.js&#39;),<br>    scrollBounce: true,<br>  },<br>};<br><br>state.mainWindow = new BrowserWindow(baseWindowSettings);</pre><p>Notice the thickFrame option on Windows; the platform config sets it to false. This disables the Windows “thick frame” style that allows resizing from the edges. Without it, Windows would render resize handles around the window — visible artifacts on a supposedly invisible overlay.</p><p>This gives you a transparent, frameless, always-on-top window. But it’s still visible to screen capture. The next step is the critical one.</p><h4>The Invisibility Switch</h4><p>Right after creating the window, we apply the three core protection calls:</p><pre>// electron/main.ts<br>state.mainWindow.setContentProtection(true);<br>state.mainWindow.setVisibleOnAllWorkspaces(true, {<br>  visibleOnFullScreen: true,<br>});<br>state.mainWindow.setAlwaysOnTop(true, &#39;screen-saver&#39;, 1);</pre><p>setContentProtection(true) is the core mechanism. Under the hood, it does platform-specific work.</p><p><strong>On Windows</strong>, it calls:</p><pre>// What Electron does internally on Windows<br>SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE);</pre><p>WDA_EXCLUDEFROMCAPTURE was introduced in Windows 10 version 2004 (build 19041). It tells the DWM to exclude this window from all capture operations. The window simply doesn’t exist as far as screen sharing is concerned.</p><p>Before this flag existed, the older WDA_MONITOR flag was the only option. It would replace the window content with a black rectangle in captures rather than making it fully invisible. 
WDA_EXCLUDEFROMCAPTURE is strictly better — the window doesn’t show up at all, not even as a blank region.</p><p><strong>On macOS</strong>, it calls:</p><pre>// What Electron does internally on macOS<br>[nsWindow setSharingType:NSWindowSharingNone];</pre><p>NSWindowSharingNone tells the window server that this window should not be included in any screen sharing or capture operations. It’s been available since early macOS versions, though the behavior has evolved across different macOS releases.</p><p>The interesting part is that this is a single Electron API call abstracting away two completely different OS-level mechanisms. You don’t need to write native code or use node-ffi. Electron handles the platform dispatch.</p><h4>The Always-On-Top Level</h4><p>Setting alwaysOnTop: true in the constructor is just the starting point. Electron lets you specify the <em>level</em> of always-on-top, and the level matters a lot. In <a href="https://getezzi.com/">Ezzi</a>, all visibility states use ’screen-saver’:</p><pre>// electron/window-config/configs/LiveInterviewConfig.ts<br>showBehavior: {<br>  opacity: 1,<br>  ignoreMouseEvents: false,<br>  skipTaskbar: true,<br>  alwaysOnTop: true,<br>  alwaysOnTopLevel: &#39;screen-saver&#39;,<br>  visibleOnAllWorkspaces: true,<br>  visibleOnFullScreen: true,<br>  focusable: true,<br>  contentProtection: true,<br>},</pre><p>Electron supports several window levels: ’normal’, ’floating’, ’torn-off-menu’, ’modal-panel’, ’main-menu’, ’status’, ’pop-up-menu’, and ’screen-saver’.</p><p>For an interview overlay, you want it above everything — including the full-screen mode that some interview platforms use. 
’screen-saver’ is the highest level before system-level windows, so it stays on top even when the interview platform goes full-screen.</p><p>The visibleOnAllWorkspaces: true with visibleOnFullScreen: true ensures the overlay persists across all macOS virtual desktops and doesn’t disappear when another app enters full-screen mode.</p><figure><img alt="Electron BrowserWindow inspector with the key properties highlighted — transparent, frameless, content protection enabled" src="https://cdn-images-1.medium.com/max/1024/1*JHsz3jWiw2Vks7yIBI9v5A.png" /><figcaption>Electron BrowserWindow inspector with the key properties highlighted — transparent, frameless, content protection enabled</figcaption></figure><h3>Config-Driven Visibility States</h3><p>One design decision that paid off was treating visibility as a configuration problem rather than scattering setContentProtection and setIgnoreMouseEvents calls throughout the codebase.</p><p><a href="https://github.com/GetEzzi/ezzi-app">Ezzi</a> defines a WindowVisibilityConfig type:</p><pre>// electron/window-config/WindowConfig.ts<br>export interface WindowVisibilityConfig {<br>  opacity: number;<br>  ignoreMouseEvents: boolean;<br>  skipTaskbar: boolean;<br>  alwaysOnTop: boolean;<br>  alwaysOnTopLevel:<br>    | &#39;normal&#39;<br>    | &#39;floating&#39;<br>    | &#39;torn-off-menu&#39;<br>    | &#39;modal-panel&#39;<br>    | &#39;main-menu&#39;<br>    | &#39;status&#39;<br>    | &#39;pop-up-menu&#39;<br>    | &#39;screen-saver&#39;;<br>  visibleOnAllWorkspaces: boolean;<br>  visibleOnFullScreen: boolean;<br>  focusable: boolean;<br>  contentProtection: boolean;<br>}</pre><p>Then a factory applies these configs atomically:</p><pre>// electron/window-config/WindowConfigFactory.ts<br>private applyVisibilityConfig(<br>  window: BrowserWindow,<br>  config: WindowVisibilityConfig,<br>): void {<br>  const currentBounds = window.getBounds();<br><br>  if (config.ignoreMouseEvents) {<br>    window.setIgnoreMouseEvents(true, { 
forward: true });<br>  } else {<br>    window.setIgnoreMouseEvents(false);<br>  }<br>  window.setFocusable(config.focusable);<br>  window.setSkipTaskbar(config.skipTaskbar);<br>  window.setAlwaysOnTop(config.alwaysOnTop, config.alwaysOnTopLevel, 1);<br>  window.setVisibleOnAllWorkspaces(config.visibleOnAllWorkspaces, {<br>    visibleOnFullScreen: config.visibleOnFullScreen,<br>  });<br>  window.setContentProtection(config.contentProtection);<br>  window.setOpacity(config.opacity);<br><br>  window.setBounds(currentBounds);<br>}</pre><p>Notice the setBounds(currentBounds) at the end. Without it, some Electron API calls can subtly shift the window position or size. Saving and restoring the bounds ensures the window stays exactly where it was.</p><p>This approach means every visibility state — show, hide, queue with screenshots, queue empty — is a declarative config object. Adding a new state means adding a new config, not tracking down scattered imperative calls.</p><h3>Click-Through Transparency</h3><p>An invisible overlay is useless if it blocks mouse interaction with the apps underneath. You need the overlay to be “click-through” — mouse events should pass right through it to whatever window is below.</p><p>Look at the ignoreMouseEvents field in the config. It switches between states depending on what the user is doing:</p><pre>// electron/window-config/configs/LiveInterviewConfig.ts<br>showBehavior: {<br>  ignoreMouseEvents: false,  // interactive - user can click the overlay<br>  focusable: true,<br>  // ...<br>},<br>hideBehavior: {<br>  ignoreMouseEvents: true,   // click-through - events pass to app below<br>  focusable: true,<br>  // ...<br>},<br>queueWithScreenshots: {<br>  ignoreMouseEvents: true,   // click-through while displaying screenshots<br>  focusable: false,          // also not focusable at all<br>  // ...<br>},</pre><p>The { forward: true } option passed to setIgnoreMouseEvents(true, { forward: true }) is important on macOS. 
Without it, the overlay stops receiving mouse events entirely while it is ignoring them. With it, clicks still pass through to the window below, but mouse-move messages are forwarded to the overlay’s page, so hover (enter/leave) detection keeps working.</p><p>The toggle between states happens via Cmd+B:</p><pre>// electron/shortcuts.ts<br>globalShortcut.register(&#39;CommandOrControl+B&#39;, () =&gt; {<br>  this.deps.toggleMainWindow();<br>});</pre><p>Which triggers this:</p><pre>// electron/main.ts<br>let isToggling = false;<br>function toggleMainWindow(): void {<br>  if (isToggling) {<br>    return;<br>  }<br><br>  isToggling = true;<br><br>  if (state.isWindowVisible) {<br>    hideMainWindow();<br>  } else {<br>    showMainWindow();<br>  }<br><br>  setTimeout(() =&gt; {<br>    isToggling = false;<br>  }, 300);<br>}</pre><p>The debounce is important. Without it, rapid key presses can trigger multiple show/hide transitions that get the window into a confused state. The 300ms cooldown prevents this.</p><p>Press Cmd+B to interact with the overlay. Press it again to make it click-through. This keeps the user’s interaction with the interview platform natural — they’re not alt-tabbing between windows, which would look suspicious.</p><h3>Focus Detection Prevention</h3><p>Here’s a subtle problem that most people miss when building overlays. Even if your window is invisible to screen capture, it can still be detected through focus behavior.</p><p>Interview platforms can monitor which window has focus. 
If your overlay steals focus when it becomes interactive, the interview platform detects that its window lost focus — a potential red flag.</p><h4>What Interview Platforms Can See</h4><p>Browser-based interview platforms typically monitor two things:</p><pre>// What interview platforms might run<br>window.addEventListener(&#39;blur&#39;, () =&gt; {<br>  reportFocusLoss() // &quot;User switched away from this tab&quot;<br>})<br><br>document.addEventListener(&#39;visibilitychange&#39;, () =&gt; {<br>  if (document.hidden) {<br>    reportTabSwitch() // &quot;Tab is no longer visible&quot;<br>  }<br>})</pre><p>The blur event fires when the browser window loses focus to another application. (It’s registered on window because blur doesn’t bubble up to document.) The visibilitychange event fires when the tab becomes hidden (e.g., the user switches to a different tab).</p><h4>How We Handle Focus</h4><p>The key insight is in how Ezzi shows the window. Look at this line in showMainWindow():</p><pre>// electron/main.ts<br>function showMainWindow(): void {<br>  if (state.mainWindow &amp;&amp; !state.mainWindow.isDestroyed()) {<br>    if (state.windowPosition &amp;&amp; state.windowSize) {<br>      state.mainWindow.setBounds({<br>        ...state.windowPosition,<br>        ...state.windowSize,<br>      });<br>    }<br><br>    const configFactory = WindowConfigFactory.getInstance();<br><br>    state.mainWindow.setOpacity(0);<br>    state.mainWindow.showInactive();  // &lt;-- this is the key<br><br>    // Apply appropriate behavior based on current view<br>    configFactory.applyShowBehavior(state.mainWindow, state.appMode);<br><br>    state.isWindowVisible = true;<br>    state.shortcutsHelper?.registerAllShortcuts();<br>  }<br>}</pre><p>showInactive() instead of show(). This is the critical difference. show() activates the window and gives it focus. showInactive() makes the window visible without stealing focus from the currently active application. 
The browser running the interview platform stays focused.</p><p>The opacity trick (setOpacity(0) before showing, then restored to 1 by the config) prevents a brief visual flash during the state transition.</p><p>On top of this, Ezzi re-applies platform-specific configurations whenever the window gains focus, to prevent the OS from overriding our settings:</p><pre>// electron/main.ts<br>function handleWindowFocus(): void {<br>  preserveWindowConfiguration();<br>}<br><br>function preserveWindowConfiguration(): void {<br>  if (!state.mainWindow || state.mainWindow.isDestroyed()) {<br>    return;<br>  }<br><br>  const windowConfig = WindowConfigFactory.getInstance().getConfig(<br>    state.appMode,<br>  );<br>  const platformConfig = windowConfig.behavior.platformSpecific;<br><br>  if (process.platform === &#39;darwin&#39; &amp;&amp; platformConfig.darwin) {<br>    state.mainWindow.setWindowButtonVisibility(<br>      platformConfig.darwin.windowButtonVisibility,<br>    );<br>    state.mainWindow.setHiddenInMissionControl(<br>      platformConfig.darwin.hiddenInMissionControl,<br>    );<br>    state.mainWindow.setBackgroundColor(platformConfig.darwin.backgroundColor);<br>    state.mainWindow.setHasShadow(platformConfig.darwin.hasShadow);<br>  }<br><br>  if (process.platform === &#39;win32&#39; &amp;&amp; platformConfig.win32) {<br>    state.mainWindow.setMenuBarVisibility(false);<br>    state.mainWindow.setAutoHideMenuBar(true);<br>  }<br>}</pre><p>Why re-apply on every focus gain? Because macOS and Windows can reset certain window properties during focus transitions. I learned this the hard way — the overlay would occasionally appear with a shadow or window buttons after the system handled a focus event. 
Re-applying the config on focus ensures consistency.</p><h4>Platform-Specific Hiding</h4><p>On macOS, there’s extra work to truly hide the window from the system UI:</p><pre>// electron/window-config/configs/LiveInterviewConfig.ts<br>platformSpecific: {<br>  darwin: {<br>    hiddenInMissionControl: true,<br>    windowButtonVisibility: false,<br>    backgroundColor: &#39;#00000000&#39;,<br>    hasShadow: false,<br>  },<br>  win32: {<br>    thickFrame: false,<br>  },<br>},</pre><p>hiddenInMissionControl: true prevents the overlay from appearing in Mission Control (the three-finger-swipe-up view). Without this, swiping up during an interview would reveal the overlay window.</p><h4>Why Native Windows Beat Browser Extensions</h4><p>This is actually one of the key advantages of a native desktop overlay vs. a browser extension approach. Browser extensions that open new tabs, pop-ups, or iframes can trigger visibilitychange and blur events. A native Electron window sitting on top of the browser does not trigger visibilitychange at all — the browser tab is still technically visible and in the foreground.</p><p>The blur event is the one we need to manage carefully. And because we control the Electron window’s focus behavior at the OS level — with showInactive(), type: ‘panel’, and careful focus event handling — we have the tools to do it properly.</p><h3>Shortcut-Based Interaction Model</h3><p>Ezzi’s entire interaction model is built on global keyboard shortcuts. This isn’t just a convenience — it’s a stealth requirement. Every mouse click on the overlay risks a focus change. 
Keyboard shortcuts don’t.</p><p>Here’s the full shortcut registration:</p><pre>// electron/shortcuts.ts<br>public registerAllShortcuts(): void {<br>  globalShortcut.unregisterAll();<br><br>  globalShortcut.register(&#39;CommandOrControl+H&#39;, () =&gt; {<br>    void (async () =&gt; {<br>      const mainWindow = this.deps.getMainWindow();<br>      if (mainWindow) {<br>        const screenshotPath = await this.deps.takeScreenshot();<br>        const preview = await this.deps.getImagePreview(screenshotPath);<br>        mainWindow.webContents.send(&#39;screenshot-taken&#39;, {<br>          path: screenshotPath,<br>          preview,<br>        });<br>      }<br>    })();<br>  });<br><br>  globalShortcut.register(&#39;CommandOrControl+Enter&#39;, () =&gt; {<br>    void this.deps.processingHelper?.processScreenshotsSolve();<br>  });<br><br>  globalShortcut.register(&#39;CommandOrControl+G&#39;, () =&gt; {<br>    this.deps.processingHelper?.cancelOngoingRequests();<br>    this.deps.clearQueues();<br>    this.deps.setView(&#39;queue&#39;);<br>    const mainWindow = this.deps.getMainWindow();<br>    if (mainWindow &amp;&amp; !mainWindow.isDestroyed()) {<br>      mainWindow.webContents.send(&#39;reset-view&#39;);<br>    }<br>  });<br><br>  globalShortcut.register(&#39;CommandOrControl+Left&#39;, () =&gt; {<br>    this.deps.moveWindowLeft();<br>  });<br>  globalShortcut.register(&#39;CommandOrControl+Right&#39;, () =&gt; {<br>    this.deps.moveWindowRight();<br>  });<br>  globalShortcut.register(&#39;CommandOrControl+Down&#39;, () =&gt; {<br>    this.deps.moveWindowDown();<br>  });<br>  globalShortcut.register(&#39;CommandOrControl+Up&#39;, () =&gt; {<br>    this.deps.moveWindowUp();<br>  });<br><br>  globalShortcut.register(&#39;CommandOrControl+B&#39;, () =&gt; {<br>    this.deps.toggleMainWindow();<br>  });<br>}</pre><p>But here’s the important detail — when the window is hidden, we unregister everything except Cmd+B:</p><pre>// electron/shortcuts.ts<br>public 
registerVisibilityShortcutOnly(): void {<br>  globalShortcut.unregisterAll();<br><br>  setTimeout(() =&gt; {<br>    globalShortcut.register(&#39;CommandOrControl+B&#39;, () =&gt; {<br>      this.deps.toggleMainWindow();<br>    });<br>  }, 500);<br>}</pre><p>The 500ms delay prevents a race condition where unregistering and re-registering the same shortcut too quickly can fail silently in Electron.</p><p>Why unregister shortcuts when hidden? Two reasons. First, it frees up key combinations like Cmd+H and Cmd+G for other apps. Second, it prevents accidental actions — you don’t want a stray Cmd+Enter generating a solution while the overlay is hidden.</p><h3>Window Movement and Dynamic Sizing</h3><p>An overlay that can’t be repositioned is impractical. But window movement introduces its own challenges.</p><h4>Keyboard-Driven Positioning</h4><p>Instead of drag-to-move (which requires the window to be interactive and focused), <a href="https://github.com/GetEzzi/ezzi-app">Ezzi</a> uses keyboard shortcuts. 
The movement functions handle horizontal and vertical axes differently:</p><pre>// electron/main.ts<br>state.step = 60; // pixels per move<br><br>function moveWindowHorizontal(updateFn: (x: number) =&gt; number): void {<br>  if (!state.mainWindow) {<br>    return;<br>  }<br>  state.currentX = updateFn(state.currentX);<br>  state.mainWindow.setPosition(<br>    Math.round(state.currentX),<br>    Math.round(state.currentY),<br>  );<br>}<br><br>function moveWindowVertical(updateFn: (y: number) =&gt; number): void {<br>  if (!state.mainWindow || !state.windowSize) {<br>    return;<br>  }<br><br>  const newY = updateFn(state.currentY);<br>  // Allow window to go 2/3 off screen in either direction<br>  const maxUpLimit = (-(state.windowSize.height || 0) * 2) / 3;<br>  const maxDownLimit =<br>    state.screenHeight + ((state.windowSize.height || 0) * 2) / 3;<br><br>  if (newY &gt;= maxUpLimit &amp;&amp; newY &lt;= maxDownLimit) {<br>    state.currentY = newY;<br>    state.mainWindow.setPosition(<br>      Math.round(state.currentX),<br>      Math.round(state.currentY),<br>    );<br>  }<br>}</pre><p>Notice the vertical movement has boundary constraints — the window can go 2/3 off screen in either direction. This lets users tuck the overlay mostly off-screen while keeping a sliver visible, without losing it entirely. Horizontal movement has no limits, which lets the window slide to a different monitor.</p><p>The 60-pixel step size is a UX tradeoff. Too small and repositioning feels sluggish. Too large and fine-tuning becomes impossible. 60px felt right after testing on multiple screen sizes.</p><h4>Gaze Alignment</h4><p>This is a subtle UX detail that affects stealth. The overlay should sit near the coding area of the interview platform. 
If the user is constantly looking at the top-right corner of the screen while typing in the center, it looks suspicious on camera.</p><p>Smart default positioning places the overlay adjacent to where the code editor typically sits in platforms like HackerRank and CoderPad. The keyboard shortcuts let users fine-tune this for each platform’s layout.</p><h4>Content-Driven Resizing</h4><p>The overlay also needs to resize based on content. A solution with a detailed thought process needs more space than a compact code snippet:</p><pre>// electron/main.ts<br>function setWindowDimensions(<br>  width: number,<br>  height: number,<br>  _source: string,<br>): void {<br>  if (state.mainWindow &amp;&amp; !state.mainWindow.isDestroyed()) {<br>    const [currentX, currentY] = state.mainWindow.getPosition();<br>    const primaryDisplay = screen.getPrimaryDisplay();<br>    const workArea = primaryDisplay.workAreaSize;<br>    const maxWidth = Math.floor(workArea.width * 0.4);<br><br>    const newWidth = Math.min(width + 32, maxWidth);<br>    const newHeight = Math.ceil(height);<br><br>    let adjustedX = currentX;<br>    let adjustedY = currentY;<br>    if (isWindowCompletelyOffScreen(currentX, currentY, newWidth, newHeight)) {<br>      adjustedX = Math.max(0, (workArea.width - newWidth) / 2);<br>      adjustedY = Math.max(0, (workArea.height - newHeight) / 2);<br>    }<br><br>    state.mainWindow.setBounds({<br>      x: adjustedX,<br>      y: adjustedY,<br>      width: newWidth,<br>      height: newHeight,<br>    });<br><br>    state.currentX = adjustedX;<br>    state.currentY = adjustedY;<br>    state.windowPosition = { x: adjustedX, y: adjustedY };<br>    state.windowSize = { width: newWidth, height: newHeight };<br>  }<br>}</pre><p>A few things worth noting here. The 40% max width cap prevents the overlay from covering more than two-fifths of the screen — enough to display code, not enough to completely block the interview platform. 
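Pulled out as a pure function for illustration (a sketch, not Ezzi’s actual helper), the width rule is:

```typescript
// Sketch of the sizing rule above: content width plus a 32px padding
// allowance, hard-capped at 40% of the work-area width.
function clampOverlayWidth(contentWidth: number, workAreaWidth: number): number {
  const maxWidth = Math.floor(workAreaWidth * 0.4);
  return Math.min(contentWidth + 32, maxWidth);
}

clampOverlayWidth(2000, 1920); // capped at 768 on a 1920px-wide display
```

On a 1920px-wide display the cap works out to 768px, so even a very wide solution view leaves most of the interview platform visible.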
The + 32 adds padding for content that sits flush against the edges.</p><p>The off-screen detection is a safety net. If the window somehow ends up completely off screen (can happen when an external monitor disconnects), it auto-centers instead of becoming lost forever.</p><p>The React renderer calculates the needed size based on its content, sends it to the main process via IPC, and the main process handles the actual resize with these constraints applied.</p><figure><img alt="Three screenshots showing the overlay at different sizes — compact mode with just code, medium mode with code and explanation, and expanded mode with full thought process" src="https://cdn-images-1.medium.com/max/1024/1*dkBCL7dUcxcFTrWu3cX4Ww.png" /><figcaption>Three screenshots showing the overlay at different sizes — compact mode with just code, medium mode with code and explanation, and expanded mode with full thought process</figcaption></figure><h3>The macOS Zoom Problem</h3><p>This is the single biggest platform-specific challenge I encountered building Ezzi. And it’s worth understanding in detail because it reveals how different capture implementations can behave differently even on the same OS.</p><h4>What Happens</h4><p>Most screen sharing on macOS uses either CGWindowListCreateImage or ScreenCaptureKit. Both respect NSWindowSharingNone. So when you set setContentProtection(true), the overlay is invisible in Teams, Chime, Google Meet, and all browser-based platforms.</p><p>But Zoom on macOS is different.</p><p>Zoom implemented its own capture pipeline that, in its default configuration, captures the raw display output rather than compositing individual windows. 
In this mode, it can see windows that have NSWindowSharingNone set, because it’s not querying the window server for individual window content — it’s reading the display buffer directly.</p><p>Think of it this way: NSWindowSharingNone tells the window server “don’t include me when someone asks for window data.” But if someone reads the entire display output instead of asking for window data, that instruction gets bypassed.</p><h4>The Workaround</h4><p>Zoom provides a setting called “Advanced Capture” with window filtering. When this is enabled, Zoom switches from display-level capture to window-level capture that correctly respects NSWindowSharingNone.</p><p>From the user’s perspective, the setup is:</p><p>1. Open Zoom Settings<br>2. Go to Screen Share<br>3. Enable “Use advanced capture with window filtering”<br>4. When sharing, select the specific application window (not “Desktop” or “Entire Screen”)</p><p>With this configuration, Zoom captures only the specified windows and correctly excludes content-protected windows from the output.</p><p>This is why Ezzi’s documentation notes that macOS Zoom requires “Advanced Capture with window filtering.” It’s not a limitation of our implementation — it’s a quirk of how Zoom handles capture on macOS.</p><figure><img alt="Zoom settings panel showing the “Advanced Capture” option with window filtering enabled" src="https://cdn-images-1.medium.com/max/640/1*pygszkrqGxSSCqbEqsolGQ.png" /><figcaption>Zoom settings panel showing the “Advanced Capture” option with window filtering enabled</figcaption></figure><h4>Why Windows Doesn’t Have This Issue</h4><p>On Windows, SetWindowDisplayAffinity(WDA_EXCLUDEFROMCAPTURE) is enforced at the DWM level. The Desktop Window Manager sits between applications and the display hardware. 
When a window has WDA_EXCLUDEFROMCAPTURE set, the DWM excludes it from all output paths — whether that’s BitBlt, DXGI Desktop Duplication, or Windows Graphics Capture.</p><p>There’s no way for Zoom (or any capture application) to bypass this on Windows, because the exclusion happens before the captured data is even made available to the application. It’s a fundamentally different architecture than macOS, and in this specific case, it works in our favor.</p><h4>Staying Ahead of Zoom Updates</h4><p>Zoom updates its capture implementation regularly. This is one of those things that requires ongoing testing. What works with Zoom 6.0 might behave differently with Zoom 6.1. I monitor Zoom release notes and test each major version to ensure compatibility.</p><p>My recommendation for anyone building similar tools: automate what you can, but accept that manual testing against third-party apps is unavoidable. Keep a test matrix and check it with every release.</p><h3>Process-Level Stealth</h3><p>Making the window invisible is only part of the story. Some detection approaches don’t look at what’s visible on screen — they look at what processes are running on the system.</p><h4>Custom Application Naming</h4><p>Ezzi supports build-time configuration of the application name. 
There’s a build script that patches the Electron builder config:</p><pre>// scripts/build-config.js<br>const productName = process.env.PRODUCT_NAME || &#39;Ezzi&#39;;<br><br>const safeProductName = productName<br>  .replace(/[^a-zA-Z0-9\s-]/g, &#39;&#39;)<br>  .replace(/\s+/g, &#39;-&#39;);<br><br>const packagePath = path.join(__dirname, &#39;..&#39;, &#39;package.json&#39;);<br>const packageJson = JSON.parse(fs.readFileSync(packagePath, &#39;utf8&#39;));<br><br>packageJson.build.productName = productName;<br><br>packageJson.build.mac.artifactName =<br>  `${safeProductName}-Mac-\${arch}-\${version}.\${ext}`;<br>packageJson.build.win.artifactName =<br>  `${safeProductName}-Windows-\${version}.\${ext}`;<br><br>fs.writeFileSync(packagePath, JSON.stringify(packageJson, null, 2) + &#39;\n&#39;);</pre><p>Usage is simple:</p><pre>PRODUCT_NAME=&quot;System Monitor&quot; npm run build</pre><p>This changes the process name in Activity Monitor and Task Manager, the application name in the dock/taskbar, and the artifact name for distribution.</p><p>If someone inspects your running processes, they see “System Monitor” instead of anything that would raise questions.</p><p>The script sanitizes the name with regex to create a safe filename version. No special characters, spaces become hyphens. This matters because the artifact name becomes the installer filename and the application bundle name on macOS. There’s no rename hack or post-build patching — the name goes into package.json before electron-builder runs, so it’s compiled with the custom identity from the start.</p><h3>Testing Across Platforms</h3><p>Here’s what made this whole project particularly challenging: you can’t just test on one machine and call it done. 
The behavior varies across:</p><ul><li>Windows 10 vs Windows 11</li><li>macOS 12 through macOS 15</li><li>Zoom vs Teams vs Google Meet vs Chime</li><li>Native desktop apps vs browser-based platforms</li><li>Different versions of the same app (Zoom changes capture behavior between updates)</li></ul><h4>My Testing Approach</h4><p><strong>Automated build verification.</strong> CI/CD pipelines build for Windows and macOS on every PR. This catches configuration regressions.</p><p><strong>Manual capture testing.</strong> For each platform combination, I share the screen with a second device and verify the overlay is invisible. This can’t be fully automated because screen capture behavior depends on the capture application, its version, and its settings.</p><p><strong>Focus monitoring.</strong> Run browser developer tools on interview platform mock pages and verify no blur or visibilitychange events fire when interacting with the overlay. This is scriptable and can be part of a semi-automated test suite.</p><p><strong>Process inspection.</strong> Verify the custom app name appears correctly in Activity Monitor and Task Manager across different build configurations.</p><p>The matrix of combinations is large, and each OS or app update can potentially change things. This is ongoing maintenance, not a one-time implementation.</p><figure><img alt="Testing matrix spreadsheet showing platform combinations (OS x capture app x interview platform) with pass/fail status" src="https://cdn-images-1.medium.com/max/1024/1*vbaMKt-086sSijyvLsoDwA.png" /><figcaption>Testing matrix spreadsheet showing platform combinations (OS x capture app x interview platform) with pass/fail status</figcaption></figure><h3>What I Learned</h3><p>Building this taught me a few things that might be useful to anyone working with Electron overlays or OS-level window management.</p><p><strong>Electron’s abstractions are good but not complete.</strong> setContentProtection works well for the common case. 
But understanding what it does at the OS level — SetWindowDisplayAffinity on Windows, setSharingType on macOS — helps you debug platform-specific issues and understand why certain edge cases exist.</p><p><strong>Windows is actually easier here.</strong> WDA_EXCLUDEFROMCAPTURE is enforced at the DWM compositor level. No capture application can bypass it. macOS is more nuanced because different capture APIs have different levels of respect for NSWindowSharingNone, and apps like Zoom can choose which API to use.</p><p><strong>Config-driven window management pays off.</strong> Scattering setIgnoreMouseEvents and setContentProtection calls throughout the codebase leads to bugs. Declarative config objects for each visibility state made the system predictable and easy to extend.</p><p><strong>Focus management is harder than rendering.</strong> Making the window invisible to capture is relatively straightforward — it’s mostly one API call. Making sure it doesn’t trigger focus-loss detection on the interview platform is the more subtle engineering challenge. showInactive(), type: ‘panel’, and re-applying configs on focus events — these details make the difference.</p><p><strong>Testing is never done.</strong> Every OS update, every Zoom update, every new interview platform version can change the capture behavior. You need an ongoing testing strategy, not just a one-time verification pass.</p><h3>The Bigger Picture</h3><p>The techniques here aren’t unique to interview assistants. Any application that needs to display sensitive or private content while the user is sharing their screen benefits from these patterns. Password managers, medical information overlays, confidential communications tools — they all face the same fundamental challenge.</p><p>The OS-level mechanisms exist precisely because there are legitimate use cases for windows that shouldn’t be captured. 
Electron makes them accessible through a clean API, and with the right combination of window properties, focus management, and platform-specific handling, you can build an overlay that’s truly invisible.</p><p>If you want to see this in action or dig through the actual implementation, Ezzi is open source: <a href="https://github.com/GetEzzi/ezzi-app">https://github.com/GetEzzi/ezzi-app</a>.</p><p><em>If you’re building Electron overlays or working with screen capture APIs, I’d love to hear about your platform-specific war stories. The edge cases are endless, and sharing knowledge makes everyone’s implementations better.</em></p><p>I hope this was helpful. Good luck, and happy engineering!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5734513c1e67" width="1" height="1" alt=""><hr><p><a href="https://levelup.gitconnected.com/how-i-made-a-desktop-app-invisible-to-screen-sharing-electron-os-level-tricks-5734513c1e67">How I Made a Desktop App Invisible to Screen Sharing (Electron + OS-Level Tricks)</a> was originally published in <a href="https://levelup.gitconnected.com">Level Up Coding</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Why Technical Interviews Are Broken (And What We Can Do About It)]]></title>
            <link>https://levelup.gitconnected.com/why-technical-interviews-are-broken-and-what-we-can-do-about-it-8cfcf8f64b15?source=rss-eb0dbd32d4c4------2</link>
            <guid isPermaLink="false">https://medium.com/p/8cfcf8f64b15</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[career-advice]]></category>
            <category><![CDATA[coding-interviews]]></category>
            <category><![CDATA[tech-industry]]></category>
            <dc:creator><![CDATA[Dmitry Khorev]]></dc:creator>
            <pubDate>Wed, 18 Feb 2026 18:20:23 GMT</pubDate>
            <atom:updated>2026-02-18T18:20:23.943Z</atom:updated>
            <content:encoded><![CDATA[<p><em>The gap between what we test in interviews and what we actually do at work has never been wider.</em></p><p><strong>TL;DR</strong> — Technical interviews in their current form don’t predict job performance. We’ve known this for years, yet the industry keeps doubling down on LeetCode-style assessments that burn out good developers and reward the wrong skills. I compare the most common interview formats, explain why each falls short, and argue for what a better process could look like. I also talk about why tools like <a href="https://github.com/GetEzzi/ezzi-app">Ezzi</a> exist in the first place — they’re a symptom, not the disease.</p><h3>The Interview That Changed My Mind</h3><p>A few years ago, I bombed a technical interview at a company I genuinely wanted to join. The role was a senior backend position. I had years of production experience, had shipped systems handling real traffic, and could talk architecture all day.</p><p>The interview? Reverse a linked list on a whiteboard. Then implement a sliding window algorithm. Then do it again, but optimized.</p><p>I stumbled. Not because I didn’t know the concepts — I did. But the pressure of performing live, on a timer, with someone watching my every keystroke, made my brain lock up. I walked out feeling like a fraud. Two weeks later I landed a different role at a comparable company, where the interview was a system design discussion and a take-home project. Same me, same skills, wildly different outcome.</p><p>That experience stuck with me. It made me question what these interviews are actually measuring. And the more I looked into it, the more I realized: the entire system is broken.</p><figure><img alt="What interviews test vs. what the job actually looks like." src="https://cdn-images-1.medium.com/max/1024/1*E1uPWDUqzl-xrKi-Er956Q.png" /><figcaption>What interviews test vs. 
what the job actually looks like.</figcaption></figure><h3>The Disconnect Nobody Wants to Admit</h3><p>Here’s the uncomfortable truth. The skills that make you good at passing a coding interview and the skills that make you good at your actual job have surprisingly little overlap.</p><p>Day to day, a software engineer reads existing code, collaborates with teammates, debugs production issues, reviews pull requests, designs systems, and writes code that other people will maintain. None of that is tested in a 45-minute LeetCode session.</p><p>What gets tested instead? Your ability to recall a specific algorithm under time pressure, write syntactically perfect code without an IDE, and explain your thought process while simultaneously solving a puzzle. That’s a performance, not an evaluation.</p><p>Google’s own internal research found that interview scores were essentially useless at predicting on-the-job performance. They studied it, published the findings, and then… kept doing interviews the same way. If that doesn’t tell you something about how deeply entrenched this system is, nothing will.</p><figure><img alt="The overlap is smaller than you’d think." src="https://cdn-images-1.medium.com/max/1024/1*CyBfQ_DYfwS3vHmDK9Nikg.png" /><figcaption>The overlap is smaller than you’d think.</figcaption></figure><h3>The LeetCode Grind Is Burning People Out</h3><p>Let’s talk about what the current system actually does to candidates.</p><p>The expectation in 2025–2026 is that before you even apply, you should have ground through hundreds of LeetCode problems. There are entire communities, courses, and paid platforms built around this. People spend months preparing for interviews instead of, you know, building things.</p><p>I’ve seen senior engineers with a decade of experience spend their evenings and weekends drilling dynamic programming problems. Not because they enjoy it, and not because it makes them better engineers. 
Because it’s the toll you pay to switch jobs.</p><p>The result is a system that selects for people who have time and energy to grind. Young, single, no kids, no burnout yet. If you’re a parent, or you’re dealing with health issues, or you’re just exhausted from your current job — tough luck. The interview process punishes you for having a life outside of coding.</p><p>And the irony is that many of the best engineers I’ve worked with would struggle in these interviews. They’re great at thinking through complex systems over days, not at performing algorithmic gymnastics in 45 minutes.</p><figure><img alt="The nightly ritual of millions of developers who just want to switch jobs." src="https://cdn-images-1.medium.com/max/1024/1*PtufRZZoJymo0E3z7y5H5A.png" /><figcaption>The nightly ritual of millions of developers who just want to switch jobs.</figcaption></figure><h3>Why Companies Keep Doing It Anyway</h3><p>If the current system is so broken, why does it persist? A few reasons.</p><p><strong>It’s easy to standardize.</strong> LeetCode-style problems have clear right and wrong answers. You can grade them. You can compare candidates. You can build rubrics. For companies hiring at scale — the FAANGs and their imitators — this matters more than whether the assessment is actually meaningful.</p><p><strong>Nobody wants to be the one who changed it.</strong> If you’re a hiring manager and you switch to a new format, and a bad hire slips through, you’re on the hook. But if you use the standard LeetCode process and a bad hire happens, well, that’s just how it goes. The current system provides cover. It’s defensive hiring.</p><p><strong>Interviewers don’t know what else to do.</strong> Most engineers who conduct interviews were never trained to do so. They pull a problem from a list, watch the candidate struggle, and try to evaluate “problem-solving ability” with no clear criteria for what that means. 
The LeetCode format feels structured, so they stick with it.</p><p><strong>Candidates tolerate it.</strong> This is the part nobody likes to hear. We complain about the process, but we still prep for it. We still grind. We still show up. As long as candidates keep playing the game, companies have no incentive to change the rules.</p><figure><img alt="The self-reinforcing cycle that keeps the system alive." src="https://cdn-images-1.medium.com/max/1024/1*69rlTOiMdGl0wD8-V3Ii5A.png" /><figcaption>The self-reinforcing cycle that keeps the system alive.</figcaption></figure><h3>Comparing the Alternatives</h3><p>Not every company interviews the same way. Let’s look at the most common formats and where they fall short.</p><h4>Live Coding (LeetCode Style)</h4><p>The standard. You get a problem, you solve it live, the interviewer watches.</p><p><strong>What it tests:</strong> Algorithm recall, performance under pressure, ability to think out loud.</p><p><strong>What it misses:</strong> Literally everything about day-to-day software engineering. You don’t code from scratch in production. You don’t solve isolated puzzles. You don’t work without Google, documentation, or your IDE’s autocomplete.</p><p><strong>The real problem:</strong> It rewards memorization over understanding. Someone who has seen the exact problem before will outperform someone who’s a better engineer but hasn’t memorized that specific pattern. That’s not signal, that’s noise.</p><h4>Take-Home Assignments</h4><p>The candidate gets a project (build an API, create a small app) and has a few days to complete it.</p><p><strong>What it tests:</strong> Ability to write real code, make architectural decisions, handle edge cases, write tests.</p><p><strong>What it misses: </strong>It doesn’t test collaboration or communication. And it’s hard to know how much help the candidate got.</p><p><strong>The real problem:</strong> It’s a massive time ask. A “small” take-home can easily eat 4–8 hours. 
If you’re interviewing at three companies simultaneously, that’s a part-time job on top of your actual job. And many companies never even review the submission properly — they skim it in 10 minutes. The disrespect of candidate time is the biggest issue here.</p><h4>System Design</h4><p>The candidate is asked to design a system (URL shortener, chat application, distributed cache) at a high level.</p><p><strong>What it tests:</strong> Architectural thinking, understanding of distributed systems, ability to reason about trade-offs.</p><p><strong>What it misses:</strong> It heavily favors experienced candidates (which might be fine for senior roles). It’s also hard to standardize — two interviewers can evaluate the same session very differently.</p><p><strong>The real problem:</strong> It’s often conducted poorly. Many interviewers have a specific solution in mind and penalize candidates who go a different direction, even if the alternative is perfectly valid. It becomes “guess what I’m thinking” instead of “show me how you think.”</p><h4>Pair Programming</h4><p>The candidate works on a real-ish problem alongside an engineer from the team, often using a real IDE with access to documentation.</p><p><strong>What it tests:</strong> Collaboration, communication, how the candidate approaches a problem in a realistic environment.</p><p><strong>What it misses:</strong> It’s time-intensive for the interviewing team. And shy or introverted candidates can underperform even if they’re excellent solo contributors.</p><p><strong>The real problem:</strong> Very few companies do this well. It requires trained interviewers and carefully designed problems. Most companies that claim to do pair programming actually do live coding with an audience.</p><figure><img alt="Every format has trade-offs. The question is which trade-offs you’re willing to accept." src="https://cdn-images-1.medium.com/max/1024/1*F98w0ujg5IoOjCFjlGPdBQ.png" /><figcaption>Every format has trade-offs. 
The question is which trade-offs you’re willing to accept.</figcaption></figure><h3>Tools Like Ezzi Are a Symptom</h3><p>I built <a href="https://github.com/GetEzzi/ezzi-app">Ezzi</a> — an invisible overlay that provides AI-powered assistance during coding interviews. I’ve written about the technical journey and the engineering behind it in previous articles. But here’s what I want to say in this context: <a href="https://github.com/GetEzzi/ezzi-app">Ezzi</a> exists because the interview system created a market for it.</p><p>When the gap between what’s tested and what’s needed is this wide, people find ways to bridge it. Some memorize 500 LeetCode problems. Some hire interview coaches. Some use AI tools. The methods differ, but the motivation is the same: the process feels unfair, and people are looking for an edge.</p><p>I’m not going to moralize about whether using an AI tool during an interview is right or wrong. That’s a separate conversation (and one I’ll tackle in a future article). What I will say is this: if your interview process can be defeated by an AI reading the question and generating a solution, maybe the interview process is testing the wrong thing.</p><p>A good assessment should be hard to game not because it’s obscure, but because it genuinely measures what matters. If a candidate can have an AI solve the problem and still can’t do the job, that’s a bad hire. But if a candidate uses an AI tool and then performs well on the job — what exactly did your interview measure that mattered?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*85_-YEHvzVD6rfGGg3qCTw.png" /><figcaption>The continuum of interview preparation. Where do you draw the line — and does the line even matter?</figcaption></figure><h3>What a Better Process Could Look Like</h3><p>I don’t think there’s a single perfect interview format. But I do think the industry can do a lot better than what we have now. 
Here’s what I’d want to see.</p><p><strong>Paid trial days.</strong> Have the candidate work on a real (or realistic) task with the actual team for a day or two. Pay them for their time. This is expensive, yes. But it gives you more signal in a day than a dozen whiteboard sessions. Some companies already do this and swear by it.</p><p><strong>Structured behavioral interviews.</strong> Not the “tell me about a time you showed leadership” fluff. Real, calibrated questions about technical decisions the candidate has made, how they handled disagreements, how they debugged a production issue. Research consistently shows that structured behavioral interviews are among the best predictors of job performance.</p><p><strong>Portfolio and past work review.</strong> If a candidate has open-source contributions, a blog, side projects, or can walk you through a system they built at a previous job — that’s real evidence. It’s not perfect (not everyone has time for side projects), but it’s better than watching them sweat over a binary tree.</p><p><strong>Realistic coding exercises with tools.</strong> If you insist on a coding component, let them use their IDE. Let them Google things. Let them use AI tools, even. Then evaluate the result: is the code clean, tested, well-structured? Can they explain their decisions? That’s closer to what the job actually looks like.</p><p><strong>Shorter, more focused interviews.</strong> A five-round, full-day interview is an endurance test. It measures how well you perform while exhausted. 
Consider whether you really need five rounds, or if two well-designed sessions would give you the same signal.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*BX3giCFgYjVWrR-xRYi2Yw.png" /><figcaption>Less isn’t just better for candidates — it’s better for companies too.</figcaption></figure><h3>The Cost of Doing Nothing</h3><p>The status quo has real costs, and not just for candidates.</p><p>Companies miss great engineers who don’t interview well. They hire people who are excellent at interviews but mediocre at the actual work. They lose candidates to competitors who have a better process. And they contribute to an industry culture where preparing for interviews is a second job.</p><p>The candidates who are most hurt by the current system are often the ones companies claim to want: experienced engineers, career changers, people from non-traditional backgrounds, people with responsibilities outside of work. The grind favors the privileged.</p><p>If you’re a hiring manager reading this, I’d challenge you to ask one question: does your interview process test for what actually matters in the role? If the honest answer is “not really,” you have the power to change it. You don’t have to overhaul everything at once. Start with one round. Replace a LeetCode problem with a take-home review or a pair programming session. See what happens.</p><p>And if you’re a candidate going through this right now — I get it. The system is frustrating. Prepare as best you can, but don’t let a bad interview outcome define your self-worth. The interview is broken, not you.</p><h3>Final Thoughts</h3><p>I started thinking about this topic long before I built <a href="https://github.com/GetEzzi/ezzi-app">Ezzi</a>. The frustration of going through hiring processes that felt disconnected from real work is something almost every developer I know shares. Building <a href="https://github.com/GetEzzi/ezzi-app">Ezzi</a> was one response to that frustration. 
Writing this article is another.</p><p>The tech industry prides itself on innovation and disruption. We’ll reinvent everything from how people order food to how companies deploy code. But we’ve been interviewing engineers the same way for over a decade. That’s not tradition — that’s inertia.</p><p>Something needs to change. I don’t have all the answers, but I know the current system isn’t one of them.</p><p>I hope this was helpful. Good luck, and happy engineering!</p><p>If you want to check out the project that came from this frustration, Ezzi is open source on GitHub: <a href="https://github.com/GetEzzi/ezzi-app">github.com/GetEzzi/ezzi-app</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8cfcf8f64b15" width="1" height="1" alt=""><hr><p><a href="https://levelup.gitconnected.com/why-technical-interviews-are-broken-and-what-we-can-do-about-it-8cfcf8f64b15">Why Technical Interviews Are Broken (And What We Can Do About It)</a> was originally published in <a href="https://levelup.gitconnected.com">Level Up Coding</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[What Vibe Coding Taught Me About Maintaining Someone Else’s AI-Generated Code]]></title>
            <link>https://levelup.gitconnected.com/what-vibe-coding-taught-me-about-maintaining-someone-elses-ai-generated-code-73ab88cf8208?source=rss-eb0dbd32d4c4------2</link>
            <guid isPermaLink="false">https://medium.com/p/73ab88cf8208</guid>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[vibe-coding]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[javascript]]></category>
            <dc:creator><![CDATA[Dmitry Khorev]]></dc:creator>
            <pubDate>Mon, 16 Feb 2026 19:40:34 GMT</pubDate>
            <atom:updated>2026-02-16T19:40:34.460Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/699/1*mKF9WCJsLjBED7hEDiRerg.png" /></figure><p><em>Real examples from an open-source Electron app that went viral — and what I learned refactoring it.</em></p><p><strong>TL;DR — </strong>I forked an open-source Electron project called Interview Coder to build my own app, <a href="https://getezzi.com/">Ezzi</a>. The codebase was clearly vibe coded — functional but fragile. I found security holes, any types everywhere, 700-line monolithic files, and client-side credit management that should have been on the server. This article walks through real code examples of what I found and the patterns you should watch for when inheriting AI-generated code.</p><h3>The Rise of Vibe Coding</h3><p>“Vibe coding” is the new hot thing. You open Cursor or Claude, describe what you want, and let the AI write most of the code. Ship fast, iterate faster. The results can be impressive — entire apps built in a weekend.</p><p>But here’s what nobody talks about: what happens when someone else has to maintain that code?</p><p>I found out the hard way. In early 2025, I forked a project called Interview Coder — an Electron app that had blown up online. The concept was brilliant (an invisible overlay for coding interviews), and it worked. The creator had open-sourced it under MIT license. Perfect starting point for my own project, <a href="https://github.com/GetEzzi/ezzi-app">Ezzi</a>.</p><p>Except the codebase was a mess.</p><p>Not broken. Not non-functional. It ran. It did what it promised. But the internals told a different story — one that I think is becoming increasingly common as more projects get built with AI assistance and less human oversight.</p><h3>What I Actually Found</h3><p>Let me walk you through real examples. These are not hypothetical “bad code smells” from a textbook. This is actual code from a project with thousands of GitHub stars.</p><h4>1. 
The “any” Epidemic</h4><p>The first thing I noticed was TypeScript being used as if it were JavaScript with extra steps. The type system was barely utilized.</p><p>Here’s the global type definitions file in its entirety:</p><pre>// src/types/global.d.ts<br>interface Window {<br> __IS_INITIALIZED__: boolean<br> __CREDITS__: number<br> __LANGUAGE__: string<br> __AUTH_TOKEN__: string | null<br> supabase: any // Replace with proper Supabase client type if needed<br> electron: any // Replace with proper Electron type if needed<br> electronAPI: any // Replace with proper Electron API type if needed<br>}</pre><p>Three properties typed as any with comments that literally say “Replace with proper type if needed.” That’s AI placeholder text that was never cleaned up. The AI generated a TODO and the developer shipped it.</p><p>The pattern continues in the Electron API definitions:</p><pre>// src/types/electron.d.ts<br>onDebugSuccess: (callback: (data: any) =&gt; void) =&gt; () =&gt; void<br>onProblemExtracted: (callback: (data: any) =&gt; void) =&gt; () =&gt; void<br>onSolutionSuccess: (callback: (data: any) =&gt; void) =&gt; () =&gt; void<br>onUpdateAvailable: (callback: (info: any) =&gt; void) =&gt; () =&gt; void<br>onUpdateDownloaded: (callback: (info: any) =&gt; void) =&gt; () =&gt; void</pre><p>Every single callback parameter is any. The data flowing through the entire IPC layer — the communication bridge between the Electron main process and the renderer — has zero type safety. You could send anything and TypeScript would shrug.</p><p>And the core state management in the main process:</p><pre>// electron/main.ts<br>function getProblemInfo(): any {<br> return state.problemInfo<br>}<br>function setProblemInfo(problemInfo: any): void {<br> state.problemInfo = problemInfo<br>}</pre><p>The state object itself had problemInfo: null as any as its definition. 
This is the central piece of data the app revolves around — the extracted problem from screenshots — and it has no type definition. Any code that reads this data is flying blind.</p><p><strong>The lesson:</strong> When AI writes TypeScript, it often defaults to any as a shortcut. It works, it compiles, and the AI moves on to the next function. But it defeats the entire purpose of using TypeScript. If you’re inheriting a codebase like this, search for any first. The count will tell you a lot about how much the original developer reviewed the AI’s output.</p><h4>2. Security That Made Me Uncomfortable</h4><p>This was the part that genuinely alarmed me.</p><p>The app stored the auth token in a global window object:</p><pre>// src/App.tsx<br>if (data?.session?.access_token) {<br> window.__AUTH_TOKEN__ = data.session.access_token<br>}</pre><p>And then the Electron main process would retrieve it by executing JavaScript in the renderer:</p><pre>// electron/ProcessingHelper.ts<br>private async getAuthToken(): Promise&lt;string | null&gt; {<br> const mainWindow = this.deps.getMainWindow()<br> try {<br>   await this.waitForInitialization(mainWindow)<br>   const token = await mainWindow.webContents.executeJavaScript(<br>     &quot;window.__AUTH_TOKEN__&quot;<br>   )<br> return token<br> } catch (error) {<br>   console.error(&quot;Error getting auth token:&quot;, error)<br>   return null<br> }<br>}</pre><p>executeJavaScript(“window.__AUTH_TOKEN__”) from the main process. This is the Electron equivalent of reaching into someone’s pocket. It works, but it bypasses the entire security model that Electron’s contextBridge was designed to enforce. The proper approach is IPC messaging through the preload script.</p><p>And then there was the credit management. 
Subscription credits were subtracted on the client:</p><pre>// src/App.tsx<br>const { data: updatedSubscription, error } = await supabase<br> .from(&quot;subscriptions&quot;)<br> .update({ credits: currentSubscription.credits - 1 })<br> .eq(&quot;user_id&quot;, user.id)<br> .select(&quot;credits&quot;)<br> .single()</pre><p>The client fetches the current credit count, subtracts one, and writes it back. Without Row Level Security properly configured, anyone could modify the query to set their credits to whatever they wanted. Or skip the subtraction entirely. This is business logic that belongs on a server, behind authentication, with proper validation.</p><p><strong>The lesson: </strong>AI tools don’t think about security architecture. They produce code that works for the happy path. When you see Supabase queries modifying sensitive data directly from the client, or auth tokens passed between processes through globals instead of proper IPC — these are not style issues. These are design flaws that undermine your entire trust model.</p><h4>3. Monolithic Files</h4><p>AI tends to keep adding code to whatever file it’s currently working in. The result is a handful of massive files that do everything.</p><p>ProcessingHelper.ts — 700+ lines. One class handling: credential retrieval, language detection, auth token management, screenshot processing, solution generation, debug processing, request cancellation, and error handling for all of the above.</p><p>App.tsx — 700+ lines. The root React component that manages: authentication forms, subscription state, credit management, Supabase real-time subscriptions, toast notifications, language preferences, and initialization logic.</p><p>main.ts — 660+ lines. 
The Electron main process containing: application state, window management, helper initialization, auth callback handling, window movement calculations, screenshot delegation, IPC setup, and environment variable loading.</p><p>Three files, over 2,000 lines, doing virtually everything the app does.</p><p>The ProcessingHelper class is a good example of the problem. Here’s its credit checking method:</p><pre>// electron/ProcessingHelper.ts<br>private async getCredits(): Promise&lt;number&gt; {<br>  const mainWindow = this.deps.getMainWindow()<br>  if (!mainWindow) return 0<br><br>  try {<br>    await this.waitForInitialization(mainWindow)<br>    const credits = await mainWindow.webContents.executeJavaScript(<br>      &quot;window.__CREDITS__&quot;<br>    )<br><br>    if (<br>      typeof credits !== &quot;number&quot; ||<br>      credits === undefined ||<br>      credits === null<br>    ) {<br>      console.warn(&quot;Credits not properly initialized&quot;)<br>      return 0<br>    }<br><br>    return credits<br>  } catch (error) {<br>    console.error(&quot;Error getting credits:&quot;, error)<br>    return 0<br>  }<br>}</pre><p>Look at that type check: typeof credits !== “number” || credits === undefined || credits === null. Since typeof undefined and typeof null are never “number”, the first condition already rules out both, so the undefined and null checks can never change the outcome. This reads like AI pattern-matching from multiple examples without understanding the logic. It’s defensive coding that defends against nothing.</p><p>The same file has identical patterns for getLanguage() and getAuthToken() — each method polling the renderer’s global state through executeJavaScript. Three methods, same structure, same problems, no abstraction.</p><p><strong>The lesson:</strong> If your inherited codebase has a few files that are 500+ lines each, that’s a strong signal of AI-generated code that was never decomposed. AI doesn’t refactor proactively. It just keeps adding functions to the current file until you tell it to stop.</p><h4>4. 
The “window.__GLOBAL__” Pattern</h4><p>Instead of using React Context, a state management library, or even Electron’s IPC properly, the app synchronized state through global window variables:</p><pre>// src/App.tsx - setting global state<br>const updateCredits = useCallback((newCredits: number) =&gt; {<br>  setCredits(newCredits)          // React state<br>  window.__CREDITS__ = newCredits // also global state<br>}, [])<br><br>const updateLanguage = useCallback((newLanguage: string) =&gt; {<br>  setCurrentLanguage(newLanguage)   // React state<br>  window.__LANGUAGE__ = newLanguage // also global state<br>}, [])<br><br>const markInitialized = useCallback(() =&gt; {<br>  setIsInitialized(true)            // React state<br>  window.__IS_INITIALIZED__ = true  // also global state<br>}, [])</pre><p>Every state update writes to two places: React’s state and a global window property. Two sources of truth. If they ever get out of sync — and they will — good luck debugging which one is correct.</p><p>The reason for the duplication is that the Electron main process needed access to these values. But instead of using IPC (which Electron is designed for), the main process reaches into the renderer via executeJavaScript:</p><pre>// electron/ProcessingHelper.ts<br>const isInitialized = await mainWindow.webContents.executeJavaScript(<br>  &quot;window.__IS_INITIALIZED__&quot;<br>)<br>const credits = await mainWindow.webContents.executeJavaScript(<br>  &quot;window.__CREDITS__&quot;<br>)</pre><p>This creates a tight coupling between the main process and the renderer’s implementation details. Change how React manages state, and the main process breaks. Rename a window variable, and nothing will tell you at compile time.</p><p><strong>The lesson:</strong> AI tools often reach for the simplest working solution. Global state is simpler than proper IPC. It works in the demo, it passes a basic test, and the AI moves on. 
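</p><p>For contrast, the design Electron intends keeps one typed source of truth in the main process and lets the renderer ask for it over IPC. Below is a minimal sketch: the names (AppState, state:get-credits) are my own, and the Electron wiring is shown in comments so the snippet stays dependency-free.</p>

```typescript
// Single typed source of truth in the main process, replacing the
// window.__CREDITS__ / executeJavaScript round-trip. Names are illustrative,
// not from the original codebase.
interface AppState {
  credits: number;
  language: string;
  initialized: boolean;
}

const state: AppState = { credits: 0, language: "python", initialized: false };

export function setCredits(credits: number): void {
  state.credits = credits;
}

// The main process reads its own state directly instead of polling the
// renderer's globals with executeJavaScript.
export function getCredits(): number {
  return state.credits;
}

// Real Electron wiring (shown as comments, since this sketch runs without Electron):
//   main:     ipcMain.handle("state:get-credits", () => getCredits());
//   preload:  contextBridge.exposeInMainWorld("api", {
//               getCredits: () => ipcRenderer.invoke("state:get-credits"),
//             });
//   renderer: const credits = await window.api.getCredits();
```

<p>Now there is exactly one copy of the truth, changing the AppState shape fails at compile time instead of desynchronizing silently, and the renderer never needs write access to main-process state.</p><p>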
Watch for window.__anything__ patterns — they’re a sign that cross-process communication was hacked together rather than designed.</p><h4>5. Duplicated Code Everywhere</h4><p>AI tools don’t maintain awareness of what already exists in the codebase. If you ask for similar functionality in a different component, you get a fresh copy.</p><p>The sign-out logic was duplicated across two components, character for character:</p><pre>// src/components/Solutions/SolutionCommands.tsx<br>const handleSignOut = async () =&gt; {<br>  try {<br>    localStorage.clear()<br>    sessionStorage.clear()<br>    const { error } = await supabase.auth.signOut()<br>    if (error) throw error<br>  } catch (err) {<br>    console.error(&quot;Error signing out:&quot;, err)<br>  }<br>}<br><br>// src/components/Queue/QueueCommands.tsx - identical copy<br>const handleSignOut = async () =&gt; {<br>  try {<br>    localStorage.clear()<br>    sessionStorage.clear()<br>    const { error } = await supabase.auth.signOut()<br>    if (error) throw error<br>  } catch (err) {<br>    console.error(&quot;Error signing out:&quot;, err)<br>  }<br>}</pre><p>Same function, same logic, two files. If you need to change the sign-out behavior (and I did — I stripped out Supabase from the client entirely), you have to find and update every copy.</p><p>The same pattern repeated with screenshot mapping logic, tooltip visibility handling, and error display components. Each duplicated 2–3 times across the codebase.</p><p><strong>The lesson:</strong> After inheriting AI-generated code, search for duplicated blocks. The AI doesn’t know (or care) that it already wrote the same function elsewhere. Deduplication is one of the first refactoring passes you should do.</p><h4>6. 
Error Handling as an Afterthought</h4><p>The error handling throughout the codebase followed a consistent pattern: catch the error, log it, move on.</p><pre>// src/App.tsx<br>checkExistingSession()  // async function called without await, errors lost</pre><p>This is a fire-and-forget call to an async function. If it throws, nobody catches it. The user won’t see an error, and the app might be in a half-initialized state.</p><p>The ProcessingHelper had nested try-catch blocks where errors were typed as any:</p><pre>// electron/ProcessingHelper.ts<br>} catch (error: any) {<br>  mainWindow.webContents.send(<br>    this.deps.PROCESSING_EVENTS.INITIAL_SOLUTION_ERROR,<br>    error  // sending the raw error object through IPC<br>  )<br>  console.error(&quot;Processing error:&quot;, error)<br>  if (axios.isCancel(error)) {<br>    mainWindow.webContents.send(<br>      this.deps.PROCESSING_EVENTS.INITIAL_SOLUTION_ERROR,<br>      &quot;Processing was canceled by the user.&quot;<br>    )<br>  } else {<br>    mainWindow.webContents.send(<br>      this.deps.PROCESSING_EVENTS.INITIAL_SOLUTION_ERROR,<br>      error.message || &quot;Server error. Please try again.&quot;<br>    )<br>  }<br>}</pre><p>Three things wrong here. First, the raw error object is sent through IPC before the type check — so the renderer gets an unserialized error object. Then, depending on whether it’s a cancellation or not, a second message is sent with a proper string. The renderer now gets two error events for one failure. Second, error: any means no type narrowing. 
Third, the error message fallback assumes error.message exists, which isn’t guaranteed for any.</p><p>And then there was a typo in the event constants that could cause silent failures:</p><pre>// electron/preload.ts<br>UNAUTHORIZED: &quot;procesing-unauthorized&quot;,  // &quot;procesing&quot; - missing an &#39;s&#39;</pre><p>While main.ts had:</p><pre>// electron/main.ts<br>UNAUTHORIZED: &quot;processing-unauthorized&quot;,  // correct spelling</pre><p>Different strings for the same event. The preload listens for “procesing-unauthorized” but main sends “processing-unauthorized”. The unauthorized handler would never fire.</p><p><strong>The lesson:</strong> AI-generated error handling often looks correct at a glance but falls apart under scrutiny. Check for: unhandled promise rejections, catch (error: any) blocks, inconsistent error event naming, and duplicate error sends.</p><h3>What I Did About It</h3><p>When I started building <a href="https://github.com/GetEzzi/ezzi-app">Ezzi</a> from this fork, my first month was almost entirely refactoring. Here’s the rough order of operations that worked for me:</p><p>1. <strong>Security audit first.</strong> I stripped out the client-side credit management, moved sensitive operations to the server, and replaced the window.__AUTH_TOKEN__ pattern with proper IPC. Security architecture issues are the ones that can actually hurt users.</p><p>2. <strong>Type safety pass.</strong> I searched for every any and replaced each one with a proper interface. This immediately surfaced several bugs where functions were receiving data in unexpected shapes.</p><p>3. <strong>Decompose monolithic files.</strong> I broke ProcessingHelper.ts into smaller, focused modules. Same with App.tsx — extracted the auth form, subscription management, and initialization logic into their own components.</p><p>4. <strong>Deduplicate.</strong> Extracted shared logic into utility functions and custom hooks.
The sign-out logic became a single useSignOut hook.</p><p>5. <strong>Clean up error handling.</strong> Typed errors properly, removed duplicate error sends, fixed the event name typos, and made sure async functions were properly awaited.</p><p>It wasn’t glamorous work. But it was necessary. The app worked before I started, and it worked after. The difference is that now I can actually maintain it, extend it, and trust it.</p><h3>A Checklist for Inheriting AI-Generated Code</h3><p>If you’re about to fork or take over a vibe coded project, here’s what I’d check first:</p><p><strong>Security</strong><br>- Search for hardcoded API keys, URLs, and secrets<br>- Look for client-side operations that should be server-side (credit management, billing)<br>- Check how auth tokens are passed between processes<br>- Verify that sensitive business logic lives on the server, not in the client</p><p><strong>Type Safety</strong><br>- Count the any types — if there are more than a handful, expect problems<br>- Check if interfaces exist for the core data structures<br>- Look for AI placeholder comments like “Replace with proper type”</p><p><strong>Architecture</strong><br>- Check file sizes — files over 500 lines likely need decomposition<br>- Look for the window.__SOMETHING__ pattern<br>- Search for duplicated code blocks across components<br>- Verify that cross-process communication uses proper channels</p><p><strong>Error Handling</strong><br>- Search for catch (error: any) blocks<br>- Look for fire-and-forget async calls (async functions called without await)<br>- Check for duplicate error event sends<br>- Verify error event naming is consistent across files</p><h3>The Bigger Picture</h3><p>I’m not against vibe coding. I used AI assistants extensively while building <a href="https://github.com/GetEzzi/ezzi-app">Ezzi</a> — Cursor, Junie, Claude Code. 
They’re genuinely powerful tools that let you move fast.</p><p>But there’s a difference between using AI as a collaborator and using it as a replacement for understanding your own code. The original Interview Coder project shipped fast and went viral. That’s a success by many metrics. But the code underneath was a liability waiting to happen.</p><p>The AI doesn’t care about maintainability. It doesn’t think about who will read this code next month. It optimizes for “does it work right now” and moves on. That’s fine for prototypes and hackathons. It’s not fine for software that handles auth tokens and financial transactions.</p><p>If you’re vibe coding your own project, at least do a review pass before shipping. And if you’re inheriting someone else’s vibe coded project, budget serious time for refactoring before you start building on top of it.</p><p>Git is your friend. Commit often. And read the code the AI writes.</p><p>I hope this was helpful. Good luck, and happy engineering!</p><p>If you’re curious about the result of all this refactoring, Ezzi is open source on GitHub: <a href="https://github.com/GetEzzi/ezzi-app">github.com/GetEzzi/ezzi-app</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=73ab88cf8208" width="1" height="1" alt=""><hr><p><a href="https://levelup.gitconnected.com/what-vibe-coding-taught-me-about-maintaining-someone-elses-ai-generated-code-73ab88cf8208">What Vibe Coding Taught Me About Maintaining Someone Else’s AI-Generated Code</a> was originally published in <a href="https://levelup.gitconnected.com">Level Up Coding</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building Ezzi: My Journey Creating an Invisible Tech Interview Assistant (Now Open Source)]]></title>
            <link>https://levelup.gitconnected.com/building-ezzi-an-invisible-tech-interview-assistant-a1963a8fe0f3?source=rss-eb0dbd32d4c4------2</link>
            <guid isPermaLink="false">https://medium.com/p/a1963a8fe0f3</guid>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[react]]></category>
            <dc:creator><![CDATA[Dmitry Khorev]]></dc:creator>
            <pubDate>Mon, 21 Jul 2025 03:14:41 GMT</pubDate>
            <atom:updated>2025-07-21T03:14:41.172Z</atom:updated>
<content:encoded><![CDATA[<h4>From backend engineering to interview survival tools — reflections on shipping my first desktop AI app.</h4><p><strong>TL;DR — </strong>I built a desktop app called <a href="https://github.com/GetEzzi/ezzi-app"><strong>Ezzi</strong></a> (pronounced like “easy”) that acts as an <strong>invisible overlay</strong> during coding interviews. It gives you <strong>AI-generated solutions</strong> (even in your native language) without the interviewer noticing. I forked an early version of Interview Coder to create my own spin on the idea. I’m excited to <strong>open-source</strong> Ezzi so others can use it and contribute to its future.</p><h3>The Problem with Interviews</h3><p>Many developers get burnt out by the <strong>broken hiring process</strong>. Technical job searches drag on with endless coding tests and whiteboard sessions, which can chip away at your <strong>confidence</strong>.</p><p>I often felt that acing these interviews was as much about endurance and luck as about actual skill. Even after countless evenings on LeetCode, we’d still get blindsided by questions under pressure. It struck me that the process is, frankly, broken - and I wasn’t alone in feeling this. I started wondering: What if I had a “coach” by my side during these interviews? Not a person, but an AI assistant that could hint or even show a solution if I got stuck.</p><p>That’s when I discovered Interview Coder, an open source project that had blown up online in April 2025. It was vibe coded and honestly a mess, but it worked. The concept was brilliant: an invisible overlay that only the candidate can see (not the interviewer). However, the project soon went closed source and development stagnated. I decided to fork one of the earlier open source versions and build my own take on the idea.</p><p>This concept was proof of how flawed the system is: if people are resorting to stealth tools to get through interviews, maybe the interviews need to change.
But until they do, why not level the playing field a bit?</p><h3>Building Ezzi</h3><p>I set out to build a desktop app that could help with <strong>live interview support</strong>: if you choose, you can use it during a real interview (even while screen sharing) for <strong>undetectable assistance -</strong> a little safety net that only you can see.</p><p>The current version focuses on the core interview assistance functionality, with additional features like <strong>hint-only mode</strong>, LeetCode practice, and learning tools planned for the future.</p><p>I also had some personal motivations: after years as a backend engineer, I was itching to build a desktop application and learn something new. I’d never built an Electron app or done cross-platform UI work before. Plus, I was curious about the new wave of AI coding tools like Cursor, Claude, Codex, Junie. This project felt like the perfect sandbox to play with those while creating something potentially useful (or at least interesting!).</p><h3>What is Ezzi?</h3><p>In a nutshell, Ezzi (pronounced like “easy”) is an <strong>invisible overlay UI</strong> for coding interviews. It runs as a desktop app on your computer, sitting on top of your coding environment. When activated (via a <strong>hotkey</strong>), it shows an assistive panel <strong>only on your screen</strong>, not on the shared screen. The interviewer sees nothing unusual - just your coding window - while you see an extra panel with helpful info.</p><figure><img alt="Your typical screen share while doing an online DSA interview — that’s what the interviewer would see." src="https://cdn-images-1.medium.com/max/1024/1*81yvX0cpwqCht733_ogWHw.png" /><figcaption>Your typical screen share while doing an online DSA interview — that’s what the interviewer would see.</figcaption></figure><p>Under the hood, Ezzi uses AI (Claude 4 for now) to analyze the coding question you’re working on.
It can give you a step-by-step thought process, explain which DSA pattern or algorithm might apply, and generate a complete solution.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PxTdB1L82XLpCVThshpCGA.png" /><figcaption>That’s what you actually see with Ezzi.</figcaption></figure><h4>Key Features</h4><p><strong>Invisible UI</strong>: The overlay window is designed to be 99% invisible to screen-sharing tools.</p><p>It won’t show up on Zoom, Google Meet, or common interview platforms (they’ll just capture your code editor, not the Ezzi window). This “stealth mode” means you can use it in live interviews without the tool itself being detected.</p><p><strong>AI-Generated Solutions</strong>: Ezzi connects to AI models to provide context-aware help.</p><p><strong>Multi-language Support</strong>: Delivers thoughts and hints in your preferred spoken language to reduce cognitive load during interviews.</p><p><strong>Minimal Footprint &amp; Undetectability</strong>: It doesn’t inject anything into the browser or interfere with the code editor; it’s a separate floating window. You toggle it with a hotkey, and it can overlay on top of any browser, IDE or coding pad. When off, it’s completely out of the way. Hotkeys look like a no-op to your coding environment.</p><figure><img alt="Demo of Ezzi in action." src="https://cdn-images-1.medium.com/max/800/1*2Abu3w4dZev0sh8jM8xZRQ.gif" /><figcaption>Demo of Ezzi in action.</figcaption></figure><p>In short, Ezzi turns those nerve-racking DSA interviews into a (slightly) more open-book experience. It’s like having an expert whispering hints in your ear, or a safety net ready to catch you if you get stuck. And since Ezzi is open source, you can build and customize this assistant yourself, or even run your own AI backend for it.</p><h3>Building a Desktop App as a Backend Engineer</h3><p>Building Ezzi was as much a personal learning journey as it was a software project. 
I spend most of my days in backend land - databases, APIs, cloud services - not building UIs. So, choosing Electron + React for a desktop app was me stepping out of my comfort zone.</p><p>Electron allowed me to write cross-platform desktop software in TypeScript (essentially creating a mini web app that runs on Windows, Mac, or Linux). With this, I could ensure anyone can use Ezzi regardless of their OS.</p><h4><strong>Tech Stack at a Glance</strong></h4><p><strong>Electron &amp; React:</strong> Electron provides the container (Chromium + Node runtime) to create the desktop app, and React + Tailwind CSS makes it easier to build a dynamic UI. The overlay’s interface (the hints panel, etc.) is just a web page rendered in a transparent Electron window.</p><p><strong>Node.js backend:</strong> I built a companion backend service with NestJS. This handles things like processing the screenshots, communicating with the AI models, and any heavy logic that I didn’t want to run in the client for security. NestJS gives a structured way to build out a REST API. For example, when you hit “Solve” in Ezzi, the app sends the data to a NestJS API which then calls an AI service to retrieve hints or solutions, and sends it back to the app.</p><p><strong>Infrastructure: </strong>the rest of the iceberg is here - databases, GitHub actions, deploy pipelines, Stripe integration, AI integration and so on.</p><figure><img alt="High-level overview of Ezzi’s architecture." src="https://cdn-images-1.medium.com/max/1024/1*rtKAN55NFm_rkkZg2qI_DQ.png" /><figcaption>High-level overview of Ezzi’s architecture.</figcaption></figure><h4>How It Works</h4><p>The Electron + React frontend communicates with a backend, which interacts with external AI APIs (like OpenAI’s GPT or Anthropic’s Claude).</p><p>In self-hosted mode, the user runs his own backend server; in cloud mode, Ezzi’s app talks to a managed server.</p><p>I made the overlay window click-through in solution mode by default. 
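<p>Under the hood this relies on Electron’s <code>BrowserWindow.setIgnoreMouseEvents(ignore, options)</code>: passing <code>{ forward: true }</code> lets mouse-move events still reach the page while clicks fall through. A sketch of the toggle (the pure helper is my own framing, so the logic can be shown and tested without a running Electron process):</p>

```typescript
// Computes the arguments for BrowserWindow.setIgnoreMouseEvents().
// In solution mode the overlay ignores clicks (they fall through to the
// IDE/browser underneath) but still receives mouse-move events via `forward`.
interface ClickThroughArgs {
  ignore: boolean;
  options?: { forward: boolean };
}

function clickThroughArgs(solutionMode: boolean): ClickThroughArgs {
  return solutionMode
    ? { ignore: true, options: { forward: true } } // click-through, hover still works
    : { ignore: false };                           // overlay is interactive again
}

// In the Electron main process, roughly:
//   const args = clickThroughArgs(inSolutionMode);
//   overlayWindow.setIgnoreMouseEvents(args.ignore, args.options);
```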
This way, you’re not accidentally selecting text on the overlay when you mean to code. Clicks in the app area are never attributed to the Ezzi window, so your browser or IDE never loses focus. And when hidden, the app is truly hidden (not just minimized - it doesn’t appear on the taskbar or Dock).</p><h3>Development Journey (Forking, Fixing, and Learning)</h3><p>Ezzi’s development began as a fork of the Interview Coder codebase. Interview Coder, for context, was a closed-source (initially) app that went viral in late 2024 for offering undetectable AI help in interviews. Its creator eventually put the code on GitHub (with an open-source license) and even encouraged folks to fork it.</p><p>However, diving into that code was quite an eye-opener. The project was functional, but the internals were… let’s say rough around the edges. It felt like large parts had been generated by an AI (the original dev likely used Cursor or something similar). There were huge files with little structure, minimal comments, and some very quirky code paths. My first challenge was simply getting the code to run properly and understanding how everything connected.</p><p>The lack of clear structure meant I had to do a lot of refactoring from day one. Functions had names that didn’t always make sense, there were unused variables and dead code branches, and TypeScript types were applied loosely (with plenty of any types floating around). I spent a good amount of time enforcing type safety - enabling stricter TypeScript config, adding interfaces for data objects (like the shape of a captured “problem” or a “solution” response), and removing redundant code. 
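<p>As an example of that pass, the untyped accessors (shown in the snippet below) can be pinned down like this (the field names here are illustrative guesses, not the actual Ezzi types):</p>

```typescript
// Illustrative interfaces for the captured "problem" and the AI "solution";
// the real Ezzi field names may differ. These replace the `any`-typed
// accessors, so the compiler rejects calls that pass the wrong shape.
interface ProblemInfo {
  statement: string;
  language: string;      // target programming language for the solution
  screenshots: string[]; // paths of the captured screenshots
}

interface SolutionResponse {
  thoughts: string[];    // step-by-step reasoning shown in the overlay
  code: string;
}

let problemInfo: ProblemInfo | null = null;

// Was: function setProblemInfo(problemInfo: any): void
function setProblemInfo(info: ProblemInfo): void {
  problemInfo = info;
}

// Was: function getProblemInfo(): any
function getProblemInfo(): ProblemInfo | null {
  return problemInfo;
}
```

<p>With interfaces like these in place, a call site that passes a mismatched object becomes a compile error instead of a runtime surprise.</p>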
This process flushed out many hidden bugs.</p><pre>// &quot;any&quot; typehints everywhere<br>function getProblemInfo(): any { // !!!<br>  return state.problemInfo<br>}<br><br>function setProblemInfo(problemInfo: any): void { // !!!<br>  state.problemInfo = problemInfo<br>}<br><br>// or no types at all<br>app.on(&quot;open-url&quot;, (event, url) =&gt; { // !!!<br>  console.log(&quot;open-url event received:&quot;, url)<br>  event.preventDefault()<br>  if (url.startsWith(&quot;interview-coder://&quot;)) {<br>    handleAuthCallback(url, state.mainWindow)<br>  }<br>})<br><br>// auth token is stored in the window object of the browser<br>const checkExistingSession = async () =&gt; {<br>  const { data: { session } } = await supabase.auth.getSession()<br>  if (session?.access_token) {<br>    window.__AUTH_TOKEN__ = session.access_token // !!!<br>    setUser(session.user)<br>    setLoading(false)<br>  }<br>}</pre><p>I also took the opportunity to try out AI coding assistants during development. I alternated between Cursor, JetBrains Junie, and Claude Code as I untangled the codebase and built new features (&quot;vibe coding&quot; kinda, but with prior software engineering experience).</p><p>However, it wasn’t all magic - the AI sometimes produced messy or buggy code. It’s a bittersweet feeling when an AI speeds you to a solution that almost works, and then you spend time digging through edge cases to fix it. This was a big part of the development journey: embracing these tools for speed, but then stabilizing and refactoring the code to ensure Ezzi runs reliably.</p><p>Git is your friend.</p><h4><strong>Security and Privacy Enhancements</strong></h4><p>One alarming discovery was that the original app used Supabase endpoints without row-level security and hard-coded secrets that were never meant for production. It was possible to dump user data without proper auth.
You could get solutions without a subscription, or even generate a subscription for yourself, all without any authentication.</p><pre>// App.tsx - subtracting user credits on the client<br>const { data: updatedSubscription, error } = await supabase<br>    .from(&quot;subscriptions&quot;)<br>    .update({ credits: currentSubscription.credits - 1 })<br>    .eq(&quot;user_id&quot;, user.id)<br>    .select(&quot;credits&quot;)<br>    .single()</pre><pre>// App.tsx - checking user subscriptions<br>// removing &quot;.eq(&quot;user_id&quot;, user.id)&quot; allowed fetching all users<br>setSubscriptionLoading(true)<br>try {<br>  const { data: subscription } = await supabase<br>    .from(&quot;subscriptions&quot;)<br>    .select(&quot;*, credits, preferred_language&quot;)<br>    .eq(&quot;user_id&quot;, user.id)<br>    .maybeSingle()<br><br>  setIsSubscribed(!!subscription)<br>  setCredits(subscription?.credits ?? 0)<br>  if (subscription?.preferred_language) {<br>    setCurrentLanguage(subscription.preferred_language)<br>    window.__LANGUAGE__ = subscription.preferred_language<br>  }<br>} finally {<br>  setSubscriptionLoading(false)<br>}</pre><p>Knowing this, I prioritized security fixes: I stripped out any “must be backend” endpoints, secured the remaining ones with proper authentication, and moved all variables to .env files not tracked in git.</p><p>The bottom line is that no personal data or API keys are ever exposed in the repo or in network calls. This was a non-negotiable improvement for me after seeing what happened with the original.</p><p>I even considered the threat of the app itself being detected by anti-cheat measures, so I made it possible to change window title, process name, and other identifiable strings during build time.</p><h4><strong>Learning Desktop Packaging</strong></h4><p>As a backend/web guy, one of the steepest learning curves for me was packaging and distributing a desktop application.
During development I was running Ezzi in dev mode with the npm run dev command.</p><p>But to share it with others, I needed installers. I learned to use Electron Builder to create an installer for Windows (which packages the app and the Node runtime), a DMG for macOS, and even AppImage for Linux. Each had its quirks.</p><p>For Windows, I went through code signing - a process where you need a certificate to sign your executable so Windows will trust it.</p><p>For macOS, I navigated notarization (Apple’s security checks for distributed apps) and dealt with the screen recording permission pop-up (which is critical for the user to accept, otherwise the app can’t see the screen). These were all new concepts to me.</p><p>Seeing friends download the app and have it “just work” felt like magic - it’s easy to take for granted how much packaging work goes into software until you do it yourself!</p><blockquote>Right now I do not provide packaged versions; instead, the user can download the code and build it for their own OS, modifying the app title and/or icon for extra invisibility. This eliminates the code-signing problem entirely: you can run anything you build locally.</blockquote><h3>Self-Hosting vs Cloud: Choose Your Own Adventure</h3><p>One big decision I made was to keep Ezzi’s core functionality free and open-source, while offering a convenient <strong>managed cloud</strong> service as well. The entire frontend (Electron app) code is open for anyone to inspect, build, and deploy on their own. If you value privacy or just like to tinker, you can run Ezzi 100% <strong>self-hosted -</strong> no subscription needed, just your own API keys for the AI services. On the other hand, if you don’t want to deal with setting up servers or managing API quotas, you can opt to use the Ezzi Cloud (a hosted backend that I run).
Both modes give you the same Ezzi app experience, but there are trade-offs.</p><h4>Self-Hosted Mode (DIY)</h4><p>You compile the open-source Ezzi app and point it to your own backend. This could be a server you run at home or on AWS, or even your local machine. You’ll need to set up the backend (your favorite stack) and get API keys from an AI provider.</p><p>The upside is it’s <strong>free</strong> (aside from any API costs you incur) and fully in your control. You can even modify the code - want to integrate a new model or change the UI? Go for it. The downside is the initial setup effort.</p><p>By my estimates, it takes around an hour to get everything configured and tested if you’re starting from scratch.</p><p>I’ve written detailed docs to guide you, but yes, self-hosting is for the more tech-savvy or motivated users.</p><h4>Ezzi Cloud (Managed Backend)</h4><p>This is the “plug and play” option. I run the backend for you in the cloud. All you do is build the Ezzi frontend and sign in to the app; it then talks to the cloud servers. You get access to the premium AI model (Claude 4) without needing your own backend.</p><p>The cloud service is how I plan to cover server costs: it’s a paid subscription model, but you can just pay for a short period if you only need it for a couple of interviews. If that’s not your cup of tea, no worries - stick to self-hosted, and you’ll still have full access to all of Ezzi’s features.</p><p>Ultimately, the existence of the cloud option doesn’t change the open-source nature of Ezzi. The frontend is MIT-licensed. I want users to trust this tool, and part of that trust is knowing exactly what the code is doing.</p><h3>What I’ve Learned (and What’s Next)</h3><p>Writing Ezzi has been a wild ride.
On the technical side, I went from 0 to 100 with desktop app dev, grappling with packaging Electron apps for three OSes, dealing with weird bugs, and integrating everything from database listeners to AI APIs.</p><p>I also learned that <strong>AI coding assistants are not a replacement</strong> for understanding your own codebase - they’re tools, not teammates. I had to rewrite large chunks of AI-generated code to make it maintainable. In a way, cleaning up the codebase was cathartic and made me a better developer.</p><p>From the perspective of the tech industry, building (and now releasing) Ezzi has reinforced my belief that technical interviews need reform. When tools like this exist - and people feel compelled to use them - it’s a sign that the interview format isn’t working. My stance is that learning and cheating often use the same tools; the difference is intent.</p><p>I built Ezzi primarily as a learning aid and confidence booster. Yes, it can be used to cheat in an interview - and I’m not here to moralize or encourage that - but I know realistically some will use it that way.</p><p>My hope is that interviewers and companies will focus more on practical assessments and maybe even embrace AI assistance as a positive (after all, a developer who can effectively use AI tools might be more productive on the job). If Ezzi sparks that conversation, I’d consider it a success beyond just the software.</p><p>On a lighter note, I also got a crash course in what not to do when launching a side project. Don’t leave keys in your repo 😅, don’t assume “no one will ever try X” because someone will, and definitely do invest in good documentation early. I spent some extra time writing docs so that new users and contributors can get onboarded easily, and that has already paid off.</p><p>So, what’s next for Ezzi? Now that it’s open source, I’m inviting the community to help shape its future. 
The roadmap includes several exciting features: <strong>hint-only mode</strong> for gradual assistance, <strong>LeetCode</strong> crusher for focused <strong>practice sessions</strong>, <strong>quiz mode</strong> for quick multiple-choice questions, and <strong>take-home algorithm support</strong>.</p><p>I’m sure the community will have even more creative ideas. If you’re an engineer who finds this project interesting, check out <a href="https://github.com/GetEzzi/ezzi-app">the GitHub repo</a>.</p><h3>How to Get Ezzi</h3><pre># Build &amp; run Ezzi in dev mode (Mac / Linux)<br><br>git clone https://github.com/GetEzzi/ezzi-app.git<br><br>cd ezzi-app &amp;&amp; npm i &amp;&amp; npm run dev</pre><p>Ezzi’s source code is available on GitHub, and the README has instructions for both using the managed backend and running your own.</p><p>To try it out:</p><ul><li>Download the Ezzi app source code from GitHub. It’s an Electron app, so one codebase builds for any OS.</li><li>Decide on self-hosted vs cloud - if self-hosting, follow the docs to deploy the backend (choose your favorite stack). If using cloud, simply create a free account via the app.</li><li>Launch the app and log in or connect - on self-host, you’ll enter the URL of your backend and your API key. On cloud, just log in and choose a plan (there’s a free trial mode as well).</li><li>Start a practice session - maybe open a LeetCode problem or some coding prompt, and hit the hotkeys to capture it. Play around with getting hints and solutions. The UI has tooltips explaining each button and shortcut.</li></ul><p>I’m both excited and nervous to see Ezzi out in the wild. Excited because I truly believe it can help people learn faster and feel more confident in interviews.</p><p>Nervous because, well, anything touching the interview-cheating topic can be controversial.
But open-sourcing it is the right move - it brings transparency and opens it up for improvement.</p><p>In closing, if you’ve ever felt that knot in your stomach before a technical interview, or found yourself blanking on a binary tree problem you know you studied, Ezzi might be the kind of backup you’d want. It’s like an AI pair-programmer for your interviews - there if you need it. I built it out of frustration, curiosity, and a desire to make the tech interview grind a little more human (ironically, by using AI).</p><p>GitHub: <a href="https://github.com/GetEzzi/ezzi-app">github.com/GetEzzi/ezzi-app</a></p><p>Website: <a href="http://getezzi.com">getezzi.com</a></p><p>I hope this was helpful. Good luck, and happy engineering!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a1963a8fe0f3" width="1" height="1" alt=""><hr><p><a href="https://levelup.gitconnected.com/building-ezzi-an-invisible-tech-interview-assistant-a1963a8fe0f3">Building Ezzi: My Journey Creating an Invisible Tech Interview Assistant (Now Open Source)</a> was originally published in <a href="https://levelup.gitconnected.com">Level Up Coding</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Laravel — Discover Application Layers for Testing]]></title>
            <link>https://levelup.gitconnected.com/laravel-discover-application-layers-for-testing-42c9f35ddea6?source=rss-eb0dbd32d4c4------2</link>
            <guid isPermaLink="false">https://medium.com/p/42c9f35ddea6</guid>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[php]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[laravel]]></category>
            <dc:creator><![CDATA[Dmitry Khorev]]></dc:creator>
            <pubDate>Fri, 10 Nov 2023 14:05:20 GMT</pubDate>
            <atom:updated>2023-11-10T14:05:20.316Z</atom:updated>
            <content:encoded><![CDATA[<h3>Laravel — Discover Application Layers for Testing</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/1*r9ht8WQHHzNyTIlnO4b-9Q.png" /><figcaption>Laravel — Discover Application Layers for Testing</figcaption></figure><p>In the domain of web development, particularly within the Laravel framework, the structure of an application is intrinsically layered.</p><p>Understanding these layers is pivotal for Laravel developers, as they dictate how applications behave, scale, and maintain efficiency. The granularity of these layers is not arbitrary; it’s chosen based on the project’s scope, the developers’ proficiency, and the specific goals to be achieved.</p><p>Laravel, known for its elegant syntax and robust features, empowers developers to tailor their application architecture to the project’s demands.</p><h4><strong>Streamlining Development with Laravel</strong></h4><p>Developers can adopt a lean approach when crafting simpler applications or microservices with Laravel. In such scenarios, it’s acceptable to prioritize development speed over stringent adherence to certain “best practices.”</p><p>However, when scaling to a more monolithic structure, or when the application has to meet rigorous industry standards, Laravel’s architectural patterns offer the necessary sophistication. These standards are particularly relevant when considering data integrity and availability.</p><figure><img alt="Simple vs. Complex Laravel application layered architecture." src="https://cdn-images-1.medium.com/max/960/1*j9CW_QLc4kwID5u0V2Zg5A.png" /><figcaption>Simple vs. 
Complex Laravel application layered architecture.</figcaption></figure><p>Laravel applications typically consist of some of the following layers:</p><ol><li><strong>Model Layer</strong>: Central to Laravel’s MVC (Model-View-Controller) architecture, the Model Layer interfaces with the database using Eloquent ORM, which simplifies data handling by representing database tables as classes.</li><li><strong>Repository (or Interface) Layer</strong>: While not a default Laravel architecture, the Repository Pattern can be implemented to provide a further abstraction layer over Eloquent. This facilitates testing and swapping the ORM if needed.</li><li><strong>Service Layer (Actions, Queries, Commands)</strong>: encapsulates the application’s business logic, allowing interaction with the Model Layer or Data Transfer Objects (DTOs). Service classes in Laravel help in organizing business logic in a reusable and maintainable way.</li><li><strong>Controller Layer</strong>: is where HTTP requests are processed. Controllers in Laravel validate user input and leverage services to execute business logic, adhering to the principles of thin controllers and fat models.</li><li><strong>Middleware Layer</strong>: middleware intercepts HTTP requests to perform various tasks, such as authentication and logging, before the request hits the application or before the response is returned to the user.</li><li><strong>View Layer</strong>: is tailored for API responses, where it manages the structure and delivery of data to clients. Leveraging Laravel’s resource classes, it provides a fluent and flexible way to transform models into JSON. 
This careful orchestration ensures that clients receive well-formed and consistent data payloads, crucial for robust API design and client-side integration.</li><li><strong>Background Tasks Layer (Cron, Events/Listeners, Queue, Jobs)</strong>: scheduler, event system, and queue management handle background tasks, allowing developers to schedule commands, listen for events, and queue jobs for asynchronous processing.</li></ol><p>As we delve deeper into Laravel’s layered architecture, we will illuminate the roles and responsibilities of each layer and discuss the testing strategies that ensure their reliability and performance. By dissecting these layers, Laravel developers can build applications that are not only functionally rich but also scalable and maintainable.</p><h4>Model Layer</h4><figure><img alt="Model testing patterns" src="https://cdn-images-1.medium.com/max/844/1*b1RnRmx9hjiwEbUrLTK9BQ.png" /><figcaption>Model testing patterns.</figcaption></figure><p>While it might be tempting to think that Laravel’s Eloquent models don’t require much testing, having a comprehensive test suite for your models (and factories indirectly) can save you time and headaches in the long run.</p><p>Even though Laravel and the Eloquent ORM are robust, tests ensure that your additional configurations and relations are behaving as expected.</p><h4>Repository Layer</h4><p>Some engineers advocate for a repository layer as it can abstract the data access logic from the business logic. This layer acts as a bridge between the model and the business logic, which, in theory, would simplify migration were you to switch ORMs mid-project. 
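As a rough sketch of the abstraction just described (interface and class names are hypothetical, not taken from any real codebase), a repository might look like:

```php
<?php

declare(strict_types=1);

// Hypothetical repository abstraction over Eloquent. The interface is what
// the business logic depends on; the concrete class is the only place that
// knows about the ORM.
interface UserRepositoryInterface
{
    /** @return array<string, mixed>|null */
    public function findByEmail(string $email): ?array;
}

final class EloquentUserRepository implements UserRepositoryInterface
{
    public function findByEmail(string $email): ?array
    {
        // In a real Laravel app this would delegate to Eloquent, e.g.
        // return User::where('email', $email)->first()?->toArray();
        return null; // placeholder so the sketch stays self-contained
    }
}
```

Swapping ORMs would then mean writing a second implementation of the interface while its consumers stay untouched.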
However, it’s important to note that Laravel comes with a single ORM — Eloquent — and does not natively support or expect you to switch out of it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/834/1*bv2hE0XgR_akHJW2ESHuhA.png" /><figcaption>ORM options for Laravel.</figcaption></figure><p>The repository pattern is not without its merits and drawbacks. Here is a concise examination of its implications:</p><figure><img alt="Pros and cons of repository layer in Laravel." src="https://cdn-images-1.medium.com/max/1024/1*AbadGAtJ2ue9buIkCpGaJw.png" /><figcaption>Pros and cons of repository layer in Laravel.</figcaption></figure><p><strong>Advantages of a Repository Layer</strong></p><ol><li><strong>Decoupling</strong>: It separates the business logic from the data access layers, enhancing the maintainability of the codebase.</li><li><strong>Testability</strong>: By using repositories, you can mock dependencies and write more isolated and focused tests.</li><li><strong>Single Responsibility Principle</strong>: This design pattern is in line with the Single Responsibility Principle, ensuring each class has only one reason to change, thereby separating concerns effectively.</li></ol><p><strong>Disadvantages of a Repository Layer</strong></p><ol><li><strong>Complexity</strong>: Implementing this pattern can add unnecessary complexity to the application, which might be unwarranted for smaller projects.</li><li><strong>Leakage of Concerns</strong>: Fully abstracting the ORM is a challenge, and often Eloquent features end up being used directly, undermining the purpose of the pattern.</li><li><strong>Learning Curve</strong>: For those unfamiliar with the pattern, it introduces an additional layer of learning, atop mastering Laravel and Eloquent.</li><li><strong>Loss of ORM Features</strong>: Eloquent’s advanced features for handling relationships and other ORM-related tasks can be hindered by adding a repository layer.</li><li><strong>Potential 
Over-Engineering</strong>: For many applications, the capabilities of Eloquent alone are adequate, and a repository layer could be considered an overcomplication.</li></ol><p><strong>Weighing Your Options</strong></p><p>The decision to introduce a repository layer into a Laravel application should be weighed carefully. The perceived benefits of abstraction and testability must be balanced against the potential for increased complexity and the risk of losing out on some of the powerful features that Eloquent ORM provides.</p><p>For Laravel developers, the litmus test for introducing a repository layer is often the scale and expected growth of the application. Will the future of the application benefit from the abstraction, or will the immediate overhead outweigh the long-term benefits?</p><p>This decision must be informed by both the current state of your project and its trajectory. It is essential to consider your team’s familiarity with the Laravel framework and their ability to implement and maintain a repository pattern successfully.</p><h4>Service Layer</h4><p>As we delve deeper into the architectural components of Laravel, it becomes clear that the service layer holds a pivotal role. It is where the complex business logic and behavior of the application reside. Given its importance, the service layer is often a focus for extensive testing to guarantee the system operates as intended.</p><p>Testing in Laravel is an area where practicality should guide the process. Not every aspect requires the same level of scrutiny, but certain elements within the service layer demand thorough attention due to their critical impact on the application’s overall functionality and reliability.</p><p><strong>Prioritizing Tests for the Service Layer</strong></p><figure><img alt="Prioritizing Tests for the Service Layer in Laravel." 
src="https://cdn-images-1.medium.com/max/1024/1*8v-53VlLYh_FkJgaDnMlIw.png" /><figcaption>Prioritizing Tests for the Service Layer in Laravel.</figcaption></figure><p>When it comes to testing the service layer, there are key areas that you should prioritize to ensure a solid foundation for your application:</p><ol><li><strong>Core Business Logic</strong>: This is the heart of your service layer. Tests should confirm that all fundamental operations, calculations, and logical conditions are working correctly.</li><li><strong>Database Interactions</strong>: Given that services often handle data persistence, it’s crucial to test all Create, Read, Update, and Delete (CRUD) operations. These tests are vital for ensuring data integrity.</li><li><strong>Data Transformation</strong>: If your service layer is responsible for transforming data — whether from database models to Data Transfer Objects (DTOs) or vice versa — tests should verify that this transformation is accurate.</li><li><strong>Error Handling</strong>: Your service layer should gracefully handle errors and exceptional cases. Testing how the layer responds to unexpected scenarios is imperative to maintain stability.</li></ol><p><strong>Strategically Skipped Tests</strong></p><p>In testing, efficiency is as valuable as thoroughness. Knowing what to skip can save time and resources while still maintaining a high-quality test suite:</p><ol><li><strong>Integration with Other Services</strong>: Within the service layer, you can skip testing the integration with external services or systems. Laravel provides an elegant way to mock such interactions, allowing you to simulate how your service layer would communicate with these external dependencies. 
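In a Laravel test this is typically done with facade fakes such as Http::fake(); as a framework-free sketch of the same idea, with all class names hypothetical, a hand-rolled fake can stand in for the real dependency:

```php
<?php

declare(strict_types=1);

// Sketch: isolating the service layer from an external dependency by
// substituting a fake. PaymentGateway, FakePaymentGateway, and
// CheckoutService are illustrative names, not from the article.
interface PaymentGateway
{
    public function charge(int $amountCents): bool;
}

final class FakePaymentGateway implements PaymentGateway
{
    /** @var list<int> */
    public array $charges = [];

    public function charge(int $amountCents): bool
    {
        $this->charges[] = $amountCents; // record the call instead of hitting a real API
        return true;
    }
}

final class CheckoutService
{
    public function __construct(private PaymentGateway $gateway)
    {
    }

    public function checkout(int $amountCents): bool
    {
        return $this->gateway->charge($amountCents);
    }
}
```

A test can then assert both the service's return value and exactly what was sent to the dependency, without the external system ever being contacted.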
This focus ensures that your tests remain within the scope of the service layer’s responsibilities and do not get bogged down by the intricacies of external services.</li></ol><p>By emphasizing these areas, your testing efforts are directly aligned with the critical responsibilities of the service layer. Such a pragmatic and focused approach fosters a robust and maintainable test suite. It allows for rigorous verification of the service layer’s functionality while also keeping development agile and resource-efficient.</p><p><strong>Conclusion</strong></p><p>The strategy for testing a Laravel application’s service layer is not to cast as wide a net as possible, but rather to target the tests intelligently. By homing in on the most crucial components and interactions, and by leveraging Laravel’s features for simulating external dependencies, developers can create a test suite that is both comprehensive and efficient. It’s about ensuring quality where it matters most, while enabling the development process to remain lean and focused.</p><h4>Controller Layer</h4><p>The controller layer serves as a crucial nexus, orchestrating the flow between the user interface and the underlying business logic. Testing this layer is integral to ensuring that the user’s requests yield the right responses and that all the pieces of the application communicate seamlessly.</p><p><strong>The Imperative of Testing the Controller Layer</strong></p><p>The controller layer is the gatekeeper of your application’s HTTP interface, responding to user input and marshaling resources to provide the desired output. 
Testing this layer is about validating the glue that binds the components of your application into a coherent, functioning whole.</p><p><strong>Optimizing Controllers for Clarity and Maintainability</strong></p><p>Aim for controllers that eschew bloat and focus sharply on their intended purpose:</p><ol><li><strong>Cleaner Controllers</strong>: Strive for controllers that are streamlined to address HTTP-specific concerns. This approach not only enhances readability but also simplifies maintenance, making your application more adaptable to change.</li></ol><figure><img alt="Optimizing Controllers for Clarity and Maintainability" src="https://cdn-images-1.medium.com/max/1024/1*a4-itUhXKOHjxpD10xgl1Q.png" /><figcaption>Optimizing Controllers for Clarity and Maintainability in Laravel.</figcaption></figure><p><strong>Core Areas for Controller Layer Testing</strong></p><p>Controller testing in Laravel should target several key aspects to ensure comprehensive coverage:</p><ol><li><strong>Integration with Service Layer</strong>: Controllers should act as a thin layer that delegates business logic to the services. Tests need to confirm that controllers are correctly invoking services and handling the results as expected.</li><li><strong>Access Control</strong>: Security is paramount. Your tests should verify that authorization and role-based controls are functioning correctly, and that routes intended to be protected are indeed inaccessible to unauthorized users.</li><li><strong>Input Validation</strong>: The integrity of the data passing through your controllers is fundamental. Tests should be in place to ensure that user inputs are properly validated before they reach the core of your application.</li><li><strong>Authentication</strong>: The robustness of authentication mechanisms cannot be overstated. 
Tests should ascertain that these systems are correctly recognizing valid users and effectively repelling unauthorized attempts.</li></ol><p><strong>Conclusion</strong></p><p>Testing the controller layer in Laravel is less about examining internal logic and more about ensuring the correctness of interactions and integrations. It’s a process that guarantees the user’s journey through your application is smooth and secure. By focusing on these key areas, developers can craft controllers that are not just functional, but also resilient and ready for the challenges of a dynamic web environment.</p><h4>Middleware Layer</h4><p>In the bustling digital ecosystem of a Laravel application, middleware stands as an unobtrusive yet vital guardian, orchestrating the smooth processing of HTTP requests and responses. It’s a layer that may not share the spotlight with business logic, but it is indispensable for maintaining the application’s technical integrity and operational efficiency.</p><p>Middleware in Laravel isn’t about flashy features; it’s about the essential, often invisible tasks that ensure your application remains secure, responsive, and well-maintained.</p><p><strong>Common Middleware Use Cases</strong></p><figure><img alt="Common Middleware Use Cases in Laravel." 
src="https://cdn-images-1.medium.com/max/1024/1*sR1eJiIh46SgZsPi5wwhlQ.png" /><figcaption>Common Middleware Use Cases in Laravel.</figcaption></figure><p>The scope of middleware is vast, encompassing several non-negotiable aspects of modern web applications:</p><ol><li><strong>Rate Limiting</strong>: It’s the sentry that ensures your application can handle the influx of requests without compromising service.</li><li><strong>CORS Handling</strong>: This function is the diplomat, determining which cross-origin requests are permissible, safeguarding against potential security issues.</li><li><strong>Caching</strong>: Middleware acts as the efficient librarian, storing and recalling data to speed up response times.</li><li><strong>Debugging</strong>: Think of this as the detective in your application, gathering clues to troubleshoot issues effectively.</li></ol><p><strong>Middleware Testing</strong></p><figure><img alt="Middleware Testing in Laravel." src="https://cdn-images-1.medium.com/max/1024/1*L2z-2e5NdrOcAm855SgXKA.png" /><figcaption>Middleware Testing in Laravel.</figcaption></figure><p>Testing the middleware layer is like checking the reflexes of your application — it must respond correctly under various conditions:</p><ol><li><strong>Logging</strong>: Tests must ensure that activities within the application are being recorded accurately, helping you keep a vigilant eye on operations and issues.</li><li><strong>Metrics Collection</strong>: Just as a doctor monitors vital signs, your tests should confirm that key performance metrics are being duly noted for analysis.</li><li><strong>Request Routing</strong>: Ensuring that the application’s traffic control system is directing requests properly is akin to testing the reliability of a city’s traffic lights.</li><li><strong>Authentication and Authorization</strong>: Occasionally, middleware takes on the role of a bouncer, deciding who gets in and who doesn’t, a critical aspect that must be tested for robust 
security.</li></ol><p><strong>Conclusion</strong></p><p>Testing the middleware layer in Laravel is critical. It keeps the application secure and efficient by managing requests and maintaining performance. Solid middleware testing ensures a stable and secure application foundation.</p><h4>View Layer</h4><p>In Laravel, the view layer holds a significant role, especially in shaping how data is presented to the end-users. Testing this layer is paramount in API-driven applications, where the correctness and structure of the output are as critical as the back-end processing.</p><p><strong>What to Test in the View Layer</strong></p><figure><img alt="What to Test in the View Layer in Laravel." src="https://cdn-images-1.medium.com/max/1024/1*xaEde00KFLp2NMFN8qTgdw.png" /><figcaption>What to Test in the View Layer in Laravel.</figcaption></figure><ol><li><strong>API Resources</strong>: Ensure that the API outputs are presenting data in the correct structure and include all necessary fields. Pay attention to pagination and other factors that affect the stability and performance of the application.</li><li><strong>Response Validation</strong>: It’s essential to verify that the application’s responses are as expected, regardless of the complexities or operations performed in the back-end. The end-user’s experience is defined by the accuracy and consistency of these responses.</li></ol><p>The goal is to ensure that any changes to the application’s internal logic do not adversely affect the user’s experience. By diligently testing the view layer, you affirm that the application reliably communicates with clients, maintaining the contract expected by API consumers. This step is vital in sustaining the application’s functionality and user trust.</p><h4>Background Tasks Layer</h4><p>The Background Tasks Layer is indispensable for handling asynchronous operations and ensuring scalability. 
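Laravel offers Queue::fake() and Event::fake() for asserting that jobs and events were dispatched without executing them; the recording idea behind those fakes can be sketched in plain PHP (class and job names are hypothetical):

```php
<?php

declare(strict_types=1);

// Minimal stand-in for a fake job dispatcher: it records what would have
// been queued so a test can assert on it later, instead of actually
// pushing work onto a queue.
final class FakeDispatcher
{
    /** @var list<string> */
    public array $dispatched = [];

    public function dispatch(string $jobClass): void
    {
        $this->dispatched[] = $jobClass; // record instead of queueing
    }

    public function wasDispatched(string $jobClass): bool
    {
        return in_array($jobClass, $this->dispatched, true);
    }
}
```

This is the pattern the framework's fakes implement for you: swap the real dispatcher for a recorder, run the code under test, then assert on what was captured.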
This includes managing events, queues, and scheduled tasks, which, if properly utilized, greatly enhance the application’s performance and responsiveness.</p><p><strong>What to Test in the Background Tasks Layer</strong></p><figure><img alt="What to Test in the Background Tasks Layer in Laravel." src="https://cdn-images-1.medium.com/max/1024/1*WZM0VqYOFJXf0qFSlDwUFQ.png" /><figcaption>What to Test in the Background Tasks Layer in Laravel.</figcaption></figure><ol><li><strong>Cron Jobs</strong>: Verify that cron jobs are triggering at their designated times and performing their intended tasks correctly.</li><li><strong>Queued Jobs (Regular)</strong>: Check that queued jobs are being processed and completed successfully.</li><li><strong>Event Handling</strong>: Test that events trigger the right listeners and that those listeners respond correctly.</li><li><strong>Database Integrity</strong>: For tasks that modify the database, it’s essential to confirm that these alterations are correctly implemented.</li><li><strong>Asynchronous Processes</strong>: Evaluate the sequence and execution of asynchronous operations, particularly in multi-step workflows.</li></ol><p>With Laravel’s robust testing features, developers can effectively simulate background processes, ensuring these critical tasks perform reliably and contribute to the overall efficiency of the application.</p><h4>Conclusion</h4><p>In Laravel, implementing and testing various layers — from models to middleware to background tasks — ensures a robust, maintainable, and scalable application.</p><p>Thorough testing across these layers not only fortifies data integrity and application performance but also streamlines future development.</p><p>Ultimately, it’s the careful balance of structure and testing that lays the foundation for a Laravel application’s long-term success.</p><p>I hope this was helpful. 
Good luck, and happy engineering!</p><p>If you are a visual learner, check out my <a href="https://www.udemy.com/course/laravel-10-unit-feature-test-advanced-practices-2023/">free Udemy course</a> that covers Laravel Testing topics.</p><p>For part one of the Laravel Testing series, <a href="https://medium.com/gitconnected/laravel-test-suite-best-practices-38b143dbccce">check here</a>.</p><p>For part two of the Laravel Testing series, <a href="https://medium.com/gitconnected/optimizing-your-laravel-test-suite-paradigm-shifts-f31f4a374ce1">check here</a>.</p><hr><p><a href="https://levelup.gitconnected.com/laravel-discover-application-layers-for-testing-42c9f35ddea6">Laravel — Discover Application Layers for Testing</a> was originally published in <a href="https://levelup.gitconnected.com">Level Up Coding</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Optimizing Your Laravel Test Suite: Paradigm Shifts]]></title>
            <link>https://levelup.gitconnected.com/optimizing-your-laravel-test-suite-paradigm-shifts-f31f4a374ce1?source=rss-eb0dbd32d4c4------2</link>
            <guid isPermaLink="false">https://medium.com/p/f31f4a374ce1</guid>
            <category><![CDATA[php]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[laravel]]></category>
            <dc:creator><![CDATA[Dmitry Khorev]]></dc:creator>
            <pubDate>Tue, 12 Sep 2023 13:33:11 GMT</pubDate>
            <atom:updated>2023-11-08T20:25:04.614Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/750/1*r9ht8WQHHzNyTIlnO4b-9Q.png" /><figcaption>Optimizing Your Laravel Test Suite: Paradigm Shifts.</figcaption></figure><p>In this article, we’ll discuss a variety of practices that can greatly enhance your test suite. Some of these are straightforward to implement, while others might require a paradigm shift in how you think about testing. These practices are particularly beneficial for larger projects, where development speed can often slow down as the application grows in complexity.</p><h3>Strict Checks &amp; Strong Typing</h3><p>Let’s talk about strict checks and strong typing, which I consider to be essential elements for any long-term software project.</p><h4>Strong Typing</h4><p>Strong typing doesn’t just help us prevent coding errors; it also acts as a form of documentation via method signature type hints. With strong typing in place, it becomes easier to understand the properties and types of objects you’re working with. This is particularly useful for IDEs, which can provide hints and catch issues more quickly.</p><pre>&lt;?php<br><br>declare(strict_types=1);<br><br>namespace App\DTO;<br><br>use Spatie\DataTransferObject\DataTransferObject;<br><br>class UserRegistrationDto extends DataTransferObject<br>{<br>    public string $firstName;<br><br>    public string $lastName;<br><br>    public string $email;<br><br>    public string $password;<br>}</pre><p>Below is how you can handle a user registration Data Transfer Object (DTO) in a type-safe manner, thanks to strong typing:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*pzrAk68EuXQ3sIteywnogg.gif" /><figcaption>Handling a user registration Data Transfer Object (DTO) in a type-safe manner.</figcaption></figure><h4>Strict Checks</h4><p>Strict type declarations are a PHP-specific feature that I personally like to include in every new PHP file I create, even configuration files. 
I strongly advocate for this approach because it adds another layer of reliability to your code.</p><p>For instance, consider a situation where a type mismatch occurs in a user delete request; the DTO is supplying an integer, but the model is expecting a string. If strict types are not enabled, PHP will do a silent type coercion.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*JDbPqmKNe2CmjGnGDFtpAA.gif" /><figcaption>PHP silent type coercion before enabling strict types.</figcaption></figure><blockquote>In PHP, type coercion refers to the automatic or implicit conversion of values from one data type to another. This allows you to perform operations between different types without explicitly casting them. For example, when using the loose equality operator `==`, PHP may convert a string and an integer to the same type before performing a comparison (`&quot;42&quot; == 42` evaluates to `true`). While convenient, type coercion can lead to unexpected behavior and bugs, especially when you’re dealing with functions that expect specific types or when using loose comparison operators.</blockquote><p>By adding declare(strict_types=1); at the top of your files, you disable PHP’s silent type coercion for scalar type declarations (function arguments and return values), thereby reducing potential errors.</p><blockquote><em>Note: Strict checks only apply to the file that includes the declaration. Importing a “strict” file into a non-strict file doesn’t make the latter strict.</em></blockquote><h4><strong>Strict Checks for Test Suite</strong></h4><p>For a more comprehensive layer of stability across your software project, you should also incorporate strict checks within your test suite.</p><p>The first step is to add declare(strict_types=1); at the top of each test case file.</p><pre>&lt;?php<br><br>declare(strict_types=1);<br><br>namespace Tests;<br><br>class AssertJsonTest extends TestCase<br>{}</pre><p>Laravel provides a variety of useful testing helpers that may seem similar but actually behave differently in subtle ways. 
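The coercion described in the note above can be demonstrated in a few lines of plain PHP:

```php
<?php

declare(strict_types=1);

// Loose comparison coerces operands to a common type before comparing;
// strict comparison checks type and value together.
function looseEquals(mixed $a, mixed $b): bool
{
    return $a == $b;  // '42' == 42 is true: the string is coerced
}

function strictEquals(mixed $a, mixed $b): bool
{
    return $a === $b; // '42' === 42 is false: the types differ
}
```

The same contrast between loose and strict comparison underlies the assertion helpers compared below.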
Understanding these nuances is crucial when aiming for stricter type checking in your project.</p><p>Now, let’s take a closer look at how assertEquals() differs from assertSame():</p><p>The usage of assertEquals() can result in silent type casting, meaning you could be validating two different types without even realizing it.</p><pre>/** @test */<br>public function assertEquals_demo(): void<br>{<br>    $this-&gt;assertEquals(1, &#39;1&#39;);<br>    $this-&gt;assertEquals(1, &#39;1.0&#39;);<br>}</pre><p>Switching to assertSame() immediately exposes any type discrepancies and will cause the test to fail.</p><pre>/** @test */<br>public function assertSame_demo(): void<br>{<br>    $this-&gt;assertSame(1, &#39;1&#39;);<br>    $this-&gt;assertSame(1, &#39;1.0&#39;);<br>}</pre><blockquote>Failed asserting that ‘1’ is identical to 1.</blockquote><blockquote>Failed asserting that ‘1.0’ is identical to 1.</blockquote><p>For this reason, I highly recommend using more rigorous test methods like assertSame() wherever possible.</p><p>For value assertions, assertSame() is often the go-to method, while assertJson() is commonly used for validating HTTP responses during integration testing.</p><p>However, it’s crucial to understand the difference between assertJson() and its stricter counterpart, assertExactJson(). The former only validates that the expected keys and values exist in the API response, but it ignores any additional data returned. 
For instance, if your response unintentionally includes sensitive information like a user’s password, assertJson() wouldn’t catch that issue.</p><pre>/** @test */<br>public function assertJson_demo(): void<br>{<br>    $response = new TestResponse(<br>        new Response([<br>            &#39;name&#39; =&gt; &#39;Dmitry&#39;,<br>            &#39;age&#39; =&gt; 30,<br>            &#39;password&#39; =&gt; &#39;secret&#39;,<br>        ], 200)<br>    );<br><br>    $response-&gt;assertJson([<br>        &#39;name&#39; =&gt; &#39;Dmitry&#39;,<br>        &#39;age&#39; =&gt; 30,<br>    ]);<br>}</pre><p>Switching to assertExactJson() offers stricter validation; it ensures that the response contains only the specified data, helping you identify any unintentional data leaks immediately.</p><pre>/** @test */<br>public function assertExactJson_demo(): void<br>{<br>    $response = new TestResponse(<br>        new Response([<br>            &#39;name&#39; =&gt; &#39;Dmitry&#39;,<br>            &#39;age&#39; =&gt; 30,<br>            &#39;password&#39; =&gt; &#39;secret&#39;,<br>        ], 200)<br>    );<br><br>    $response-&gt;assertExactJson([<br>        &#39;name&#39; =&gt; &#39;Dmitry&#39;,<br>        &#39;age&#39; =&gt; 30,<br>    ]);<br>}</pre><pre>Failed asserting that two strings are equal.<br>&lt;Click to see difference&gt;</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1010/1*aEiiHdQSOUaXuV5QRyLyhQ.png" /><figcaption>Failed asserting that two strings are equal.</figcaption></figure><h3>SQLite vs. DB Testing</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*Sc_90nqC700P7vDPfsQ2MQ.png" /><figcaption>SQLite vs. DB Testing</figcaption></figure><p>By default, and as seen in numerous Laravel tutorials, testing against an in-memory SQLite database is a commonly recommended approach. This strategy is particularly useful for small demos, negating the need to set up additional databases or services. 
However, this approach comes with certain limitations that are critical to consider, especially if you’re not running SQLite in your production environment.</p><p>Key Differences Between SQLite and Traditional Databases like MySQL or PostgreSQL:</p><ol><li><strong>Foreign Key Constraints</strong>: In SQLite, foreign keys are disabled by default. Although Laravel 10+ has addressed this issue, it remains a concern for older projects. When testing with SQLite, you might overlook the absence of foreign key checks, creating a false sense of security.</li><li><strong>Dynamic Typing</strong>: SQLite employs dynamic typing, which allows you to store any value of any data type in any column, regardless of the column’s declared type. While this may appear flexible, it poses a risk of data integrity issues that could easily slip past your test suite.</li><li><strong>Database-Specific Syntax</strong>: If your application relies on database-specific syntax or functions, then testing with SQLite may not be viable at all.</li></ol><h4>Why These Differences Matter</h4><p>Understanding these nuances is critical for validating the reliability of your test suites. While SQLite testing is quick and convenient, it might not comprehensively simulate a real-world, production database environment. Therefore, it’s advisable to not rely solely on in-memory database testing for complex, real-world projects.</p><h4>DB Testing Performance</h4><p>You might initially be concerned about the performance hit when shifting from SQLite to a more robust database like MySQL for testing. However, the slight decrease in speed is often outweighed by the benefits of more accurate testing. 
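The dynamic-typing limitation listed above can be reproduced directly with PDO (this assumes the pdo_sqlite extension is available; the table and column names are illustrative):

```php
<?php

declare(strict_types=1);

// SQLite stores a non-numeric string in an INTEGER column without complaint
// (type affinity); MySQL or PostgreSQL running in strict mode would reject it.
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE prices (amount INTEGER)');
$pdo->exec("INSERT INTO prices (amount) VALUES ('not a number')");

$stored = $pdo->query('SELECT amount FROM prices')->fetchColumn();
// $stored is the string 'not a number': no error, no integer coercion
```

A test suite running against SQLite would never surface this class of data-integrity bug, which is exactly the false sense of security described above.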
For small to medium-sized projects, you’ll likely find that the performance difference is negligible.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*p5muPgTaqlJScozYiAmHkA.png" /><figcaption>DB Testing Performance — without schema dump.</figcaption></figure><p>For larger projects, bootstrapping may take a bit longer as you’ll need to run all the database migrations first. However, the overall runtime of the test suite remains fairly comparable, especially when you’re testing against an empty database.</p><p>To improve test suite performance, you can use features like schema dumping (php artisan schema:dump) to reduce boot time or consider parallel testing. Laravel 8+ supports parallel testing out of the box (php artisan test --parallel); for older projects, you might want to look into using the <a href="https://github.com/paratestphp/paratest">paratest</a> library.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/618/1*AeWB1xmIN-PtZCKDsKyIiw.png" /><figcaption>DB Testing Performance — with schema dump.</figcaption></figure><h4>Getting Started with Database Testing</h4><p>To shift your primary testing database driver to something like MySQL, you’ll need to make appropriate changes in your phpunit.xml configuration file.</p><pre>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;<br>...<br>    &lt;php&gt;<br>        &lt;env name=&quot;APP_ENV&quot; value=&quot;testing&quot;/&gt;<br>        &lt;!-- &lt;env name=&quot;DB_CONNECTION&quot; value=&quot;sqlite&quot;/&gt;--&gt;<br>        &lt;!-- &lt;env name=&quot;DB_DATABASE&quot; value=&quot;:memory:&quot;/&gt;--&gt;<br>        &lt;env name=&quot;DB_CONNECTION&quot; value=&quot;mysql&quot;/&gt;<br>        &lt;env name=&quot;DB_DATABASE&quot; value=&quot;forge&quot;/&gt;<br>    &lt;/php&gt;<br>&lt;/phpunit&gt;</pre><p>By being aware of these limitations and performance considerations, you can make more informed choices about your database testing strategies, which will pay off in the long run.</p><h3>Array 
Cache vs. Redis Cache</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*og0rQ01xv6I7e1LpSbNIVQ.png" /><figcaption>Array Cache vs. Redis Cache</figcaption></figure><p>Testing Redis integration in a Laravel application can yield different outcomes compared to testing with Laravel’s default array cache driver. Understanding these differences is essential for ensuring that your application behaves as expected, especially if you intend to use Redis in production.</p><h4>Cache Value Retrieval Differences</h4><p>Here’s an example to illustrate the point. Imagine you store a float value in the cache and later attempt to retrieve it. Using Laravel’s array driver, you’ll get exactly the value you stored.</p><pre>/** @test */<br>public function integrity_with_cache(): void<br>{<br>    Cache::set(&#39;rate&#39;, 1.55);<br><br>    $this-&gt;assertSame(1.55, Cache::get(&#39;rate&#39;));<br>}</pre><p>However, the Redis driver behaves differently. When you retrieve a value from Redis, it returns as a string, regardless of its original data type, and the same test will fail due to the type mismatch.</p><blockquote>Failed asserting that ‘1.55’ is identical to 1.55.</blockquote><h4>Potential Issues and Solutions</h4><ol><li><strong>Type Errors</strong>: If your application relies on strict data types, this discrepancy can lead to TypeError. 
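A plain-PHP simulation of that round-trip (no Redis server needed, mirroring the retrieval behavior described above) makes the type change explicit:

```php
<?php

declare(strict_types=1);

// Simulates the retrieval behavior described in the article: the Redis
// driver hands scalar values back as strings, whatever type was stored.
function simulatedRedisGet(float $stored): string
{
    return (string) $stored;
}

$rate = simulatedRedisGet(1.55);
// $rate is '1.55' (a string), so a strict assertSame(1.55, $rate) would fail
```

Running the cache tests against the real driver is what catches this; the array driver would have returned the float untouched.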
In other words, passing a string-retrieved value from Redis into a function that expects a float will trigger an error.</li><li><strong>Implicit Typecasting</strong>: When strict type checks are not enforced, PHP may implicitly cast the string to an integer, leading to unexpected behavior and even bugs that appear to be rooted in your business logic.</li></ol><p>Therefore, if your production environment utilizes Redis, it’s advisable to use Redis for your cache testing as well.</p><h4>Configuring PHPUnit for Redis Testing</h4><p>To begin using Redis as your primary cache driver in your PHPUnit tests, you’ll need to adjust your phpunit.xml configuration file appropriately.</p><pre>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;<br>...<br>    &lt;php&gt;<br>        &lt;env name=&quot;APP_ENV&quot; value=&quot;testing&quot;/&gt;<br>        &lt;!-- &lt;env name=&quot;CACHE_DRIVER&quot; value=&quot;array&quot;/&gt;--&gt;<br>        &lt;env name=&quot;CACHE_DRIVER&quot; value=&quot;redis&quot;/&gt;<br>        &lt;env name=&quot;REDIS_DATABASE&quot; value=&quot;10&quot;/&gt;<br>    &lt;/php&gt;<br>&lt;/phpunit&gt;</pre><p>In summary, testing with a driver that you plan to use in production is crucial for catching subtle issues that may not surface when using Laravel’s array cache driver. The slight complexities of setting up your test environment to use Redis are outweighed by the benefit of knowing your application will behave as expected in a production setting.</p><h3>Working With Dates</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/159/1*LcIiDJbSYOkNKEIyV-L1_Q.png" /><figcaption>Working With Dates</figcaption></figure><p>Date-related issues in your test suite can lead to flaky and unreliable tests, a problem that is particularly amplified when using PHPUnit with Laravel. 
Let’s delve into common challenges and solutions for ensuring date reliability in your tests.</p><h4>Time-Dependent Tests</h4><p>Say you have a reporting system that relies on date filters. You’d naturally want to assert that data points fall within a certain date range. A common practice is to use Laravel’s now() helper function to generate these data points. However, as your test suite grows, the variable execution time can cause now() to drift, leading to inconsistent results.</p><p>To address this, one option is to lock the value of now() using Carbon’s setTestNow() method, which ensures that now() returns a consistent time throughout the test run.</p><pre>$now = now(); // this value is not fixed during test suite run<br>Carbon::setTestNow($now); // fixes now() to always return one value</pre><p>Alternatively, you could specify explicit dates for your tests, particularly in scenarios where dates are crucial, like in reporting or data aggregation.</p><pre>$now = CarbonImmutable::parse(&#39;2023-09-01 10:00:00&#39;);<br>Carbon::setTestNow($now); // fixes now() to always return one value</pre><p>Explicitly setting dates in your tests not only eliminates flakiness but also improves code readability, aiding comprehension for future maintenance.</p><h4>Daylight Saving Time (DST) Challenges</h4><p>Another unexpected source of test flakiness is Daylight Saving Time changes, which can affect time calculations in your tests. For example, let’s say your tests are pegged to Canada’s DST schedule.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/818/1*ro9oMQUs4BgQ2eFSPbbcnw.png" /><figcaption>UTC time differs by 1 hour.</figcaption></figure><p>If your test suite runs on different dates around the DST change, you might experience time calculation discrepancies. The solution? 
Again, use explicit dates and times in your tests, and keep them constant across the test case.</p><h4>App &amp; Database Timezone</h4><p>As an added layer of reliability, consider setting the default timezone to UTC in both your Laravel application and your database.</p><p>In PHP, you can verify the server’s timezone setting like so:</p><pre>echo date_default_timezone_get();</pre><p>Database timezone checks vary by engine. In a PostgreSQL database, you can check the current time zone setting by executing the following query:</p><pre>SHOW timezone;</pre><p>In MySQL, the equivalent query is:</p><pre>SHOW GLOBAL VARIABLES LIKE &#39;time_zone&#39;;</pre><p>In summary, the key to a reliable, time-sensitive test suite is consistency. By explicitly setting date-time values and sticking to a standard timezone, you eliminate variables that can make your tests flaky and hard to debug.</p><p>I hope this was helpful. Good luck, and happy engineering!</p><p>If you are a visual learner, check out my <a href="https://www.udemy.com/course/laravel-10-unit-feature-test-advanced-practices-2023/">free Udemy course</a> that covers Laravel Testing topics.</p><p>For part one of the Laravel Testing series, <a href="https://medium.com/gitconnected/laravel-test-suite-best-practices-38b143dbccce">check here</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f31f4a374ce1" width="1" height="1" alt=""><hr><p><a href="https://levelup.gitconnected.com/optimizing-your-laravel-test-suite-paradigm-shifts-f31f4a374ce1">Optimizing Your Laravel Test Suite: Paradigm Shifts</a> was originally published in <a href="https://levelup.gitconnected.com">Level Up Coding</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Laravel Test Suite — Best Practices]]></title>
            <link>https://levelup.gitconnected.com/laravel-test-suite-best-practices-38b143dbccce?source=rss-eb0dbd32d4c4------2</link>
            <guid isPermaLink="false">https://medium.com/p/38b143dbccce</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[php]]></category>
            <category><![CDATA[laravel]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[programming]]></category>
            <dc:creator><![CDATA[Dmitry Khorev]]></dc:creator>
            <pubDate>Thu, 11 May 2023 04:22:56 GMT</pubDate>
            <atom:updated>2023-11-08T20:24:38.529Z</atom:updated>
<content:encoded><![CDATA[<h3>Laravel Test Suite — Best Practices</h3><figure><img alt="Passing tests for Laravel project with code coverage report." src="https://cdn-images-1.medium.com/max/1024/1*b8_ewGnQgCDoG1DFb1sbGQ.png" /><figcaption>Passing tests for Laravel project with code coverage report.</figcaption></figure><p>Laravel is one of the most popular PHP frameworks and a solid choice for PHP developers.</p><p>In this article, I want to share the approaches and practices I use to get the most out of the Laravel framework’s testing potential. They have proven their efficiency on large projects that have grown over the years.</p><p>I will describe a few testing paradigms that might change how you think about tests; they are based on my experience running large Laravel projects.</p><p>It does not matter much whether you are starting a greenfield project or joining a team on a new one.</p><p>These are some ideas I want you to keep in mind when building your test coverage.</p><p>The examples are in PHP and with the Laravel framework in mind, but you can translate these concepts into your preferred language. This approach is what works for me in PHP, Node.js, and TypeScript development.</p><h4>Future-proof tests</h4><figure><img alt="Future-proof tests for Laravel projects." 
src="https://cdn-images-1.medium.com/max/960/1*xLnrR0cmO4TKsvAG9wGLyg.png" /><figcaption>Future-proof tests for Laravel projects.</figcaption></figure><p>First is — “I want myself 6 months in the future to be able to exactly understand the logic behind test cases I write today”.</p><p>When we’re fully immersed in the task, we think everything we write is obvious, but in most cases, it is not.</p><p>Second — “I want my tests to serve as documentation to my controllers, services, and overall business logic behind it”.</p><p>This means I will use the service under test in a most verbose way.</p><p>Say we’re testing controllers — I want to immediately show off the available request options for that.</p><p>If I test a business logic layer, I want to show exactly the expected input format for my code.</p><p>This helps a lot in weakly typed languages like PHP or JavaScript.</p><p>Finally, I want my colleagues to onboard with my code faster, I want them to be going with their changes sooner than later. And understandable test cases help me achieve that.</p><p>So here are the topics for this article.</p><ul><li><strong>Test Suite Folder Structure</strong> — this is how I recommend organizing your test suite folders.</li><li><strong>Test Case Naming</strong> — my recommendations and a pattern for understandable test names.</li><li><strong>Test Case Flow </strong>— will describe how is best to structure individual test cases.</li><li><strong>Clear assertions</strong> — then I want to talk about assertions and use cases.</li><li><strong>Use Case Coverage vs. 
Statement Coverage</strong> — next, we’ll discuss the divergence between use case coverage and statement coverage.</li></ul><h4>Test Case Folder Structure</h4><p>Laravel proposes a fixed folder structure for all projects. For example, you will most likely have all your controllers, along with your custom code, under the \App\Http\Controllers folder.</p><figure><img alt="Example of Laravel project folder structure." src="https://cdn-images-1.medium.com/max/442/1*jFo8EW8AyWFeQBBDrzSPlg.png" /><figcaption>Example of Laravel project folder structure.</figcaption></figure><p>Some projects may do something different; for example, if they follow a modular approach, their folder structure can look like this.</p><figure><img alt="Example of Laravel project modular folder structure." src="https://cdn-images-1.medium.com/max/518/1*174kijuwY45tCbHzrz53sQ.png" /><figcaption>Example of Laravel project modular folder structure.</figcaption></figure><p>With either approach, the PSR-4 standard (<a href="https://www.php-fig.org/psr/psr-4/">https://www.php-fig.org/psr/psr-4/</a>) requires a class’s namespace to replicate its folder structure in the file system.</p><figure><img alt="PSR-4 standard overview for Laravel project." src="https://cdn-images-1.medium.com/max/884/1*ybCH-6zMx91QFh5IJztfzw.png" /><figcaption>PSR-4 standard overview for Laravel project.</figcaption></figure><p>This is also how Composer traverses your source directory and builds its class map; if you do not follow PSR-4, you’ll be in trouble anyway.</p><p>When creating the test suite you want to follow the same pattern.</p><ul><li>The root namespace for your tests would be either Unit, Feature, or Integration.</li><li>Followed by a fully replicated folder structure of the class under test.<br>- \Tests\Unit\Http\Controllers<br>- \Tests\Feature\Http\Controllers<br>- \Tests\Integration\Http\Controllers</li><li>Then you name the test case file the same as your class with the “Test” suffix. 
This is the default suffix for PHPUnit to pick up your tests and run them.<br>- UserControllerTest</li></ul><p>If your IDE allows it, you can automate the creation of your test case files.</p><figure><img alt="PHPStorm for Laravel development example." src="https://cdn-images-1.medium.com/max/800/1*UUuhpOUL-9Ochet-g4s8JA.gif" /><figcaption>PHPStorm for Laravel development example.</figcaption></figure><p>If you make a mistake when creating namespaces, you’ll get a hint and can fix it immediately.</p><figure><img alt="PHPStorm for Laravel development example." src="https://cdn-images-1.medium.com/max/800/1*dvg1ufG5ZkeN1GDQRuakgA.gif" /><figcaption>PHPStorm for Laravel development example.</figcaption></figure><p>Why do I propose this structure?</p><p>This way your test files have the expected folder structure: anyone looking over the code can locate a test file by following the same path in the tests folder.</p><p>Another benefit is that when you browse your files via global search, you can easily tell which test file is responsible for Unit and which for Feature test cases by reading its namespace in the search dialog.</p><figure><img alt="PHPStorm for Laravel development example." src="https://cdn-images-1.medium.com/max/800/1*eJ-ZA61N6iRjA8gM3ieEGQ.gif" /><figcaption>PHPStorm for Laravel development example.</figcaption></figure><h4>Test Case Naming</h4><p>Choosing a good test case name is hard if you don’t follow a pattern. Reading poorly named tests is also hard, so help yourself and your colleagues by creating readable and understandable test names.</p><p>Once you’re familiar with the common patterns it becomes easy. In this section, I will describe my recommended approach to test case naming.</p><figure><img alt="Overview of test case naming best practices." 
src="https://cdn-images-1.medium.com/max/914/1*aMpwamTJ6esM9f51S0903g.png" /><figcaption>Overview of test case naming best practices.</figcaption></figure><p>Points:</p><ol><li><strong>Select a case pattern and stick with it</strong> — snake case or camel caps case.<br>- I prefer the snake case because it differs from the camel case I use in regular code, so I can quickly tell the difference.</li><li><strong>Have descriptive names</strong><br>- don’t be afraid to be too verbose, it’s great to be so for the test suite.<br>- have inputs and expected output in the test’s name.</li><li><strong>Select a naming pattern and keep it consistent</strong><br>- some approaches are to use keywords like “it”, “test”, and “when*”.</li></ol><p>Now let’s go over some patterns for naming. And will review it with an example.</p><figure><img alt="An example method for testing in Laravel." src="https://cdn-images-1.medium.com/max/960/1*MmuObsp-7Yctze-H21-UJg.png" /><figcaption>An example method for testing in Laravel.</figcaption></figure><p>We have a Math class with an add method. 
It accepts 2 arguments and returns the sum.</p><p>My favorite test case naming pattern is this:</p><blockquote>&lt;ACTION&gt; when &lt;CONDITION&gt; then &lt;RESULT&gt;</blockquote><p>This can be written using snake case in the PHPUnit file</p><pre>/** @test */<br>public function add_when_input_is_2_and_1_then_result_is_3(): void {}</pre><p>and it translates to the following:</p><ul><li><strong>ACTION </strong>— is usually our method under test</li><li><strong>CONDITION </strong>is where we describe the test inputs</li><li>and <strong>RESULT </strong>is where we describe the expected outputs</li></ul><p>Between those points, we have 2 keywords — WHEN and THEN.</p><p>Or if you work with JavaScript, Node.js, or Pest for PHP, then it can read something like this:</p><pre>describe(&#39;add&#39;, () =&gt; {<br>  test(&#39;when 2 and 1 then result is 3&#39;, () =&gt; {});<br>});</pre><p>Another popular pattern for writing test names is like this:</p><blockquote>it &lt;RESULT&gt; when &lt;ACTION&gt; &lt;CONDITION&gt;</blockquote><p>This is most commonly seen in JavaScript tutorials, Node.js tutorials, or when using Pest for smaller PHP projects.</p><p>I prefer the first option for PHP and use the test() helper if I work with JS/Node.js.</p><pre>/** @test */<br>public function add_when_input_is_2_and_1_then_result_is_3(): void {}<br><br>test(&#39;when 2 and 1 then result is 3&#39;, () =&gt; {});</pre><h4>Test Case Flow</h4><figure><img alt="Common Laravel test case workflow stages." src="https://cdn-images-1.medium.com/max/948/1*rQqlQprpOfxTNPFZ_PNoYA.png" /><figcaption>Common Laravel test case workflow stages.</figcaption></figure><p>Having a structured pattern for all of your tests is a benefit too. To create a common structure for each of my test cases, I usually split the test code into stages. 
Each stage is responsible for one aspect of the test case — <em>arrange, act, or assert</em>.</p><p><strong>THE ARRANGE</strong> stage is where you prepare your environment to execute the test, create models, set variables, and so on.</p><p><strong>THE ACT</strong> stage is where your class or method under test will be called.</p><p>This will usually include any related objects that are important for the context — like request body, data transfer object instantiation, and so on.</p><p><strong>THE ASSERT</strong> stage is where you check if your case was a success or not by writing up the assertions for the test to pass or fail.</p><figure><img alt="Common Laravel test case workflow stages with description." src="https://cdn-images-1.medium.com/max/948/1*mvXNJix_qW5WhTUFHPo1dw.png" /><figcaption>Common Laravel test case workflow stages with description.</figcaption></figure><p>Three flow patterns will cover almost all of the test cases you write.</p><p>Straightforward with arrange/act/assert.</p><pre>/**<br> * We&#39;re testing the run() method of the OfferTimeoutService class.<br> * When the offer was last updated more than 10 minutes ago, it should become inactive.<br> * @test<br> */<br>public function run_when_offers_is_not_updated_for_10_minutes_then_it_will_be_deactivated(): void<br>{<br>    // arrange<br>    $config = [&#39;dataserver.ads.ad_active_timeout_seconds&#39; =&gt; 600];<br>    $offer = new Offer(isActive: true, updatedAt: new DateTime(&#39;-11 minutes&#39;));<br><br>    // act<br>    $service = new OfferTimeoutService($offer, $config);<br>    $result = $service-&gt;run();<br><br>    // assert<br>    $this-&gt;assertFalse($result-&gt;isActive);<br>}</pre><p>Arrange first, assert with mock, and act.</p><pre>/**<br> * @test<br> */<br>public function run_when_config_is_empty_then_throws_an_exception(): void<br>{<br>    // arrange<br>    $config = [];<br>    $offer = new Offer(isActive: true, updatedAt: new DateTime(&#39;-11 minutes&#39;));<br><br>    // assert<br>    
$this-&gt;expectException(\RuntimeException::class);<br><br>    // act<br>    $service = new OfferTimeoutService($offer, $config);<br>    $service-&gt;run();<br>}</pre><p>Just an assert and act (typically for exception testing).</p><pre>/**<br> * @test<br> */<br>public function run_when_offer_is_null_then_throws_an_exception(): void<br>{<br>    // assert<br>    $this-&gt;expectExceptionMessage(&#39;Offer is not set&#39;);<br><br>    // act<br>    $service = new OfferTimeoutService(offer: null, config: null);<br>    $service-&gt;run();<br>}</pre><p>We can also remove the ARRANGE stage comment when that stage comes first.</p><pre>/**<br> * @test<br> */<br>public function run_when_config_is_empty_then_throws_an_exception(): void<br>{<br>    $config = [];<br>    $offer = new Offer(isActive: true, updatedAt: new DateTime(&#39;-11 minutes&#39;));<br><br>    // assert<br>    $this-&gt;expectException(\RuntimeException::class);<br><br>    // act<br>    $service = new OfferTimeoutService($offer, $config);<br>    $service-&gt;run();<br>}</pre><p>Another point here is that you don’t want to overdo it. If your test case is only 2–3 lines long, a blank-line separation is good enough, keeping in mind that you still have the ARRANGE, ACT, and ASSERT stages.</p><pre>/** @test */<br>public function run_when_offer_is_null_then_throws_an_exception(): void<br>{<br>    $this-&gt;expectExceptionMessage(&#39;Offer is not set&#39;);<br><br>    $service = new OfferTimeoutService(offer: null, config: null);<br>    $service-&gt;run();<br>}</pre><p>Here are the building blocks for your test cases. You have 3 workflows that will cover most of your use cases.</p><figure><img alt="Recap for common Laravel test case workflow stages and their combinations." 
src="https://cdn-images-1.medium.com/max/948/1*9Lv3tuRN0ODzwFUtNFstYA.png" /><figcaption>Recap for common Laravel test case workflow stages and their combinations.</figcaption></figure><h4>Clear Assertions</h4><p>When something breaks you want yourself 6 months from now to know exactly what the problem is and not spend much time understanding old code.</p><p>We already discussed the test case naming pattern and test case structure. The final argument in this equation is creating as clear assertions as possible and keeping them focused on 1 thing at a time.</p><p>I prefer my test cases to assert only 1 thing at a time. This correlates to a software engineering principle known as Single Responsibility.</p><p>This does not mean you only ever have 1 assertion line in your test, but rather your assertions are focused on demonstrating only 1 use case.</p><figure><img alt="Single Responsibility principle in testing Laravel projects." src="https://cdn-images-1.medium.com/max/828/1*PsLKx1djejEgMzXKwUP5qg.png" /><figcaption>Single Responsibility principle in testing Laravel projects.</figcaption></figure><p>An example of this approach would be this test case.</p><pre>/** @test */<br>public function api_offers_data(): void<br>{<br>    (new Offer(isActive: true, updatedAt: new DateTime()))-&gt;count(3)<br>        -&gt;create();<br><br>    // act<br>    $this-&gt;actingAs(&#39;admin&#39;);<br>    $response = $this-&gt;getJson(&#39;/api/offers&#39;);<br><br>    // assert<br>    $response-&gt;assertStatus(200);<br>    $response-&gt;assertJsonStructure([<br>        &#39;data&#39; =&gt; [<br>            [<br>                &#39;count&#39;,<br>                &#39;price&#39;,<br>                &#39;isActive&#39;,<br>                &#39;updatedAt&#39;,<br>            ],<br>        ],<br>    ]);<br>    $data = $response-&gt;decodeResponseJson()-&gt;json();<br>    $this-&gt;assertSame(3, $data[&#39;count&#39;]);<br>}</pre><p>It has a really bad name for a test, is not helpful at 
all, and tests too many things at once:</p><ul><li>the HTTP response code</li><li>the response structure</li><li>and the actual data</li></ul><p>In your application, a bug can occur at different levels of abstraction, and you want only the specific test dedicated to that level to break.</p><ul><li>Response code validation usually refers to access control validation: roles, permissions, and so on.</li><li>Response structure validation is an integration test that verifies your API returns the valid, expected fields.</li><li>and finally, testing the actual data is where we validate our business logic and confirm the results are as expected.</li></ul><p>We can refactor this into 3 different test cases.</p><p>First, we separate the response code test.</p><pre>/**<br> * This test only validates that the ADMIN role can access the API.<br> * @test<br> */<br>public function api_offers_when_with_admin_role_then_response_code_is_200(): void<br>{<br>    (new Offer(isActive: true, updatedAt: new DateTime()))-&gt;count(3)<br>        -&gt;create();<br><br>    // act<br>    $this-&gt;actingAs(&#39;admin&#39;);<br>    $response = $this-&gt;getJson(&#39;/api/offers&#39;);<br><br>    // assert<br>    $response-&gt;assertStatus(200);<br>}</pre><p>Then a separate test for the structure.</p><pre>/**<br> * This test only validates the API resource&#39;s expected structure for the ADMIN role.<br> * @test<br> */<br>public function api_offers_when_with_admin_role_then_response_structure_same_as_expected(): void<br>{<br>    (new Offer(isActive: true, updatedAt: new DateTime()))-&gt;count(3)<br>        -&gt;create();<br><br>    // act<br>    $this-&gt;actingAs(&#39;admin&#39;);<br>    $response = $this-&gt;getJson(&#39;/api/offers&#39;);<br><br>    // assert<br>    $response-&gt;assertJsonStructure([<br>        &#39;data&#39; =&gt; [<br>            [<br>                &#39;count&#39;,<br>                &#39;price&#39;,<br>                &#39;isActive&#39;,<br>                
&#39;updatedAt&#39;,<br>            ],<br>        ],<br>    ]);<br>}</pre><p>And finally, a test for the business logic.</p><pre>/**<br> * This test only validates the API has correct data for the ADMIN role.<br> * @test<br> */<br>public function api_offers_when_with_admin_role_then_response_has_expected_offer_count(): void<br>{<br>    (new Offer(isActive: true, updatedAt: new DateTime()))-&gt;count(3)<br>        -&gt;create();<br><br>    // act<br>    $this-&gt;actingAs(&#39;admin&#39;);<br>    $response = $this-&gt;getJson(&#39;/api/offers&#39;);<br><br>    // assert<br>    $data = $response-&gt;decodeResponseJson()-&gt;json();<br>    $this-&gt;assertSame(3, $data[&#39;count&#39;]);<br>}</pre><p>In an application, you would also have all those tests in different files and namespaces that represent their use cases.</p><h4>Use Case Coverage vs. Statement Coverage</h4><p>There are different forms of test coverage, and as software engineers we aim to reach high coverage in:</p><ul><li>Statement coverage</li><li>Branch coverage</li><li>Code coverage</li></ul><p>Those are all related directly to lines of code and are helpful. You can easily have 100% coverage at the start of a project, but eventually, it’ll degrade. You will most likely need more time or resources to keep up with a growing code base.</p><p>Don’t be upset, though. 
I suggest not aiming for 100% statement, branch, or code coverage at all.</p><p>Try focusing on the USE CASE coverage instead.</p><p>To show you how this concept is different from just statement coverage — let’s go over a quick example here.</p><p>I have a DataService that will accept the user’s role as input and will provide a response, success or failure.</p><pre>class DataService<br>{<br>    public function getWithRole(Role $role): array<br>    {<br>        if ($role-&gt;value === Role::Admin-&gt;value) {<br>            return [<br>                &#39;status&#39; =&gt; &#39;success&#39;,<br>            ];<br>        }<br><br>        return [<br>            &#39;status&#39; =&gt; &#39;fail&#39;,<br>        ];<br>    }<br>}</pre><p>I made some test coverage based on user roles.</p><pre>/** @test */<br>public function getWithRole_when_admin_role_then_status_is_success(): void<br>{<br>    $service = new DataService();<br>    $data = $service-&gt;getWithRole(Role::Admin);<br><br>    // assert<br>    $this-&gt;assertSame(&#39;success&#39;, $data[&#39;status&#39;]);<br>}<br><br>/** @test */<br>public function getWithRole_when_user_role_then_status_is_fail(): void<br>{<br>    $service = new DataService();<br>    $data = $service-&gt;getWithRole(Role::User);<br><br>    // assert<br>    $this-&gt;assertSame(&#39;fail&#39;, $data[&#39;status&#39;]);<br>}</pre><p>If the user is an admin, I expect a “success” response. If just a user role, then the “fail” response. If I run PHPUnit with a coverage report I will get 100% coverage on all metrics.</p><figure><img alt="Code coverage report for Laravel test suite." src="https://cdn-images-1.medium.com/max/1024/1*XgNFeITvpQRgA0Xqy6ZoWw.png" /><figcaption>Code coverage report for Laravel test suite.</figcaption></figure><p>Now let me show you the Role enum, and you can see here we have 1 role that was not used in our scenarios. 
And PHPUnit can easily miss that.</p><pre>enum Role: string<br>{<br>    case Admin = &#39;admin&#39;;<br>    case User = &#39;user&#39;;<br>    case Guest = &#39;guest&#39;; // TODO: this role USE CASE is not covered<br>}</pre><p>This is a simplified example (the number of files in my demo project is small), but in a large project it would be even easier to miss such things.</p><p>We can see that we have a 100% coverage report, but I only used 2 roles out of 3. I’m not confident in that code anymore, since I don’t see how it is expected to behave with a <strong>guest</strong> role.</p><p>Let’s fix that by forming data provider arrays for each of our tests and focusing on use case coverage.</p><pre>public static function allowedRoleProvider(): array<br>{<br>    return [<br>        [Role::Admin],<br>    ];<br>}<br><br>public static function forbiddenRoleProvider(): array<br>{<br>    return [<br>        [Role::User],<br>        [Role::Guest],<br>    ];<br>}</pre><p>Now I can write my test for allowed roles:</p><pre>/**<br> * @test<br> * @dataProvider allowedRoleProvider<br> */<br>public function getWithRole_when_allowed_role_then_status_is_success(Role $role): void<br>{<br>    $service = new DataService();<br>    $data = $service-&gt;getWithRole($role);<br><br>    // assert<br>    $this-&gt;assertSame(&#39;success&#39;, $data[&#39;status&#39;]);<br>}</pre><p>And forbidden roles:</p><pre>/**<br> * @test<br> * @dataProvider forbiddenRoleProvider<br> */<br>public function getWithRole_when_forbidden_role_then_status_is_fail(Role $role): void<br>{<br>    $service = new DataService();<br>    $data = $service-&gt;getWithRole($role);<br><br>    // assert<br>    $this-&gt;assertSame(&#39;fail&#39;, $data[&#39;status&#39;]);<br>}</pre><p>We still have 100% statement coverage, but we have also achieved 100% use case coverage for that feature.</p><figure><img alt="Code coverage report for Laravel test suite after refactoring." 
src="https://cdn-images-1.medium.com/max/1024/1*l_W4pxA89bPPAugAOWBaxw.png" /><figcaption>Code coverage report for Laravel test suite after refactoring.</figcaption></figure><p>I suggest always aiming to implement USE CASE coverage. Here are some guidelines on how you can come up with good use cases:</p><ul><li>First to test the happy paths.</li><li>Then test — bad inputs.</li><li>Then any other limitations by task requirements.</li><li>And finally, think of possible edge cases.</li></ul><p>This last step is what will separate great engineers from just the good ones.</p><p><strong>Bonus point — “setUp()” method</strong></p><p>This one is specific to Laravel and PHP.</p><p>You MUST NOT create models, mocks, or faking Events, Notifications, etc. inside the setUp(): void method.</p><p>Consider other developers adding tests to files you created, they would have to deal with the overhead of created models they do not want to use (.i.e. to make a count() assert against the same model).</p><p>Good:</p><pre>public function my_test(): void<br>{<br>  $user = factory(User::class)-&gt;create();<br>  $restroom = factory(Restroom::class)-&gt;create();<br>  $secret = str_random(40);<br>  config()-&gt;set(&#39;app.api_secret&#39;, $secret);<br>  <br>  // your test case...<br>}</pre><p>Bad:</p><pre>public function setUp(): void<br>{<br>  parent::setUp();<br>  $this-&gt;user = factory(User::class)-&gt;create();<br>  $this-&gt;restroom = factory(Restroom::class)-&gt;create();<br>  $this-&gt;secret = str_random(40);<br>  config()-&gt;set(&#39;app.api_secret&#39;, $this-&gt;secret);<br>}<br><br>public function my_test(): void<br>{<br>  // your test case...<br>}</pre><p>Also, making setUp() as minimal as possible makes all dependencies explicitly visible in each test.</p><h4>Conclusion</h4><p>I reviewed the following points for creating a great test suite that can stand the test of time.</p><ul><li><strong>Test Suite Folder Structure</strong> — adhere to PSR-4 standard and make paths 
obvious to other devs, possibly fully replicating the App folder structure.</li><li><strong>Test Case Naming</strong> — have a pattern for naming and stick with it, your test name should give an idea of what it does exactly.</li><li><strong>Test Case Flow </strong>— there are only a few options for structuring test cases and you can follow them and simplify your day-to-day work even more.</li><li><strong>Clear assertions</strong> — keep SRP in mind when writing your assertions.</li><li><strong>Use Case Coverage vs. Statement Coverage</strong> — aim for use case coverage.</li></ul><p>I hope this was helpful. Good luck, and happy engineering!</p><p>If you are a visual learner, check out my <a href="https://www.udemy.com/course/laravel-10-unit-feature-test-advanced-practices-2023/">free Udemy course</a> that covers Laravel Testing topics.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=38b143dbccce" width="1" height="1" alt=""><hr><p><a href="https://levelup.gitconnected.com/laravel-test-suite-best-practices-38b143dbccce">Laravel Test Suite — Best Practices</a> was originally published in <a href="https://levelup.gitconnected.com">Level Up Coding</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Build Scalable Event-Driven Applications With Nest.js]]></title>
            <link>https://medium.com/better-programming/build-scalable-event-driven-applications-with-nest-js-28676cb093d0?source=rss-eb0dbd32d4c4------2</link>
            <guid isPermaLink="false">https://medium.com/p/28676cb093d0</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[nodejs]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[javascript]]></category>
            <dc:creator><![CDATA[Dmitry Khorev]]></dc:creator>
            <pubDate>Mon, 28 Nov 2022 15:44:40 GMT</pubDate>
            <atom:updated>2023-04-01T07:49:01.814Z</atom:updated>
            <content:encoded><![CDATA[<h4>We’ll explore a hands-on example of scalability issues that can happen and the common approaches to solving them.</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ENfamkyhtCGebF7r" /><figcaption>Photo by <a href="https://unsplash.com/@6heinz3r?utm_source=medium&amp;utm_medium=referral">Gabriel Heinzer</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>In this article, I want to chat about elements of scalable event-driven applications available to developers with the Nest.js framework. I will demonstrate how easy it is to get going with a modern framework for building backend Node.js applications.</p><pre><strong>Agenda</strong><br><br>What is Nest.js?<br><br><a href="#1fc3">How Does Nest.js Help Build Highly-Scalable Apps?</a><br><br><a href="#e49d">Demo App and Tools</a><br><br><a href="#7525">Demo App in Action</a></pre><p>I want to briefly cover what Nest.js is and how it helps build scalable applications. I have a demo ready for you. We will describe the overall architecture and the tools used, then run and see our demo in action.</p><h3><strong>What is </strong>Nest.js<strong>?</strong></h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*kdw5YjRIGsFESBo93PtGBQ.png" /><figcaption>Nest.js — a modern framework for building back-end Node.js applications.</figcaption></figure><p>It’s a framework for building Node.js applications.</p><p>It was inspired by Angular and relies heavily on TypeScript.</p><p>So it provides a somewhat type-safe development experience. 
It’s still JavaScript after transpiling, so you still need to take care with common security risks.</p><p>It is a rather popular framework already, and you have probably heard about it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/811/1*d2tRGA7liVBxLJRVyf8I8g.png" /><figcaption>GitHub stars as of 2022–11</figcaption></figure><h4>Why use another framework?</h4><ul><li>Dependency injection</li><li>Abstracted integration with databases</li><li>Abstracted common use cases: caching, config, API versioning and documentation, task scheduling, queues, logging, cookies, events, and sessions, request validation, HTTP server (Express or Fastify), auth.</li><li>TypeScript (and decorators)</li><li>Other design elements for great applications: Middleware, Exception filters, Guards, Pipes, and so on.</li><li>And some more, which I will talk about later</li></ul><p>Let’s quickly recap what the framework offers us.</p><p>One of the main advantages of using a framework is built-in dependency injection. It removes the overhead of creating and supporting a class dependency tree.</p><p>It has abstracted integrations with most databases, so you don’t have to think about them. Some of the most developed and popular packages supported are mongoose, TypeORM, MikroORM, and Prisma.</p><p>It has abstracted common use cases for web development like caching, configuration, API versioning and documentation, queues, etc.</p><p>For the HTTP server, you can choose between Express or Fastify.</p><p>It uses TypeScript and decorators. 
It simplifies reading code, especially in bigger projects, and helps the development team stay on the same page when reasoning about components.</p><p>Also, as with any framework, it provides other application design elements like middleware, exception filters, guards, pipes, and so on.</p><p>And finally, we’ll talk later about some other features that are specific to scalability.</p><h3>How Does Nest.js Help Build Highly-Scalable Apps?</h3><p>Let’s first recap the main strategies for building highly scalable applications.</p><p>Here are the options:</p><ul><li>Monolith (modular)</li><li>Microservices</li><li>Event-driven</li><li>Mixed</li></ul><blockquote>Software development is all about trade-offs.</blockquote><p>The first approach I want to talk about is the monolith.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/904/1*kRsDEPu4LUS2TFIuFSz0nQ.png" /><figcaption>An example of monolith Nest.js project architecture.</figcaption></figure><p>It’s a single application whose components are tightly coupled.</p><p>They are deployed together, supported together, and usually, they can’t live without one another.</p><p>If you write your application that way, it’s best to use a modular approach, which Nest.js is very good at.</p><p>When using the modular approach, you can effectively have one codebase, but components of your system act as somewhat independent entities and can be worked on by different teams. This becomes harder as your team and project grow. That’s why we have other models for architecture development.</p><h4><strong>Microservices</strong></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*c37PJRceIRArQLN9a5BzuQ.png" /><figcaption>An example of a microservice Nest.js project architecture.</figcaption></figure><p>With a microservice architecture, each service is deployed separately. 
Usually, each service is only responsible for a small unit of work and will have its own data store.</p><p>The event-driven approach is similar to microservices.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WYW57oq4-E99rn6JI-X3PA.png" /><figcaption>An example of event-driven Nest.js project architecture.</figcaption></figure><p>Now, you don’t have direct communication between services. Instead, each service will emit an event and then forget about it.</p><p>There may be listeners for this event, or there may be none. If someone consumes the event, it can in turn produce another event that another service can consume, and so on.</p><p>Eventually, some service will produce a response for the waiting client. It could be a WebSocket message, a webhook, or something else.</p><p>Services will communicate with other services via HTTP requests or messaging.</p><h4><strong>Mixed architecture</strong></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ninrIGorgKZZv6XiHgQc0g.png" /><figcaption>An example of mixed Nest.js project architecture.</figcaption></figure><p>Usually, our larger projects are a mix of all designs — some components are tightly coupled and deployed together, some components are deployed separately, and some are communicating exclusively via event messaging.</p><h3>Nest.js = Easy Event-Driven Application Development</h3><p>Let’s think about why this framework simplifies event-driven development.</p><ul><li>Integrates with Redis/Bull for queue management (<a href="https://github.com/OptimalBits/bull">github.com/OptimalBits/bull</a>)</li><li>Integrates with most messaging brokers</li><li>Promotes modular development</li><li>Great documentation and examples</li><li>Unit and integration testing is bootstrapped (DI, Jest)</li></ul><p>First, it allows fast and simple integration of the popular Bull package for queues.</p><p>For microservices development and communication, it has integrations with the most popular messaging brokers like Redis, Kafka, RabbitMQ, MQTT, NATS, and others.</p><p>Third, it promotes modular development, so it’s naturally easy for you to extract single units of work later in the project’s life cycle.</p><p>My next point is that it has great documentation and examples, which is always nice. You can be running your first distributed app in minutes.</p><p>Another thing worth noting is that unit and integration testing is bootstrapped for you. It has DI for testing and all other powerful features of the Jest testing framework.</p><h3><strong>Queues </strong>(npm/bull)</h3><p>Now, let’s see how a simple queue can be created in NestJS.</p><h4><strong>Queues: adding the connection</strong></h4><p>First, you install the required dependencies with the following command:</p><pre>npm install --save @nestjs/bull bull<br>npm install --save-dev @types/bull</pre><p>Then you create a connection to Redis.</p><blockquote>An example of Nest.js connection to Redis with Bull.</blockquote><pre>BullModule.forRootAsync({<br>  imports: [ConfigModule],<br>  useFactory: async (configService: ConfigService) =&gt; ({<br>    redis: {<br>      host: configService.get(&#39;REDIS_HOST&#39;) || &#39;127.0.0.1&#39;,<br>      port: +configService.get(&#39;REDIS_PORT&#39;) || 6379,<br>      password: configService.get(&#39;REDIS_PASSWORD&#39;) || undefined,<br>    },<br>  }),<br>  inject: [ConfigService],<br>}),</pre><p>And finally, register a queue.</p><blockquote>An example of Nest.js queue registering with Bull.</blockquote><pre>BullModule.registerQueue({<br>  name: TRADES,<br>}),</pre><h4>Queues: <strong>event producer injects a queue</strong></h4><blockquote>An example of Nest.js emitting events with Bull.</blockquote><pre>export class TradeService {<br>  constructor(@InjectQueue(TRADES) private queue: Queue) {}<br><br>  async add() {<br>    const uuid = randomUUID();<br><br>    await this.queue.add({ uuid });<br>  }<br>}</pre><p>Next, somewhere else in a service constructor, you type-hint 
your queue, and it gets injected by the Dependency Injection container — you now have full access to the queue and can start emitting events.</p><h4>Queues: <strong>event consumer processes the queue</strong></h4><blockquote>An example of Nest.js consuming events with Bull.</blockquote><pre>@Processor(TRADES)<br>export class TradeService {<br>  @Process()<br>  async process(job: Job&lt;TradeCreatedDto&gt;) {<br>    // ...<br>  }<br>}</pre><p>Somewhere in another module, you decorate your processor class with Processor() and Process(). That is the minimal setup needed to have a working queue system.</p><p>Producers and consumers can live in one application or in separate ones. They will communicate via your message broker of choice.</p><h3><strong>Messaging Integration — Connection</strong></h3><p>Connecting to a message broker starts with registering a client module. In this example, we use the Redis transport and should provide Redis-specific connection options.</p><blockquote>An example of Nest.js registering messaging client module with Redis.</blockquote><pre>@Module({<br>  imports: [<br>    ClientsModule.register([<br>      {<br>        name: &#39;MATH_SERVICE&#39;,<br>        transport: Transport.REDIS,<br>        options: {<br>          host: &#39;localhost&#39;,<br>          port: 6379<br>        }<br>      },<br>    ]),<br>  ]<br>  ...<br>})</pre><h3><strong>Messaging Integration — Producer</strong></h3><p>The next step is to inject the client proxy interface into our producer service.</p><blockquote>An example of Nest.js injecting messaging client modules into a service class.</blockquote><pre>constructor(<br>  @Inject(&#39;MATH_SERVICE&#39;) private client: ClientProxy,<br>) {}</pre><p>From here, our options are the SEND method or EMIT.</p><p>SEND is usually a synchronous action, similar to an HTTP request, but it is abstracted by the framework to work over the selected transport.</p><p>In the example below, the accumulate() method’s response will not be sent to the client 
until the message is processed by the listener application.</p><blockquote>An example of Nest.js sending messages to remote service via a messaging broker.</blockquote><pre>accumulate(): Observable&lt;number&gt; {<br>  const pattern = { cmd: &#39;sum&#39; };<br>  const payload = [1, 2, 3];<br>  return this.client.send&lt;number&gt;(pattern, payload);<br>}</pre><p>The EMIT command starts an asynchronous workflow: it acts as fire-and-forget or, with some transports, as a durable queue event, depending on the transport chosen and its configuration.</p><blockquote>An example of Nest.js emitting messages to remote service via a messaging broker.</blockquote><pre>async publish() {<br>  this.client.emit&lt;number&gt;(&#39;user_created&#39;, new UserCreatedEvent());<br>}</pre><p>The SEND and EMIT patterns have slightly different use cases on the CONSUMER side. Let’s see.</p><h3><strong>Messaging Integration — Consumer</strong></h3><p>The MessagePattern decorator is only for synchronous-style messages (produced with the SEND command) and can only be used inside a controller-decorated class.</p><p>Here, we are expected to return a response to the request received via our messaging protocol.</p><blockquote>An example of Nest.js responding to remote service via a messaging broker.</blockquote><pre>@Controller()<br>export class MathController {<br>  @MessagePattern({ cmd: &#39;sum&#39; })<br>  accumulate(data: number[]): number {<br>    return (data || []).reduce((a, b) =&gt; a + b);<br>  }<br>}</pre><p>On the other hand, the EventPattern decorator can be used in any custom class of your application; it listens to events produced on the same queue or event bus and does not expect our application to return anything.</p><blockquote>An example of Nest.js processing a message from a remote service via a messaging broker.</blockquote><pre>@EventPattern(&#39;user_created&#39;)<br>async handleUserCreated(data: Record&lt;string, unknown&gt;) {<br>  // business logic<br>}</pre><p>This setup is 
similar to other messaging brokers. And if it’s something custom, you can still use a DI container and create a custom event subsystem provider with Nest.js interfaces.</p><blockquote>MQTT and NATS examples of consumers for Nest.js.</blockquote><pre>// MQTT<br>@MessagePattern(&#39;notifications&#39;)<br>getNotifications(@Payload() data: number[], @Ctx() context: MqttContext) {<br>  console.log(`Topic: ${context.getTopic()}`);<br>}<br><br>// NATS<br>@MessagePattern(&#39;notifications&#39;)<br>getNotifications(@Payload() data: number[], @Ctx() context: NatsContext) {<br>  console.log(`Subject: ${context.getSubject()}`);<br>}</pre><blockquote>RabbitMQ and Kafka examples of consumers for Nest.js.</blockquote><pre>// RabbitMQ<br>@MessagePattern(&#39;notifications&#39;)<br>getNotifications(@Payload() data: number[], @Ctx() context: RmqContext) {<br>  console.log(`Pattern: ${context.getPattern()}`);<br>}<br><br>// Kafka<br>@MessagePattern(&#39;hero.kill.dragon&#39;)<br>killDragon(@Payload() message: KillDragonMessage, @Ctx() context: KafkaContext) {<br>  console.log(`Topic: ${context.getTopic()}`);<br>}</pre><p>This is how easy it is to integrate with most common messaging brokers using Nest.js abstractions.</p><h3><strong>Demo App and Tools</strong></h3><p>Available at the following link <a href="https://github.com/dkhorev/conf42-event-driven-nestjs-demo">github.com/dkhorev/conf42-event-driven-nestjs-demo</a>.</p><p>In this section, I will review a part of a real application (simplified, of course). You can get the source code at my GitHub page to follow along or try it out later. I will demonstrate how properly designed EDA can face challenges and how we can quickly resolve them with the framework&#39;s tools.</p><h3><strong>Demo app overview</strong></h3><p>Let’s first do a quick overview. 
Our expected workflow is like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/814/1*XphsQyt_wsZcXVsi4BY14w.png" /><figcaption>Nest.js event-driven application demo overview.</figcaption></figure><p>We have an action that has happened in our API gateway, and it touches the trade service, which emits an event.</p><p>This event goes to the queue or event bus. And then, we have four other services listening to it and processing it.</p><p>To observe how this application performs, I use a side application, my “channel monitor.” This is a powerful pattern for improving observability and can help automate scaling up and down based on channel metrics.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_tSNosXFtWnPRmuYNSgZoA.png" /><figcaption>Nest.js event-driven application with channel monitor demo overview.</figcaption></figure><p>I’ll show you how it works in a bit.</p><h3>Demo App in Action — Normal Conditions</h3><p>I prepared a Makefile so you can follow along.</p><p>First, run a make start command that will start Docker with all required services. Next, run a make monitor command to peek into application metrics.</p><p>The monitor shows me the queue name, the count of waiting jobs, the count of processed jobs, and the number of worker instances online.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/716/1*xq1_rQOgn-P3i832msmfmQ.gif" /><figcaption>Demo app in action — normal conditions.</figcaption></figure><p>As you can see, under normal conditions, the jobs_waiting count is zero, the event flow is slow, and we don’t have any jobs piling up.</p><p>This application works fine with a low event count. But what happens if traffic suddenly increases?</p><h3>Demo App in Action — Traffic Spike</h3><p>You can start this demo by running the make start-issue1 command and restarting the monitor with the make monitor command. 
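</p><p>The channel monitor described earlier boils down to polling each queue for its counts and printing a row per queue. Below is a minimal, framework-free sketch of that idea; the QueueStats interface and the stubbed queue are hypothetical stand-ins for Bull's real queue object, which exposes similar count methods.</p>

```typescript
// Minimal channel-monitor sketch. QueueStats is a hypothetical
// stand-in for a Bull queue's count methods.
interface QueueStats {
  name: string;
  getWaitingCount(): Promise<number>;
  getCompletedCount(): Promise<number>;
}

// Build one metrics row, like the demo monitor prints for each queue.
async function monitorRow(q: QueueStats, workers: number): Promise<string> {
  const waiting = await q.getWaitingCount();
  const completed = await q.getCompletedCount();
  return `${q.name} | jobs_waiting=${waiting} | jobs_processed=${completed} | workers=${workers}`;
}

// Stubbed queue emulating the demo's DEFAULT queue under normal conditions.
const defaultQueue: QueueStats = {
  name: 'default',
  getWaitingCount: async () => 0,
  getCompletedCount: async () => 42,
};

monitorRow(defaultQueue, 1).then((row) => console.log(row));
```

<p>In the real demo, the same loop would run on an interval against live queues, and its output could drive autoscaling decisions.</p><p>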
Our event flow has increased threefold.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/632/1*5t8uoqLzchMVpJ94QNDVzA.png" /><figcaption>Nest.js event-driven application demo with increased traffic.</figcaption></figure><p>Eventually, you will notice in the monitor app that the jobs_waiting count starts to increase; we are still processing jobs with one worker, but the queue can no longer keep up with the increased traffic.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/716/1*LKzvs5tXDDjjDZvnSrL0-Q.gif" /><figcaption>Demo app in action — traffic spike.</figcaption></figure><p>Now we can see that this throttles our mission-critical trade service confirmation.</p><p>The worker processes all events without priority, so each new trade confirmation must first wait for some other events to complete.</p><p>You can imagine this creating slower response times on our front-end client applications for trade processing.</p><h3><strong>Solutions?</strong></h3><p>Let’s explore the options we have to fix this:</p><ul><li>Scale the worker instance so it will process the queue faster</li><li>Increase the worker instance count</li><li>Application optimizations</li><li>Separate the queues</li><li>Prioritize events</li></ul><p>The first and most obvious is to scale the worker instance so it will go faster. In the Node.js world, this is rarely a good solution unless you are processing CPU-intensive tasks such as video, audio, or cryptography.</p><p>The second is to increase the worker instance count. This is a valid option but sometimes not very cost-effective.</p><p>Next, we can think about application optimizations, including profiling, investigating database queries, and similar activities. This can be time-consuming and yield no results or only limited improvements.</p><p>Our last two options are where Nest.js can help us: separating the queues and prioritizing some events.</p><h3>Step 1 — Separate the Queues</h3><p>I will start by applying a queue separation method.</p><p>The trade queue will only be responsible for processing trade confirmation events.</p><p>My code for this will look like this:</p><pre>this.queue.add(JOB_ANALYTICS, { uuid });<br>this.queue.add(JOB_NOTIFICATION, { uuid });<br>this.queue.add(JOB_STORE, { uuid });<br>// this.queue.add(JOB_TRADE_CONFIRM, { uuid });<br>this.queueTrades.add(JOB_TRADE_CONFIRM, { uuid });</pre><p>The first step is to ask our PRODUCER to emit a TRADE CONFIRM event to a new queue: TRADES.</p><p>On the consumer side, I extracted a new class called TradesService and assigned it as a listener to the TRADES queue.</p><pre>@Processor(QUEUE_TRADES)<br>export class TradesService {<br>  protected readonly logger = new Logger(this.constructor.name);<br><br>  @Process({ name: &#39;*&#39; })<br>  async process(job: Job&lt;TradeCreatedDto&gt;) {<br>    // ...<br>  }<br>}</pre><p>The QUEUE DEFAULT listener service stays the same. 
I don’t have to make any changes here.</p><pre>@Processor(QUEUE_DEFAULT)<br>export class DefaultService {<br>  protected readonly logger = new Logger(this.constructor.name);<br><br>  @Process({ name: &#39;*&#39; })<br>  async process(job: Job&lt;TradeCreatedDto&gt;) {<br>    // ...<br>  }<br>}</pre><p>Now, whatever happens, whatever spike we have — the trades will never stop processing (they’ll slow down but will not wait for unimportant events).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*10WxVaoezUCuT2DCU14RKA.png" /><figcaption>Nest.js event-driven application demo with a separate Trade queue.</figcaption></figure><p>You can run this example with the start-step1 command and restart the monitor.</p><p>You will notice that the trades queue has a jobs_waiting count of zero, but the default queue is still experiencing problems.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/716/1*WAXAB51UhSx72sU_wet1Eg.gif" /><figcaption>Demo app in action — trades queue is separate and is fixed.</figcaption></figure><p>Now I will apply our second scaling step: based on the information I have, I increase the worker instance count to 3 for the DEFAULT QUEUE only.</p><h3><strong>Step 2 — Scale Workers</strong></h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*d2GJopwK-Hwp13pdjLOHzQ.png" /><figcaption>Nest.js event-driven application demo with a separate Trade queue and default queue workers increased to 3.</figcaption></figure><p>You can start this demo by running the start-step2 command and restarting the monitor. Over time, this application goes to zero jobs_waiting on both queues, so good job!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/716/1*wrXhcYuP9-Hb_RNifyx_ZQ.gif" /><figcaption>Demo app in action — application is stable.</figcaption></figure><p>As you can understand, my example is a bit contrived and is mostly for demo purposes. 
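</p><p>The remaining lever from the solutions list is event priority. Bull jobs accept a priority option where a lower number means higher priority; the toy queue below only illustrates that ordering idea with plain TypeScript and is not Bull's implementation.</p>

```typescript
// Illustration of priority ordering, mimicking Bull's convention
// where a lower `priority` number is processed first.
interface PrioritizedJob<T> {
  data: T;
  priority: number;
}

class PriorityQueue<T> {
  private jobs: PrioritizedJob<T>[] = [];

  add(data: T, opts: { priority: number }): void {
    this.jobs.push({ data, priority: opts.priority });
    // Stable sort: lower priority number moves to the front.
    this.jobs.sort((a, b) => a.priority - b.priority);
  }

  next(): T | undefined {
    return this.jobs.shift()?.data;
  }
}

const q = new PriorityQueue<string>();
q.add('log-metrics', { priority: 10 });  // can wait
q.add('trade-confirm', { priority: 1 }); // mission-critical
q.add('notification', { priority: 5 });

console.log(q.next()); // trade-confirm comes out first
```

<p>With Bull itself, the priority would be passed in the options when adding the job rather than handled manually like this.</p><p>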
Still, you can easily see how we can leverage the channel monitor pattern to react programmatically to changes in app performance by scaling individual queue workers up or down.</p><h3>Solutions — Recap</h3><p>Let’s recap. I applied the following solutions from my list:</p><ul><li>Scale the worker instance so it will process the queue faster</li><li>Increase the worker instance count</li><li>Application optimizations</li><li>Separate queues</li><li>Prioritize events</li></ul><p>I created a separate TRADES queue, which also effectively prioritized those events over others.</p><p>Next, I increased the worker instance count for the DEFAULT QUEUE to 3.</p><p>Most of the heavy lifting was done for me by Docker and the Nest.js framework.</p><p>The next step, which you can implement using only the framework&#39;s tools, is prioritizing some events over others. For example, anything related to logging or internal metrics can be delayed in favor of more mission-critical events like DB interactions, notifications, etc.</p><p>The repository with the test code is here: <a href="https://github.com/dkhorev/conf42-event-driven-nestjs-demo">github.com/dkhorev/conf42-event-driven-nestjs-demo</a>.</p><p>For containers and modular development, I use a Container Role Pattern described <a href="https://medium.com/@dkhorev/docker-container-roles-pattern-for-nestjs-apps-ca8b07a08a9a">at this link</a>.</p><p>I hope this was helpful. 
Good luck, and happy engineering!</p><p>More interesting Nest.js reads:</p><ul><li><a href="https://betterprogramming.pub/validating-complex-requests-with-nestjs-a-practical-example-b55c287f7c99">Validating Complex Requests With Nest.js</a></li><li><a href="https://betterprogramming.pub/improve-response-time-10x-by-introducing-an-interceptor-in-nestjs-590695692360">Improve Response Time by 10x by Introducing an Interceptor In Nest.js</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=28676cb093d0" width="1" height="1" alt=""><hr><p><a href="https://medium.com/better-programming/build-scalable-event-driven-applications-with-nest-js-28676cb093d0">Build Scalable Event-Driven Applications With Nest.js</a> was originally published in <a href="https://betterprogramming.pub">Better Programming</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Validating Complex Requests With Nest.js]]></title>
            <link>https://medium.com/better-programming/validating-complex-requests-with-nestjs-a-practical-example-b55c287f7c99?source=rss-eb0dbd32d4c4------2</link>
            <guid isPermaLink="false">https://medium.com/p/b55c287f7c99</guid>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[nodejs]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[web-development]]></category>
            <dc:creator><![CDATA[Dmitry Khorev]]></dc:creator>
            <pubDate>Sat, 08 Oct 2022 18:08:09 GMT</pubDate>
            <atom:updated>2022-10-10T16:57:39.396Z</atom:updated>
            <content:encoded><![CDATA[<h3>Validating Complex Requests With NestJS</h3><h4>A practical example</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/1*zKHU8ZH_EBvKbQRtIsKXgA.jpeg" /><figcaption>Photo by <a href="https://unsplash.com/@juanjodev02?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Juanjo Jaramillo</a> on <a href="https://unsplash.com/s/photos/programming?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure><p>In this article, I want to take a deep dive and explore ways to perform complex validation for incoming requests with NestJS.</p><p>NestJS provides a great way to integrate request validation into your app with some good defaults. It recommends the class-validator package, which is powerful and has nice documentation and examples.</p><h3>Why Is Request Validation Important?</h3><p>Validation is an important step in online service security and data integrity.</p><p>It ensures you only receive data in a format that your service expects and lets you discard any additional data that you don’t expect.</p><p>It provides a layer of protection against malicious actors; remember, you should never trust user input. Employ a zero-trust policy as much as possible.</p><p>Don’t store invalid data in the database; protect services further down the pipeline from invalid input.</p><h3>What Is a “Class-Validator” Package?</h3><p><a href="https://github.com/typestack/class-validator">GitHub - typestack/class-validator: Decorator-based property validation for classes.</a></p><p>It is a set of decorators that you can use with your JavaScript class properties to add validation. 
There are basic validation rules, like @IsString() (the validated field should be a string), as well as the ability to write fully custom validation classes and decorators.</p><p>Another good thing this package allows is the usage of the Dependency Injection container from the parent app. This is very useful for validating against external sources, for example, a User database. There’s one small trick to make it work that will save you some time googling. Check out this main.ts file:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/a1a8cf5155b4b4eb78074bd0c733345f/href">https://medium.com/media/a1a8cf5155b4b4eb78074bd0c733345f/href</a></iframe><p>The most important line here is:</p><blockquote>useContainer(app.select(AppModule), { fallbackOnErrors: true });</blockquote><p>This is not mentioned in the official NestJS documentation at the time of writing.</p><h3>What Is a Complex Request?</h3><p>I will define it as a request that has at least one array or one object, which in turn includes nested objects and/or arrays. 
Another common use case is checking validity against an external resource (DB, cache, S3, etc.).</p><p>For my example, I will use an imaginary multi-tenant SaaS e-commerce API that allows me to:</p><ul><li>register a user globally</li><li>place an order for a specific shop</li></ul><h3>Validating a User Create Request</h3><p>To register the user, we’ll require the fields listed in the class below:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/8cb1e86a01d1103832d7617fac0f3fef/href">https://medium.com/media/8cb1e86a01d1103832d7617fac0f3fef/href</a></iframe><p>Our constraints are as follows:</p><ul><li>name — required, a string, at least three characters long.</li><li>email — required, a valid email string, not already registered in our database.</li><li>password — required, a string, at least eight characters long.</li></ul><p>This is what my UserCreateDto will look like after adding class-validator decorators.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/1430312e704bf07dcaa21bb4952dde0d/href">https://medium.com/media/1430312e704bf07dcaa21bb4952dde0d/href</a></iframe><p>Let’s first review the standard validation rules from the class-validator package used here:</p><ul><li>@IsString() — the value should be a string</li><li>@MinLength(N) — the value should have a minimum length of N</li><li>@IsEmail() — the value should be a valid email (syntax check only)</li></ul><p>Now the interesting part is the @EmailNotRegistered() validation. 
I created this one to reject registration when the user’s email already exists in our app.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/62d027bc2c3173aa62d934d9007d6c0e/href">https://medium.com/media/62d027bc2c3173aa62d934d9007d6c0e/href</a></iframe><p>This is my custom validation decorator; the process of creating one is described in <a href="https://github.com/typestack/class-validator#custom-validation-decorators">the docs</a>. Most of this code is boilerplate, so let’s check the most interesting lines.</p><p>Line 11 — injects a custom provider: private readonly userRepository: UserRepository.</p><p>Lines 14–15 — provide a validation function by searching for the email in the UserRepository.</p><p>My UserRepository is just an in-memory store that emulates some form of async network I/O. This is to replicate a real DB request/response lifecycle.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/d09189956e908a070bf10e7c38fef77a/href">https://medium.com/media/d09189956e908a070bf10e7c38fef77a/href</a></iframe><p>The user register API endpoint looks like this:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/49eee1a4ecc2d8abb670b28a4192b5c1/href">https://medium.com/media/49eee1a4ecc2d8abb670b28a4192b5c1/href</a></iframe><p>This is all you have to do to make NestJS validate incoming requests:</p><ul><li>type-hint a DTO class in your route’s signature</li><li>add validation decorators to your DTO class</li><li>turn on validation usage within main.ts (example shown above)</li></ul><figure><img alt="User register endpoint validation in action." src="https://cdn-images-1.medium.com/max/800/1*PDqKIetDOLdSljc3XdN84g.gif" /><figcaption>User register endpoint validation in action.</figcaption></figure><p>Nicely done. 
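</p><p>Stripped of the framework, the check behind @EmailNotRegistered() is just an async lookup that succeeds when the email is absent. Here is a self-contained sketch of that logic; the in-memory UserRepository and existsByEmail() below are illustrative stand-ins, not the article's actual classes.</p>

```typescript
// Framework-free sketch of the @EmailNotRegistered() check.
// UserRepository here is an in-memory stand-in for the injected provider.
class UserRepository {
  private emails = new Set<string>(['taken@example.com']);

  // Emulate async network I/O, like a real DB lookup.
  async existsByEmail(email: string): Promise<boolean> {
    return Promise.resolve(this.emails.has(email.toLowerCase()));
  }
}

// The core of the custom rule: valid only when the email is NOT registered.
async function emailNotRegistered(repo: UserRepository, email: string): Promise<boolean> {
  return !(await repo.existsByEmail(email));
}

const repo = new UserRepository();
emailNotRegistered(repo, 'new@example.com').then((ok) => console.log(ok));   // true
emailNotRegistered(repo, 'taken@example.com').then((ok) => console.log(ok)); // false
```

<p>In the real decorator, class-validator wraps exactly this kind of check in a constraint class so it can run as part of DTO validation.</p><p>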
Let’s move on to the order request and validation!</p><h3>Validating a Create Order Request</h3><p>Okay, we started with something rather simple; now let’s try to validate an order request.</p><p>Our requirements for the OrderCreateDto are:</p><ul><li>should have common order properties, like a shop ID (we’re a SaaS multi-tenant app) and the creation date.</li><li>should have customer information (customer email).</li><li>should have an order products list (an array) with a common structure — product ID and quantity.</li><li>should have an order shipment description, which is a variable object: either Delivery or Pickup type, with various required fields.</li><li>finally, it should have a contact person list, with some details for each contact person.</li></ul><p>A first, non-validated DTO that meets these requirements may look like this:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/c31a9608357e7d68d958603b6f77725f/href">https://medium.com/media/c31a9608357e7d68d958603b6f77725f/href</a></iframe><p>Let’s go over each block of this request, describe it, and add validations.</p><h3>OrderCreateDto</h3><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/cf8b3786a8b32d4c996239e4f3f40b6d/href">https://medium.com/media/cf8b3786a8b32d4c996239e4f3f40b6d/href</a></iframe><p>Here are the constraints:</p><ul><li>shop_id — should be in UUID format and exist in the ShopRepository.</li><li>created_at — should be a Date object or castable to a date.</li></ul><h4>Solution</h4><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/759d86f6ba172c4f6dfc09cbe48f4a4d/href">https://medium.com/media/759d86f6ba172c4f6dfc09cbe48f4a4d/href</a></iframe><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a
href="https://medium.com/media/4216f5316d82d03161c5c6af51f73bad/href">https://medium.com/media/4216f5316d82d03161c5c6af51f73bad/href</a></iframe><p>@IsUUID() — a decorator that checks a string is a valid UUID.</p><p>@Validate(ShopIdExistsRule) — validates the field with a custom rule (similar to writing your own decorator, but a bit simpler).</p><p>@Type(() =&gt; Date) &amp; @IsDate() — will try to cast the provided input to Date and validate that it is a Date object.</p><figure><img alt="Order create validation in action." src="https://cdn-images-1.medium.com/max/800/1*y200WKW0X9M0muVIWBM_gQ.gif" /><figcaption>Order create validation in action.</figcaption></figure><h3>OrderCustomerDto</h3><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/29db5bb9c338a7b3c1bad44702b5032b/href">https://medium.com/media/29db5bb9c338a7b3c1bad44702b5032b/href</a></iframe><p>Here are the constraints:</p><ul><li>email — should exist in the user repository and should be a valid email string.</li><li>validate the nested OrderCustomerDto object inside the parent OrderCreateDto.</li></ul><h4>Solution</h4><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/dc15e5433e36983bc000ebfbb276884c/href">https://medium.com/media/dc15e5433e36983bc000ebfbb276884c/href</a></iframe><p>New things here:</p><p>@CustomerExists() — the opposite of the @EmailNotRegistered() rule I used before. The logic is similar, and we take advantage of the injected UserRepository.</p><p>@Type(() =&gt; OrderCustomerDto) — a utility line that transforms the nested object into a class instance so it can be validated by the following @ValidateNested(). Without the transform, validation will not run.</p><figure><img alt="Customer data validation in action."
src="https://cdn-images-1.medium.com/max/800/1*43ZIQs_TLcJiz0sbDC8uCg.gif" /><figcaption>Customer data validation in action.</figcaption></figure><p>If you get an error when starting NestJS:</p><blockquote>ReferenceError: Cannot access ‘OrderCustomerDto’ before initialization</blockquote><p>You need to move the OrderCustomerDto declaration before OrderCreateDto, as shown in my code example above.</p><h3>OrderProductDto</h3><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/95e49db47f0ddf08c33e7b0aa2ce3f39/href">https://medium.com/media/95e49db47f0ddf08c33e7b0aa2ce3f39/href</a></iframe><p>Here are the constraints:</p><ul><li>products — should be an array, should not be empty.</li><li>id — should exist in the ProductRepository and should be an integer value.</li><li>quantity — should be an integer, and there should be enough of this product available.</li></ul><h4>Solution</h4><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/2bcc8e055bcbffa303c945f2f3020d94/href">https://medium.com/media/2bcc8e055bcbffa303c945f2f3020d94/href</a></iframe><p>Here are the new things:</p><p>@IsInt() — checks that the value is an integer; 100.5 will fail validation.</p><p>@ArrayNotEmpty() — validates that the products array is not empty.</p><p>@ValidateNested({ each: true }) — triggers nested validation for each array element, so we can effectively validate from 1 to N products.</p><p>ProductIdExists and ProductIsAvailable are custom rules that check whether the product exists and whether there’s enough quantity to place an order.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*oy7JykFVdq6F1zFtdlc7AQ.gif" /><figcaption>Product data validation in action.</figcaption></figure><h3>OrderShipmentDto</h3><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a
href="https://medium.com/media/caac2bdef3ed544c6ee7b3104220c51b/href">https://medium.com/media/caac2bdef3ed544c6ee7b3104220c51b/href</a></iframe><p>Here are the constraints:</p><ul><li>type — should be part of the enum (Delivery | Pickup) and should be defined.</li><li>if the Delivery type is selected — the city and address fields are required strings, not empty.</li><li>if the Pickup type is selected — the point_id field is required, should be an integer, not empty.</li></ul><h4>Solution</h4><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/0e404b0964c2b68e81d92650ffe426e1/href">https://medium.com/media/0e404b0964c2b68e81d92650ffe426e1/href</a></iframe><p>Here are the new things:</p><p>@Equals(DeliveryTypes.DELIVERY) and @Equals(DeliveryTypes.PICKUP) — perform a strict (===) check of the provided value.</p><p>Then, based on the delivery type, we validate the nested object. For that, we need a slightly more advanced @Type() use case.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/e5ed5ad1a09cce68a3aeed2b145c6ef3/href">https://medium.com/media/e5ed5ad1a09cce68a3aeed2b145c6ef3/href</a></iframe><p>Since the shipment object is of variable type, we include a variable type cast for validation, and shipment now looks like this: shipment: DeliveryShipmentDto | PickupShipmentDto;</p><p>This allows our validation system to drop fields that are not required by the specific delivery object: e.g., if you request a Pickup shipment and provide a city, it will be dropped by the class-transformer package.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*ZO66JMXwwi0WeRnWI9g4-Q.gif" /><figcaption>Shipment data validation in action.</figcaption></figure><h3>OrderContactDto</h3><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a
href="https://medium.com/media/8abeea5109bdf4bb0825a7924e02ed56/href">https://medium.com/media/8abeea5109bdf4bb0825a7924e02ed56/href</a></iframe><p>Here are the constraints:</p><ul><li>name — should be a string, should be defined.</li><li>phone — should be a valid mobile number.</li><li>email — optional; if defined, should have valid email syntax.</li></ul><h4>Solution</h4><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/0e7ae9254877f5fbd38e3fa3f06882b7/href">https://medium.com/media/0e7ae9254877f5fbd38e3fa3f06882b7/href</a></iframe><p>The new validations here:</p><p>@IsMobilePhone(‘en-US’) — checks that the phone string has valid syntax, the right number of digits, and so on.</p><p>@IsOptional() — will only validate the email if it was provided.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/712/1*_oz2L7Ju5sor3q4v4nVnrQ.gif" /><figcaption>Contact validation in action.</figcaption></figure><h3>Final Solution</h3><p>OK, it’s been a long road. Now, let’s see the full request object with all the validation decorators:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/05d2b02c15b9591bea1902faa3873694/href">https://medium.com/media/05d2b02c15b9591bea1902faa3873694/href</a></iframe><h3>Bonus: More Common Use Cases for Validation</h3><p>It’s hard to come up with an example that covers all features of request validation.
So, here I will list some interesting use cases from my practice:</p><h4>Payment confirmation</h4><p>Use case: webhook call from a third-party card processor (PayPal, Stripe).</p><p>You usually have some form of secret signing key stored in the config and need to validate the request’s signature.</p><p>Here’s an example of a validator that uses the injected NestJS config service.</p><h4>Request</h4><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/35499e96eac6e2c725d5ad9aabbc2c97/href">https://medium.com/media/35499e96eac6e2c725d5ad9aabbc2c97/href</a></iframe><p>We will combine the order ID + amount + secret and compare the result with the received signature.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/dd4bbc135e12ab4367b636df0ea866ab/href">https://medium.com/media/dd4bbc135e12ab4367b636df0ea866ab/href</a></iframe><figure><img alt="" src="https://cdn-images-1.medium.com/max/704/1*8abG3F4Iq00-NWcZjPMtAA.gif" /><figcaption>Signature validation in action.</figcaption></figure><h4>Conditional property validation</h4><p>Use case: only validate a property if another property is set to a specific value.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/7b1b4fab429003698a844c10c95d4bf0/href">https://medium.com/media/7b1b4fab429003698a844c10c95d4bf0/href</a></iframe><p>Task: only validate the email if subscribe is true.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/706c59ab25d1724882d476ad8cea9ebb/href">https://medium.com/media/706c59ab25d1724882d476ad8cea9ebb/href</a></iframe><figure><img alt="" src="https://cdn-images-1.medium.com/max/704/1*kykbTnIEFrBYdeA-vmbDWQ.gif" /><figcaption>Subscribe validation in action.</figcaption></figure><h4>Duplicate constraint validation without a database query</h4><p>Use case: you have a DB constraint UNIQUE(field1 + field2), but you want
to validate this before reaching the DB level (and getting a store exception). Your request accepts multiple entities to store at once, and the payload itself can contain duplicates.</p><p>Here’s an example DTO:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/35ff506984c54eabb200aa4198994d5f/href">https://medium.com/media/35ff506984c54eabb200aa4198994d5f/href</a></iframe><p>And with validation:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/ec219ccf2115cb20467878799bef8bff/href">https://medium.com/media/ec219ccf2115cb20467878799bef8bff/href</a></iframe><figure><img alt="" src="https://cdn-images-1.medium.com/max/704/1*BvpIikOsdBciiIgTPJSrSg.gif" /><figcaption>No duplicate user validation in action.</figcaption></figure><h4>Date range constraints relative to the request</h4><p>Use case: a filter with start and end dates. You expect the end date to be later than the start date.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/a0fcd9027c7f514b0e95cb1cb7d726f9/href">https://medium.com/media/a0fcd9027c7f514b0e95cb1cb7d726f9/href</a></iframe><p>With validation constraints:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/8c5536e01d03d996563e4bad1d7c781b/href">https://medium.com/media/8c5536e01d03d996563e4bad1d7c781b/href</a></iframe><figure><img alt="" src="https://cdn-images-1.medium.com/max/704/1*2LCEU-G8S0-xXwSWNDsenw.gif" /><figcaption>Date range validation in action.</figcaption></figure><h3>Conclusion</h3><p>NestJS and class-validator together play well for request validation.
They can cover everything from the simplest to the most complex validation scenarios.</p><p>Another good point is that, on failed validation, you get a standard error response structure you don’t have to code yourself — it’s all handled by NestJS.</p><p>The repository with the test code is here: <a href="https://github.com/dkhorev/validating-complex-requests-with-nestjs">https://github.com/dkhorev/validating-complex-requests-with-nestjs</a></p><p>I hope this was helpful. Good luck, and happy engineering!</p><hr><p><a href="https://medium.com/better-programming/validating-complex-requests-with-nestjs-a-practical-example-b55c287f7c99">Validating Complex Requests With Nest.js</a> was originally published in <a href="https://betterprogramming.pub">Better Programming</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>