<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Dawid Makowski on Medium]]></title>
        <description><![CDATA[Stories by Dawid Makowski on Medium]]></description>
        <link>https://medium.com/@makowskid?source=rss-a0078533f9fe------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*o08Ek5hBXUfsXFf4JxISdg.jpeg</url>
            <title>Stories by Dawid Makowski on Medium</title>
            <link>https://medium.com/@makowskid?source=rss-a0078533f9fe------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 16 May 2026 14:51:07 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@makowskid/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[The Cloud Is Just Someone Else’s Computer. Sometimes, That Computer Gets Hit by a Drone.]]></title>
            <link>https://medium.com/@makowskid/the-cloud-is-just-someone-elses-computer-sometimes-that-computer-gets-hit-by-a-drone-3c7b261ab924?source=rss-a0078533f9fe------2</link>
            <guid isPermaLink="false">https://medium.com/p/3c7b261ab924</guid>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[war]]></category>
            <category><![CDATA[drones]]></category>
            <category><![CDATA[iran]]></category>
            <dc:creator><![CDATA[Dawid Makowski]]></dc:creator>
            <pubDate>Tue, 31 Mar 2026 08:53:03 GMT</pubDate>
            <atom:updated>2026-03-31T09:00:54.502Z</atom:updated>
<content:encoded><![CDATA[<h3>When “Multi-Region Strategy” Means “Outrunning a Military Conflict”</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*bQWWA7EPVcR7406YdPqx6Q.png" /></figure><p>So last week a military drone blew up the AWS data center where my customer’s platform runs. The platform serves millions of users across seven countries. I had to spend about a week moving everything from Bahrain to Europe. By hand. Because every single automated migration tool was also broken. Because, you know, the drones.</p><p>I run a software consultancy. I’ve been in tech long enough to have planned for almost every disaster imaginable. Floods, earthquakes, ransomware, that one guy who drops the production database on a Friday afternoon. “Military drone strike on your cloud provider” was never on the list.</p><p>And Bahrain is not an isolated case. Right now, data centers in more than ten countries are being targeted or threatened by either Iranian or russian drones. This isn’t a regional incident. It’s a global pattern.</p><p>And yet, here we are. Welcome to DevOps in 2026.</p><h3>Disaster Recovery Used to Mean Hurricanes. Now It Means Drones.</h3><p>If you’ve worked in infrastructure long enough, you’ve imagined the disaster scenarios. An earthquake takes out a data center in Tokyo. A hurricane floods a facility in Virginia. Maybe a biblical-scale power outage somewhere in Texas (actually, that one happens pretty regularly). You build for resilience, you plan your failovers, you sleep slightly less terribly at night.</p><p>And I don’t say “earthquake” lightly. Exactly a year ago, my wife and I were on the top floor of our skyscraper condo in Bangkok when a 7.7 magnitude earthquake hit. One second I was pushing a commit. The next second I was crawling on the floor. The building was swaying two meters to each side, and water from the rooftop pool came crashing into our living room. I still get flashbacks from that. 
So yes, I understand natural disasters on a very personal, visceral level. I expected those to be the thing that would eventually force me to move servers under pressure.</p><p>What I never rehearsed was: “Your entire AWS region is down because a military drone hit all availability zones in Bahrain.”</p><p>Yet here we are.</p><p>In early March, Iranian drones struck multiple AWS facilities across the UAE and Bahrain. This wasn’t some theoretical threat model from a security conference whiteboard. This was the first confirmed military attack on a major hyperscale cloud provider’s infrastructure. Banking apps went down. Payment systems collapsed. Delivery platforms across the Gulf went dark. And somewhere in Thailand, my phone started buzzing with messages from a very worried customer in Saudi Arabia.</p><h3>There’s No Terraform Module for Surviving a War Zone</h3><p>Here’s what you need to understand about the week that followed: every single automated migration tool AWS provides was broken. CloudWatch, the thing that tells you if your servers are even alive? Gone. RDS snapshots, the thing you use to back up databases before you touch anything? Unavailable. Cross-region transfer? Dead. AMI copies? Nope.</p><p>It was like showing up to a house fire and discovering that not only is your fire truck empty, but someone also stole the hydrant.</p><p>So I did what any reasonable engineer would do. I rebuilt multiple production environments from scratch, on bare Linux images, in Europe. By hand. For a platform serving millions of users across seven countries. I wrote custom scripts to export, compress, and transfer everything over the public internet (because AWS’s own internal backbone between regions was also down). I wrote manual rescue scripts for files that kept failing for days with InternalError. I worked nights because that was often the only window where platform traffic was low enough to safely verify everything.</p><p>One week of controlled chaos. 
And by the end of it, the entire platform was running smoothly from Europe, as if nothing had happened.</p><p>But everything had happened.</p><h3>We’re a Software Company. Why Do We Keep Running From Wars?</h3><p>I could tell this story as a purely technical narrative. Here’s the architecture, here’s the migration plan, here’s the clever script that saved the day. But that would miss the point entirely.</p><p>Because here’s what my day-to-day actually looks like:</p><p>I run a small tech consultancy. We build custom software. We manage cloud infrastructure. We automate businesses with AI workflows. Very normal stuff. And yet somehow, every single person on my team has been touched by war. Not metaphorically. Literally.</p><p>I live in Thailand, which recently had skirmishes with Cambodia along the border. My Iranian engineer had to flee Iran with his entire family. One of my coworkers lives in Ukraine, literally in a war zone, delivering code between power cuts because the grid keeps getting hit by Iranian-designed drones. A couple of months ago he went to an immigration office across the border and couldn’t come back for days because russians bombed the only bridge on his route home. Another colleague had to evacuate Ukraine with his whole family.</p><p>We write code and configure servers. We’re not defense contractors. We’re not geopolitical analysts. We’re developers who just want to ship clean code and go home.</p><p>And yet, every week, somewhere on this planet, a conflict reaches through the internet cables and grabs us by the collar.</p><h3>The Strangest Plot Twist of 2026</h3><p>And now, in what might be the most unexpected geopolitical crossover episode of the decade: Ukraine is protecting Saudi skies.</p><p>Let that sink in for a second. 
The country that has been fighting for its own survival since 2022, that has become the world’s foremost expert on shooting down drones because it had no choice, has just signed defense cooperation agreements with Saudi Arabia, Qatar, and the UAE. Over 200 Ukrainian counter-drone specialists are now deployed across the Gulf, helping defend the very region where my customer’s servers used to live.</p><p>The same drones that forced me to migrate infrastructure out of Bahrain? Ukraine knows those drones intimately. They’ve been dealing with their Iranian-made cousins, the Shaheds, for years.</p><p>So now the country of my colleague who codes between blackouts is also the country protecting the airspace above my customer’s business. If you wrote this as fiction, your editor would tell you it’s too on the nose.</p><h3>Who Else Is Living This?</h3><p>I can’t be the only one. There must be thousands of engineers, sysadmins, CTOs, and DevOps folks out there who have spent the last few years making decisions that no technical manual covers. Moving workloads because of missiles. Rerouting traffic because of sanctions. Keeping systems alive through infrastructure that’s being actively targeted.</p><p>If you’ve had to migrate production systems because of armed conflict, I’d love to hear your story.</p><h3>The New Normal (Which Is Not Normal At All)</h3><p>Twenty years ago, your biggest infrastructure worry was a hard drive failing or a router dropping packets. Ten years ago, it was maybe a ransomware attack. Today, it’s a state-sponsored drone strike on your cloud provider’s physical data center.</p><p>We’ve entered an era where “disaster recovery” needs to account for actual disasters of the military kind. Where your multi-region strategy isn’t just about latency and compliance, it’s about geopolitical risk assessment.</p><p>The conflicts we see on the news aren’t happening “over there” anymore. 
They’re happening inside our dashboards, our uptime monitors, our incident channels. Every single one of us in tech is connected to these events whether we like it or not.</p><p>The world got very small, and very complicated, very fast.</p><p><em>Originally published at </em><a href="https://dawidmakowski.com/en/2026/03/the-cloud-is-just-someone-elses-computer-sometimes-that-computer-gets-hit-by-a-drone/"><em>https://dawidmakowski.com</em></a><em> on March 31, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3c7b261ab924" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[You Just Gave OpenClaw the Keys to Your Entire Digital Life — on a VPS Server You Don’t Know How to…]]></title>
            <link>https://medium.com/@makowskid/you-just-gave-openclaw-the-keys-to-your-entire-digital-life-on-a-vps-server-you-dont-know-how-to-b31f59b18579?source=rss-a0078533f9fe------2</link>
            <guid isPermaLink="false">https://medium.com/p/b31f59b18579</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[openclaw]]></category>
            <category><![CDATA[ai-agent]]></category>
            <dc:creator><![CDATA[Dawid Makowski]]></dc:creator>
            <pubDate>Tue, 24 Feb 2026 15:05:22 GMT</pubDate>
            <atom:updated>2026-02-24T15:07:20.930Z</atom:updated>
            <content:encoded><![CDATA[<h3>You Just Gave OpenClaw the Keys to Your Entire Digital Life — on a VPS Server You Don’t Know How to Secure</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/1*4caXnY9xhtxTR92NZahosA.png" /><figcaption>chatgpt</figcaption></figure><p>You Put OpenClaw on a VPS. It Has Access to Everything. You Secured None of It. Let’s Fix That in 30 Minutes.</p><p><em>This guide was written for the brave souls who saw the OpenClaw hype train, jumped aboard, spun up their first VPS, and then had the terrifying realization that they now need to learn Linux security. You’re going to be fine. Probably.</em></p><p>Look, I get it. You saw the hype. You saw the tweets. You saw some guy on Reddit say OpenClaw changed his life, and now you’re sitting there at 2 AM with your very first VPS, a fresh install of <a href="https://openclaw.ai/">OpenClaw</a>, and the sudden realization that you just handed an AI assistant the keys to your email, your calendar, your Google Drive, your private documents, and basically your entire digital soul — all running on a server you’ve never secured before because, well, this is your first server.</p><p>Now, let’s be fair: a fresh Ubuntu installation isn’t actually a house with no doors. Ubuntu ships with sensible defaults — no unnecessary open ports, no sketchy services running, SSH with reasonable settings. Credit where credit is due. <strong>But here’s the problem:</strong> you’re not running a fresh Ubuntu installation anymore. You’re running OpenClaw on top of it. With plugins. And skills. And extensions. And API keys to everything you own.</p><p>Here’s a fun fact to help you sleep tonight: every VPS that connects to the internet gets fully port-scanned by automated bots within <strong>10 to 20 minutes</strong>. Not hours. Not days. <em>Minutes.</em></p><p>But with OpenClaw and its ecosystem of plugins exposing additional services and APIs? 
Now you’re <em>interesting.</em> And on the internet, you do not want to be interesting.</p><p>And look — I’m not here to trash OpenClaw. It’s genuinely cool. The AI-assistant-on-your-own-server dream is alive and well. But let’s be honest with ourselves for a moment: OpenClaw, plus all of its plugins, skills, and extensions, is not exactly what security researchers would call “airtight.” It’s more what they’d call “a fun afternoon.” The platform is young, moving fast, and the attack surface grows with every skill you install. The real risk isn’t Ubuntu’s defaults — it’s everything you’re bolting on top of them.</p><p>So since we can’t fix OpenClaw’s security overnight, let’s make damn sure that the server underneath it is locked down tight, so that even if someone finds a vulnerability in a plugin, they hit a brick wall instead of a buffet.</p><p><strong>The good news?</strong> I recommend Ubuntu for your VPS, and I’m going to walk you through this whole thing step by step.</p><p><strong>Why Ubuntu?</strong> Because it has a massive community, a mountain of security tools, and — this is the important part — any non-technical noob can harden it in about 30 minutes with proper guidance.</p><p>Let’s go. And please, for the love of everything, <strong>don’t close your terminal until I tell you to.</strong></p><h3>Step 0 — Non-root user</h3><p><strong>Before we do anything else</strong> — if you’re still logged in as root like some kind of digital cowboy, we need to fix that immediately. Root is the god account. It can do anything, break anything, and delete anything, including itself.</p><p>So let’s create a proper user and give it sudo powers:</p><pre>adduser ubuntu
usermod -aG sudo ubuntu</pre><p>Now, because typing your password every time you run sudo gets old approximately 4 seconds after the first time, let&#39;s set up passwordless sudo. 
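<p>A quick aside (the drop-in approach and the 90-ubuntu filename here are my own suggestion, not part of the original steps): rather than editing the main sudoers file, the rule granting passwordless sudo can live in its own file under /etc/sudoers.d/, which is easier to script and to undo later:</p><pre># /etc/sudoers.d/90-ubuntu (suggested drop-in; grants the &#39;ubuntu&#39; user passwordless sudo)
ubuntu ALL=(ALL) NOPASSWD:ALL</pre><p>Whichever route you take, validate the file with sudo visudo -cf /etc/sudoers.d/90-ubuntu before logging out, because a syntax error in sudoers can lock you out of sudo entirely. Drop-in files should be owned by root with permissions 0440.</p>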
Run visudo and add this line at the very bottom:</p><pre>ubuntu ALL=(ALL) NOPASSWD:ALL</pre><p>From this point on, you do everything as ubuntu and use sudo when you need elevated privileges. Think of root as the emergency fire axe behind glass - it&#39;s there if you need it, but you shouldn&#39;t be casually swinging it around on a Tuesday afternoon.</p><p>Now copy your SSH key to the new user (ssh-copy-id or manually paste it into /home/ubuntu/.ssh/authorized_keys), log in as ubuntu in a new terminal to make sure it works, and <strong>never log in as root again.</strong></p><h3>Step 1: Set Your Timezone</h3><p>Before we do anything dramatic, let’s make sure your server knows what time it is. This sounds trivial, but accurate timestamps in your logs are the difference between “I can see exactly when someone broke in” and “something happened… at some point…”</p><pre>dpkg-reconfigure tzdata</pre><p>Pick your timezone from the menu. It’s interactive.</p><h3>Step 2: Update Everything</h3><p>Your server shipped with software that was already outdated by the time you clicked “Deploy.” Let’s fix that.</p><pre>apt update &amp;&amp; apt upgrade -y</pre><p>This updates all your packages to their latest versions, patching known vulnerabilities. Think of it as putting on pants before leaving the house. Bare minimum.</p><p>Now, because we both know you’re going to forget to do this regularly (I know you, and I love you, but I know you), let’s set up <strong>automatic security updates</strong>:</p><pre>apt install unattended-upgrades -y
dpkg-reconfigure --priority=low unattended-upgrades</pre><p>This configures your server to automatically install critical security patches without you having to remember. It’s like hiring a tiny robot butler whose only job is to lock the doors you keep leaving open.</p><h3>Step 3: Set Up Ubuntu Pro</h3><p>Ubuntu Pro gives you expanded security maintenance, kernel live patching, and compliance tools. 
And here’s the kicker — <strong>it’s free for up to 5 machines.</strong></p><p>Go to <a href="https://ubuntu.com/pro">ubuntu.com/pro</a>, grab your token, and attach it:</p><pre>pro attach YOUR_TOKEN_HERE</pre><p>This extends your security coverage to thousands of additional packages. It’s like getting the extended warranty, except it actually does something.</p><h3>Step 4: Lock Down SSH</h3><p>SSH is how you talk to your server. It’s also how <em>everyone else</em> tries to talk to your server. By default, it’s running on port 22, which is the first port every bot on the internet checks. That’s like hiding your house key under the doormat — the one place literally everyone looks first.</p><p>Edit your SSH config:</p><pre>nano /etc/ssh/sshd_config</pre><p>Here’s what your config should look like. I’ll explain each line, because I respect you and your journey (note that sshd_config doesn’t allow comments on the same line as a directive, so each explanation gets its own line):</p><pre># Move SSH off the default port. Not foolproof, but stops 90% of really lazy scanners.
Port 55222
# You get 2 minutes to authenticate. After that, goodbye.
LoginGraceTime 2m
# NOBODY logs in as root. Ever. Not even you. Especially you.
PermitRootLogin no
# 5 wrong passwords and we hang up on you. Rude? Maybe. Secure? Yes.
MaxAuthTries 5
# No passwords. Period. Keys only. Passwords are the cargo shorts of security.
PasswordAuthentication no
# Just... no. Come on.
PermitEmptyPasswords no
# Only the &#39;ubuntu&#39; user can log in. Everyone else can go home.
AllowUsers ubuntu
# No graphical forwarding. This is a server, not a gaming PC.
X11Forwarding no
# Don&#39;t let users set environment variables through SSH. Trust issues? You bet.
PermitUserEnvironment no
# No SSH agent forwarding. Reduces the risk of key theft.
AllowAgentForwarding no
# No TCP tunneling through your server. It&#39;s not a VPN.
AllowTcpForwarding no
# Same energy as above. No tunnels.
PermitTunnel no
# Needed for 2FA (we&#39;ll get there, be patient).
KbdInteractiveAuthentication yes
# Also needed for 2FA. The dynamic duo.
ChallengeResponseAuthentication yes
# Key first, then 2FA code. Belt AND suspenders.
AuthenticationMethods publickey,keyboard-interactive
# Use PAM for authentication. Required for Google Authenticator.
UsePAM yes</pre><p><strong>The key takeaways:</strong> We moved the SSH port (so bots can’t find it easily), disabled root login (so even if someone gets in, they’re not god), killed password authentication (keys only, like a VIP club), and set up the groundwork for two-factor authentication.</p><p>Now reload SSH so it actually pays attention to what we just told it:</p><pre>sshd -t &amp;&amp; systemctl reload ssh.service</pre><p>The sshd -t part tests your config first. If there&#39;s a typo, it&#39;ll tell you before you lock yourself out. Because locking yourself out of your own server is a very special kind of pain.</p><p><strong>⚠️ CRITICAL: Do NOT close your current terminal session yet. Open a NEW terminal and test that you can still connect with your new settings before closing anything.</strong></p><pre>ssh -i /path/to/your-key -p 55222 ubuntu@your-server-ip</pre><h3>Step 5: Install Fail2Ban</h3><p>Fail2Ban watches your authentication logs and automatically bans IP addresses that fail to log in too many times. It’s basically a nightclub bouncer for your server.</p><pre>apt install fail2ban -y</pre><p>Out of the box, Fail2Ban will monitor SSH and ban anyone who fails authentication repeatedly. You can customize the jail settings later, but the defaults are already pretty solid for keeping the riff-raff out.</p><p>Think of it this way: Step 4 made it harder to get in. Step 5 makes sure that anyone who keeps trying gets permanently shown the door.</p><h3>Step 6: Set Up Two-Factor Authentication</h3><p>This is the big one. 
This is where we go from “pretty secure” to “okay, now I can actually sleep at night.”</p><p>Two-factor authentication means that even if someone somehow gets your SSH key (it happens — laptops get stolen, backups get leaked, your cat walks across your keyboard and emails it to someone), they STILL can’t get in without the 6-digit code from your phone.</p><p><strong>Install Google Authenticator:</strong></p><pre>apt install libpam-google-authenticator -y</pre><p><strong>Switch to your ubuntu user and run the setup:</strong></p><pre>su - ubuntu
google-authenticator</pre><p>It’ll ask you some questions. Here are the correct answers (you’re welcome):</p><ul><li>Time-based tokens? <strong>Yes.</strong></li><li>Update your ~/.google_authenticator file? <strong>Yes.</strong></li><li>Disallow multiple uses of the same token? <strong>Yes.</strong></li><li>Increase the time window to tolerate clock skew? <strong>No.</strong></li><li>Enable rate-limiting? <strong>Yes.</strong></li></ul><p>It’ll show you a QR code. Scan it with Google Authenticator, Authy, or whatever TOTP app you prefer. <strong>And for the love of all that is holy, SAVE THE EMERGENCY SCRATCH CODES.</strong><br>Put them in a password manager (important!). They’re your “break glass in case of emergency” codes if you lose your phone.</p><p><strong>Configure PAM</strong> (this tells SSH to actually <em>use</em> the authenticator):</p><pre>sudo nano /etc/pam.d/sshd</pre><p>Add this line <strong>at the very top:</strong></p><pre>auth required pam_google_authenticator.so</pre><p>And <strong>comment out</strong> this line (to prevent a double password prompt, which is annoying and unnecessary):</p><pre># @include common-auth</pre><p><strong>Make sure your SSH config has these lines set correctly</strong> (most of them should already be right from Step 4):</p><pre>KbdInteractiveAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive
UsePAM yes
PasswordAuthentication no</pre><p><strong>Test the config and restart SSH:</strong></p><pre>sshd -t &amp;&amp; systemctl restart ssh</pre><p><strong>Now test in a NEW terminal</strong> (seriously, keep your current session open — are you sensing a pattern here?):</p><pre>ssh -i /path/to/your-key -p 55222 ubuntu@your-server-ip</pre><p>Your login flow 
should now be: <strong>SSH key → TOTP verification code → you’re in.</strong> No password involved. Just your key and your phone. It’s like a secret handshake, but actually secure.</p><h3>Step 7: Monitor Your Logs</h3><p>Congratulations, your server is now significantly more secure than it was 20 minutes ago. But you need to actually check on things occasionally.</p><p><strong>Check your SSH authentication logs:</strong></p><pre>sudo grep sshd /var/log/auth.log</pre><p><strong>For live monitoring</strong> (great for watching scans/attacks happen in real time, which is weirdly entertaining):</p><pre>sudo tail -f /var/log/auth.log</pre><p>What to look for:</p><ul><li><strong>Repeated failed login attempts</strong> — Fail2Ban should catch these, but check anyway</li><li><strong>Login attempts from unfamiliar IP addresses</strong> — If you see IPs you don’t recognize, investigate</li><li><strong>Unknown usernames</strong> — If someone’s trying to log in as “admin” or “test,” that’s a bot</li><li><strong>Successful logins at weird hours</strong> — If you logged in at 3 AM and you were asleep at 3 AM, we have a problem</li></ul><h3>Bonus Round: For the Ambitious</h3><p>If you’ve made it this far and you’re feeling confident (possibly <em>too</em> confident, but I respect the energy), here are two next-level options:</p><h3>Tailscale</h3><p><a href="https://tailscale.com/">Tailscale</a> creates a private mesh VPN between your devices. Once set up, you can access your server through a private network that isn’t exposed to the public internet at all. It’s like having a secret tunnel to your server that only you know about. The setup is shockingly simple for something this powerful.</p><h3>Cloudflare Tunnel</h3><p><a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/">Cloudflare Tunnel</a> lets you expose your OpenClaw instance to the internet without opening ANY inbound ports on your server. Zero. None. 
The server reaches out to Cloudflare, and Cloudflare handles all incoming traffic. It’s like having a P.O. Box for your server — people can send you mail, but they don’t know where you live.</p><p>Both of these are excellent options if you want to take your security from “solid” to “paranoid, but like, in a healthy way.”</p><h3>Final Thoughts</h3><p>If you’re running OpenClaw on a VPS, you’ve put your most private digital life on a server connected to the open internet. Your emails. Your calendar. Your documents. Your credentials. All of it sitting there, protected by whatever security you bothered to set up.</p><p>Is your server now impenetrable? No. Nothing is impenetrable. But you’ve gone from being a soft, delicious target to being the server that’s maybe not worth the effort today when there are millions of easier ones to hit.</p><p>You’ve got this. Probably. I believe in you. Mostly.</p><p><em>Originally published at </em><a href="https://dawidmakowski.com/en/2026/02/you-just-gave-openclaw-the-keys-to-your-entire-digital-life-on-a-vps-server-you-dont-know-how-to-secure/"><em>https://dawidmakowski.com</em></a><em> on February 24, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b31f59b18579" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Your Penetration Test Report Just Landed. Read This Before You Panic.]]></title>
            <link>https://medium.com/@makowskid/your-penetration-test-report-just-landed-read-this-before-you-panic-202663f69a38?source=rss-a0078533f9fe------2</link>
            <guid isPermaLink="false">https://medium.com/p/202663f69a38</guid>
            <category><![CDATA[pentesting]]></category>
            <category><![CDATA[security]]></category>
            <dc:creator><![CDATA[Dawid Makowski]]></dc:creator>
            <pubDate>Tue, 17 Feb 2026 10:27:54 GMT</pubDate>
            <atom:updated>2026-02-17T10:28:10.637Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*P6d1c2ciQQN29ZRo.png" /></figure><p>I’ve watched the same movie play out too many times: a management team receives a penetration testing report, sees a wall of findings with scary-sounding names, and immediately assumes their platform is on fire.</p><p>It’s not on fire. It’s almost never on fire.</p><p>But the report sure makes it <em>look</em> like it is. And that’s the problem I keep running into — not with the platforms, but with how people read these reports.</p><h3>The Translation Problem</h3><p>When you’re a CTO or an external CTO-as-a-Service advisor, part of the job is translating between the world of security tooling and the world of business decision-making. These two worlds speak very different languages. Security tools speak in volumes of automated findings. Business leaders speak in risk, cost, and “should I be worried right now?”</p><p>That gap is where the panic starts.</p><p>Over the years I’ve learned to get ahead of it. Before every VAPT (Vulnerability Assessment and Penetration Testing) cycle, I walk my clients through what to expect from the results — what the findings actually mean, what’s noise, and what deserves real attention. It’s part education, part expectation management, and part gentle reminder that a 200-page PDF full of findings does not mean the sky is falling. Sometimes it just means the scanner was very thorough and a bit too enthusiastic.</p><p>The goal is simple: give non-technical stakeholders the mental framework to read a VAPT report without losing sleep. Because the report is only half of the story. 
The other half — the part that actually matters — is interpreting those findings in the context of <em>your</em> platform, <em>your</em> architecture, and <em>your</em> specific business requirements.</p><h3>The Anatomy of a VAPT Report (For Humans)</h3><p>Here’s what most people don’t realize about penetration testing: the raw output of any engagement is never the final verdict on your security. It’s a starting point for analysis.</p><p>VAPT teams rely on automated scanning tools to generate their initial findings. These tools are designed to cast an absurdly wide net. They flag anything that <em>could</em> theoretically be a concern. And I mean <em>anything</em>. Your OAuth integration with Google? Flagged. Your CDN serving static assets from a different domain? Flagged. A cookie that JavaScript can access because your entire framework was literally designed that way? You better believe that’s flagged. Any open port on the server, even port 80 or 443? Yup, also flagged.</p><p>This isn’t a flaw in the process. It’s how the process works. The tools are doing their job. The question is what happens next.</p><h3>The Quality Gap Nobody Talks About</h3><p>And here’s where it gets interesting.</p><p>Not all VAPT teams are created equal. In fact, there’s a pretty dramatic quality spectrum, and where your team falls on it determines whether you receive a useful, contextualised security assessment or a PDF-shaped anxiety attack.</p><p><strong>Budget-oriented teams</strong> tend to optimise for volume. They run the tools, collect the output, and forward everything to the client with minimal filtering. The result? A report with dozens — sometimes hundreds — of findings, many of which are informational noise or outright false positives. It looks impressive. It fills a lot of pages. 
But it creates exactly the kind of alarm that derails productive conversations about actual security.</p><p>I’ve seen reports where the same exact finding was listed separately for every URL on the platform. Same issue, same root cause, same “vulnerability” — just presented 147 times to make the PDF thicker.</p><p><strong>More experienced teams</strong> — and yes, they typically cost more — invest significant effort in triaging their tool output before presenting it. They separate signal from noise. They tell you what actually matters and why. They cross-reference previous engagement results instead of re-investigating known behaviours from scratch. Their reports are shorter, more accurate, and infinitely more useful. You’re paying for judgment, not just scanning hours.</p><h3>Severity Levels: A Quick Decoder Ring</h3><p>Every VAPT report categorises findings by severity. Here’s the practical translation:</p><p><strong>Critical and High</strong> — Stop what you’re doing and fix these. These represent real, exploitable vulnerabilities. In a well-maintained platform with regular dependency updates, strong authentication, and proper encryption, these should be rare. If your report is full of them, you have a genuine problem. If it has zero, congratulations — that’s the goal.</p><p><strong>Medium and Low</strong> — Read these with a calm mind. They often represent theoretical risks, hardening suggestions, or configuration preferences. Many are informational. Think of them as a security consultant saying “you <em>could</em> also do this” rather than “your house is currently on fire.”</p><p><strong>Informational</strong> — These are diagnostic notes. They describe how your platform behaves. They don’t indicate risk. You can acknowledge them and move on.</p><p>The number of findings in a report tells you almost nothing about how secure your platform is. 
A report with 150 findings and zero criticals is a dramatically better result than one with 5 findings and 2 criticals.</p><h3>False Positives: The Uninvited Guests</h3><p>Every — and I mean <em>every</em> — VAPT engagement produces false positives. These are findings that automated tools flag as potential issues but which, upon analysis, turn out to be expected framework behaviours, design decisions, or artefacts of the cloud infrastructure itself.</p><p>In a recent engagement, we documented over 20 false positives across two reports. The cloud provider’s own security infrastructure was triggering alerts during the scan — the scanning tools were essentially detecting the host’s defence systems and reporting them as application vulnerabilities. That’s like a home inspector flagging your alarm system as a security risk. Technically, something happened. Practically, it’s the opposite of a problem.</p><h3>Context Is Everything</h3><p>If there’s one thing I want people to take away from this, it’s this: <strong>a VAPT report must always be read in the context of the specific platform it was conducted against.</strong></p><p>Security is not a one-size-fits-all discipline. A finding that represents a genuine vulnerability on one platform could be an intentional design decision on another. Session tokens in URLs? Alarming — unless they’re part of a standard OAuth handshake with a provider like Google or Twitter, in which case they’re temporary, scoped, and exactly where they’re supposed to be. Cross-domain script includes? Suspicious — unless they’re loading Google’s reCAPTCHA or your SSO integration, in which case they’re essential.</p><p>The report is half of the truth. The contextual analysis is the other half. 
Without both, you’re making decisions based on incomplete information — and in my experience, those decisions tend to lean toward unnecessary panic and wasted remediation effort.</p><p>If you have a VAPT cycle coming up, prepare your stakeholders before the report lands. It’ll save you a week of damage-control conversations that didn’t need to happen.</p><p><em>Originally published at </em><a href="https://dawidmakowski.com/en/2026/02/your-penetration-test-report-just-landed-read-this-before-you-panic/"><em>https://dawidmakowski.com</em></a><em> on February 17, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=202663f69a38" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Moltbook — the hottest “social network for AI agents” — got hacked by someone opening a browser…]]></title>
            <link>https://medium.com/@makowskid/moltbook-the-hottest-social-network-for-ai-agents-got-hacked-by-someone-opening-a-browser-65f272c7e005?source=rss-a0078533f9fe------2</link>
            <guid isPermaLink="false">https://medium.com/p/65f272c7e005</guid>
            <category><![CDATA[moltbook]]></category>
            <dc:creator><![CDATA[Dawid Makowski]]></dc:creator>
            <pubDate>Sat, 07 Feb 2026 06:31:27 GMT</pubDate>
            <atom:updated>2026-02-07T06:31:27.237Z</atom:updated>
            <content:encoded><![CDATA[<h3>Moltbook — the hottest “social network for AI agents” — got hacked by someone opening a browser and reading the page source.</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*74R9KVBT8L8L_Gnr" /><figcaption>Source: Wiz Research</figcaption></figure><p>I’ll let that sink in for a moment.</p><p>The platform everyone was losing their minds over last week, the one Andrej Karpathy called “the most incredible sci-fi takeoff-adjacent thing” — just had its entire database cracked open.</p><p>Wiz published a brilliant breakdown of how they did it (link below), and the punchline is almost too good: their advanced hacking technique was looking at the JavaScript. That’s it. That’s the penetration test.</p><p>Here’s what was sitting there, unprotected, like a diary left open to chope a table at the hawker center:<br>- 1.5 million API authentication tokens<br>- 35,000 email addresses<br>- Thousands of private messages between “agents”<br>- Plaintext OpenAI API keys that users shared in DMs (yes, plaintext, in 2026)<br>- Full read AND write access to the entire production database</p><p>The root cause? A Supabase API key hardcoded in client-side JavaScript with zero Row Level Security configured. For non-technical folks: that’s like locking your front door but leaving the key taped to it with a sticky note that says “PLEASE DON’T USE THIS.”</p><p>But wait — it gets better.</p><p>The platform claimed 1.5 million AI agents. The database revealed 17,000 actual humans behind them. That’s an 88:1 ratio. No rate limiting. No verification of whether an “agent” was actually AI or just a guy with a for loop and a free afternoon. The revolutionary AI social network was largely… humans pretending to be bots pretending to be autonomous.</p><p>The creator publicly said he “didn’t write one line of code.” The entire platform was vibe-coded. And look — I love AI-assisted development. I use it daily. 
But there’s a difference between using AI to write code and using AI to replace understanding what your code does.</p><p>Vibe coding without security review is like building a bank out of cardboard because it went up really fast.</p><p>This is not an edge case. This is the pattern. Every few weeks we see another vibe-coded app shipped to production with the security posture of a weekend hackathon project — except this time it’s handling millions of API keys and real user data.</p><p>The lessons aren’t new. They’re embarrassingly old:<br>- Don’t hardcode secrets in frontend JavaScript<br>- Enable Row Level Security if you’re using Supabase (it exists for a reason)<br>- Rate limit account creation<br>- Don’t store credentials in plaintext<br>- If you vibe-code it, at least have someone who understands security review it before launch</p><p>None of this is cutting-edge advice. It’s the basics. It’s chapter one. And somehow we keep skipping it because shipping fast feels more important than shipping safe.</p><p>Fixing vibe-coded apps is literally what my team does for a living. We audit, patch, and harden applications that were built fast but need to be built right. Drop me a DM if you want us to take a look at your app.</p><p>Full technical breakdown by Wiz: <a href="https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys">https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=65f272c7e005" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How Questions Can Win You the Job]]></title>
            <link>https://medium.com/@makowskid/how-questions-can-win-you-the-job-1ae10de26f23?source=rss-a0078533f9fe------2</link>
            <guid isPermaLink="false">https://medium.com/p/1ae10de26f23</guid>
            <category><![CDATA[interview]]></category>
            <category><![CDATA[interview-tips]]></category>
            <category><![CDATA[job-hunting]]></category>
            <category><![CDATA[jobs]]></category>
            <category><![CDATA[interview-questions]]></category>
            <dc:creator><![CDATA[Dawid Makowski]]></dc:creator>
            <pubDate>Wed, 24 Sep 2025 06:15:12 GMT</pubDate>
            <atom:updated>2025-09-24T06:22:10.411Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*mLT6Zn1vE2gqCYfO.jpg" /><figcaption><em>Photo by </em><a href="https://unsplash.com/@miinrad?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash"><em>Mina Rad</em></a><em> on </em><a href="https://unsplash.com/photos/a-man-sitting-at-a-desk-talking-to-a-woman-FAWfiEh096E?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash"><em>Unsplash</em></a></figcaption></figure><p>An interview should never feel like a test. It should feel like a sales call where your questions lead the way.</p><p>If you are a candidate, you need to ask more questions than you are asked. The company is not your examiner. It is your potential customer.</p><p>Think like a salesperson. Build a relationship. Show that you care. And yes, I just had to write this after the recent round of interviews because I really hate boring ones.</p><p><strong>The Painful Pattern of Silence</strong></p><p>I interview 10 to 20 people every month, mostly for technical roles. The most painful pattern is silence. I’m speaking with smart, capable adults who still walk into interviews with a school exam mindset.</p><p>I invite candidates to ask questions at least twice. First after I present the project and the client. Then again towards the end of the conversation. I even spell it out: please, ask me about anything, not just the job.</p><p>And yet, most often I still hear crickets. That is the worst-case scenario for the candidate.</p><p><strong>Why Questions Matter More Than Answers</strong></p><p>The number of questions should be balanced on both sides. That is how you create a real conversation. If the candidate asks more questions than the interviewer, that is even better.</p><p>Most interviewers do not expect to answer more questions than they ask. 
Which is exactly why the candidates who do it stand out.</p><p>Because asking questions is how you leave a lasting positive impression. It is how you sometimes open doors you did not even know were there. Maybe you land the job. Maybe you get recommended for a different one that fits you better.</p><p><strong>So do not just show up ready to answer. Show up ready to ask.</strong></p><p>What is the best question you have ever asked in an interview or wish you had?</p><p><em>Originally published at </em><a href="https://dawidmakowski.com/en/2025/09/how-questions-can-win-you-the-job/"><em>https://dawidmakowski.com</em></a><em> on September 24, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1ae10de26f23" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Vibe Coding Is Fun-But Vibe Refactoring Pays the Bills]]></title>
            <link>https://medium.com/@makowskid/vibe-coding-is-fun-but-vibe-refactoring-pays-the-bills-f547174511e8?source=rss-a0078533f9fe------2</link>
            <guid isPermaLink="false">https://medium.com/p/f547174511e8</guid>
            <category><![CDATA[coding]]></category>
            <category><![CDATA[vibe-coding]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[refactoring]]></category>
            <dc:creator><![CDATA[Dawid Makowski]]></dc:creator>
            <pubDate>Sat, 26 Apr 2025 07:32:58 GMT</pubDate>
            <atom:updated>2025-04-26T07:38:39.367Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*VBC2GweacYMPDcHZ.png" /></figure><p>There’s a lot of hype about <strong>vibe coding</strong> — that moment when caffeine hits, your playlist slaps, and you hammer out code like a jazz drummer sprinting through a solo. It’s exhilarating, but relying on that adrenaline burst is like funding your retirement with scratch-offs.</p><p>So let’s flip the script to something that actually compounds: <strong>vibe refactoring</strong>. Same spontaneous energy, but aimed at shrinking technical debt and sharpening your architecture instead of amping up the commit count.</p><p><a href="https://www.youtube.com/watch?v=M8WdXRuzHvU">https://www.youtube.com/watch?v=M8WdXRuzHvU</a></p><h3>A Quick Setup</h3><p>Block <strong>15–20 minutes</strong> on your calendar. No ticket, no KPI, no “reduce cyclomatic complexity by 13.7%” targets. Just a promise to poke around your codebase with beginner eyes.</p><h3>Step-by-Step (a.k.a. How the Magic Happens)</h3><ol><li><strong>Open the IDE and wander</strong><br>Pretend you cloned this repo yesterday. Warnings and TODOs suddenly look less like background noise and more like neon signs that say <em>“Fix me, champ.”</em> Clear a few of those — you’ll feel lighter already.</li><li><strong>Let the IDE nag you</strong><br>Hover over the yellow squiggles. Remove the unused imports, tame the long methods, rename the variable that’s secretly been haunting you since 2019. 
Each micro-win is a quick dopamine hit.</li><li><strong>Bundle your project’s context</strong><br>Package the relevant files together so large-language models can actually understand your project’s ecosystem instead of hallucinating a new service layer “for funsies.”</li><li><strong>Chat with an LLM</strong><br>Drop a knotty function into your favorite model and ask:<br><em>“Any cleaner way to handle this?”</em> or<br><em>“Spot an N+1 query?”</em> or<br><em>“Got an indexing tip for this desperate JOIN?”</em><br>It’s brutally honest rubber-duck debugging — and it never rolls its eyes.</li><li><strong>Follow the rabbit hole</strong><br>A “quick” fix often blossoms into a <strong>two-hour</strong> code spa. Let it. Today’s detour is tomorrow’s velocity boost.</li></ol><h3>What You’ll Gain (Besides a Self-Esteem Spike)</h3><ul><li><strong>Compounding quality</strong>: Tiny weekly tweaks snowball into major stability over months.</li><li><strong>Faster deployments</strong>: Fewer regressions mean release day stops feeling like Russian roulette.</li><li><strong>Happier teammates</strong>: A clean codebase is onboarding heaven — no more haunted-house tours for new hires.</li><li><strong>Satisfied customers</strong>: Snappier queries and smoother UX, even if they can’t pinpoint why things suddenly <em>feel</em> better.</li></ul><h3>Keep It Light, Keep It Weekly</h3><p>No formal goals, no pressure. Cue your favorite playlist, celebrate micro-wins in Slack (GIFs encouraged), and rotate refactor buddies so fresh eyes stay fresh. This is maintenance rebranded as exploration — curiosity, but profitable.</p><h3>Final Thought</h3><p><strong>Vibe coding</strong> gives you adrenaline.<br><strong>Vibe refactoring</strong> gives you longevity.</p><p>Block the time, wander through the code, and watch the compound interest kick in. 
Do it once and you’ll feel lighter; do it every week and you’ll wonder how you ever shipped without it.</p><p><em>Originally published at </em><a href="https://dawidmakowski.com/en/2025/04/vibe-coding-is-fun-but-vibe-refactoring-pays-the-bills/"><em>https://dawidmakowski.com</em></a><em> on April 26, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f547174511e8" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Improving Software Development, One Tiny Kaizen Step at a Time]]></title>
            <link>https://medium.com/@makowskid/improving-software-development-one-tiny-kaizen-step-at-a-time-16aa76fa6530?source=rss-a0078533f9fe------2</link>
            <guid isPermaLink="false">https://medium.com/p/16aa76fa6530</guid>
            <category><![CDATA[kaizen]]></category>
            <category><![CDATA[quality]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[Dawid Makowski]]></dc:creator>
            <pubDate>Sun, 09 Mar 2025 09:30:29 GMT</pubDate>
            <atom:updated>2025-03-09T09:58:16.085Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/500/0*6nDlRdVNxTmOw4RN.jpg" /><figcaption><em>Photo by </em><a href="https://unsplash.com/@dylandgillis?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash"><em>Dylan Gillis</em></a><em> on </em><a href="https://unsplash.com/photos/people-sitting-on-chair-in-front-of-table-while-holding-pens-during-daytime-KdeqA3aTnBY?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash"><em>Unsplash</em></a></figcaption></figure><p>More than a decade ago, I had the opportunity to take a <a href="https://en.wikipedia.org/wiki/Total_quality_management">Total Quality Management (TQM)</a> course by none other than <a href="https://pl.wikipedia.org/wiki/Andrzej_Blikle">Andrzej Blikle</a> at <a href="https://www.asbiro.pl/">ASBIRO</a>, a unique Polish educational institution where only entrepreneurs teach entrepreneurship.</p><p>Since then, I’ve been testing and refining various TQM techniques with my teams, especially in the startup world. I’m sharing some of my observations and ideas I’ve implemented, with a focus on weekly Kaizen-style sessions, which have proven effective in a variety of real-life scenarios within our teams.</p><p>Andrzej Blikle is a prominent Polish entrepreneur, well-known for his work in quality management and as the leader of the Blikle family confectionery business, which is famous for creating iconic and irresistible Polish “<a href="https://blikle.pl/">A.Blikle donuts</a>” (pączki). He has expanded the company while preserving its legacy, transforming it into a modern, quality-driven organization.</p><p>Total Quality Management (TQM) is a comprehensive, organization-wide approach focused on continuous improvement and long-term success, involving all members of the organization. 
Kaizen, on the other hand, is a specific technique within TQM that emphasizes making small, incremental improvements on a daily basis to enhance processes and eliminate inefficiencies. The key takeaway was that for any organization to thrive, it needs to evolve continuously with the contributions of all its members (management included).</p><p>In the fast-paced world of tech startups, where ideas flood in and deadlines loom, it’s easy to lose sight of the bigger picture. When you’re buried in daily tasks like coding and designing, managing large-scale projects can feel overwhelming. With product teams often numbering in the dozens, it’s essential to have methods in place to maintain both product quality and an efficient software production process.</p><p>It wasn’t until about 10 years ago that I introduced weekly Kaizen-format meetings to all my teams. I wanted something simple but impactful. These meetings became a cornerstone of our process improvement journey. Why? Because they allow for consistent feedback and provide a platform for team members to raise issues that, if left unchecked, might snowball into larger problems.</p><p>Here’s how it works: every week, I bring together all members of my product and tech teams. <strong>No silos. </strong>The last day of the week usually works best. Developers, designers, product managers — everyone. The goal is to give them a space to vent, to share their frustrations, and to point out obstacles that hinder progress. And no, it’s not a “let’s complain about the boss” session. We focus on real, actionable issues.</p><p>A simple question like <strong><em>“What are you complaining about this week?”</em></strong> opens the door for all kinds of insights. It’s the best way to start a conversation because it allows everyone to express their concerns, whether it’s about the process, communication issues, or even something as trivial as a lack of coffee in the office. 
I literally used to remind people every week: “We have a kaizen session on Friday, each of you — bring your complaints, please!”</p><p>Why not just ask for improvement ideas first, right? Well, here’s where it gets a bit counterintuitive. Asking people to propose improvements can often lead to more question marks than solutions. From my experience, the ideas you get from this question tend to be “nice-to-haves” rather than actual problems. It’s only after you’ve had a few months of these sessions with your team, and everyone is on the same page, that you can start throwing this question around. By then, people are already coming up with improvement ideas on their own — you don’t even need to ask.</p><p>At first, convincing people to participate in candid discussions can be challenging, especially in cultures where openly pointing out problems, particularly with management, may feel uncomfortable. The key is to emphasize that these discussions aren’t about criticizing individuals but about improving processes. By focusing on solutions, the atmosphere becomes one of constructive feedback rather than blame. The “<strong><em>why</em></strong>” behind these sessions needs to be clearly explained from the start.</p><p>Once we’ve discussed the issues, we assign tasks to be resolved before the next meeting. If a developer struggles with a tool, we’ll make sure they have the resources to get it right. If communication within the team is lacking, we’ll work on new strategies. The important thing is to keep the meetings action-oriented, with clear follow-up on the solutions.</p><p>In the beginning, it was tough to convince people to share openly, but now, it’s an essential part of our culture. These meetings provide an opportunity for the team to breathe, to voice concerns, and, most importantly, to take ownership of the problems and their solutions. 
This simple but effective approach has led to long-term benefits, not only in productivity but also in team morale and cohesion.</p><p>So, why do I swear by Kaizen meetings? They keep the momentum going. In a world full of deadlines and constant pressure, it’s easy to lose sight of the bigger picture. These small, regular adjustments keep us aligned, help us solve problems before they become roadblocks, and ensure we’re constantly improving. And let’s face it, a little bit of complaining now and then is the perfect way to stay connected and keep things moving forward.</p><p>One unexpected benefit of these meetings is that they help raise awareness and address technical debt over time. By encouraging regular feedback and tackling issues as they arise, we can break down and resolve accumulated tech debt into smaller, more manageable pieces.</p><p>As a CTO, I’ve found that Kaizen meetings are the best way to foster true ownership within each team member. When people feel they have a direct impact on the business, product and production process, they start to recognize that everyone faces challenges and that there’s always room to improve. These meetings send a clear message: as a team, we have the power to make our work-life less miserable every week.</p><p><strong>What steps are you taking to improve your team’s production process? I’d love to hear your approach.</strong></p><p><em>Originally published at </em><a href="https://dawidmakowski.com/en/2025/03/improving-software-development-one-tiny-kaizen-step-at-a-time/"><em>https://dawidmakowski.com</em></a><em> on March 9, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=16aa76fa6530" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Simplicity, Microservices & Other Lies]]></title>
            <link>https://medium.com/@makowskid/simplicity-microservices-other-lies-a599582deb44?source=rss-a0078533f9fe------2</link>
            <guid isPermaLink="false">https://medium.com/p/a599582deb44</guid>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[coding]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[careers]]></category>
            <category><![CDATA[programming]]></category>
            <dc:creator><![CDATA[Dawid Makowski]]></dc:creator>
            <pubDate>Wed, 12 Feb 2025 06:41:07 GMT</pubDate>
            <atom:updated>2025-02-12T07:16:36.458Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/500/0*Lfm79aA48ydlD3hv.jpg" /><figcaption><em>Photo by </em><a href="https://unsplash.com/@ffstop?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash"><em>Fotis Fotopoulos</em></a><em> on </em><a href="https://unsplash.com/photos/black-remote-control-on-red-table-6sAl6aQ4OWI?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash"><em>Unsplash</em></a></figcaption></figure><p>Software development has a way of humbling you.</p><p>Early in my career, I was <em>certain</em> about a lot of things — best practices, the “right” way to code, how teams <em>should</em> work. Over time, I realized that many of those “truths” weren’t universal. Some were outright wrong. And some — well, present-me now finds hilarious. The more code I write — and the more late-night debugging sessions I survive — the more I see how few absolutes exist in this field.</p><p>Chris Kiehl recently wrote a fantastic blog post reflecting on <strong>10 years in software development</strong>, sharing lessons that resonated deeply with me. After <strong>25+ years</strong> of hands-on coding, I can say that these lessons become <strong>even more true</strong> over time. Consider this article my chance to highlight the big mindset shifts that creep in after decades in the trenches — plus a few comedic jabs I wish someone had told me earlier.</p><p>Check Chris’ article here: <a href="https://chriskiehl.com/article/thoughts-after-10-years">https://chriskiehl.com/article/thoughts-after-10-years</a></p><h3>1. Simplicity is a full-time job</h3><p>Everyone agrees that “good code is simple.” Turns out, <em>keeping</em> it simple takes <strong>constant effort</strong>. Leave code alone for a month, and it starts sprouting complexities like weeds. If your code is an over-engineered labyrinth, get the lawnmower and trim it down, and do it weekly.</p><h3>2. 
Complexity isn’t a flex</h3><p>There was a time I took pride in deciphering arcane systems. Now, I take pride in making them <strong>disappear</strong>. If your codebase requires a 12-hour onboarding session complete with flowcharts that look like cryptic treasure maps, you haven’t built something genius; you’ve built something that keeps new hires up at night.</p><h3>3. Typed languages aren’t a luxury — they’re essential</h3><p>Once, I thought dynamic typing was freedom: no constraints, just vibes. Then came 3 AM bug hunts caused by a single mis-typed variable that turned everything into confetti.</p><p>Types aren’t a cage; they’re the guardrails that keep a drowsy driver from veering off a cliff. Especially on teams with varying skill levels, type systems are a lifeline. Spend time up front clarifying function inputs and outputs, and you’ll avoid rummaging through logs at ungodly hours. Especially when you keep hearing the evergreen from all sides: “our customer does not allow us extra time for unit tests…”.</p><h3>4. Java is actually great… because it’s boring</h3><p>Any language that <em>just works</em>, has stable tooling, and doesn’t feel compelled to reinvent itself every six months is an asset, not a relic.</p><p>Give me “boring and reliable” over “exciting and unpredictable” if we’re running a business. Java, C#, Go — even PHP — prove that reliability can generate real value. Not everything has to be the new shiny toy. In production environments, <strong>stability &gt; hype</strong> every time.</p><h3>5. 
Most programming happens before you write a single line of code</h3><p>Real engineering happens <em>before</em> you crack open your IDE:</p><ul><li>Understanding the actual problem</li><li>Scoping the requirements and data models</li><li>Designing a clean, maintainable solution</li><li>Analysing how new changes impact 3rd party integrations</li><li>Listing out all the edge cases, etc.</li><li>Drafting a minimal PRD that will later evolve into actual documentation</li></ul><p>Sitting down to code is the last step. Junior devs often jump straight into coding, only to discover midway that they misunderstood the spec. If you plan like a pro, you can code like one, too. One of my team’s favourite proverbs used to be “<em>one week of coding can save you 2 hours of planning</em>”.</p><h3>6. Frontend development is a Kafkaesque nightmare</h3><p>I used to love frontend. Over the years, it became more like stepping into a haunted mansion where the floor plan changes daily.</p><p>New frameworks, shifting best practices, impossible state management battles — some folks thrive on that adrenaline. I realized I like my sanity intact. If you’re a frontend whiz, hats off to you, but keep an eye on that floor. It might vanish tomorrow.</p><h3>7. ORMs are not the magic solution. Just write the SQL. (But also… use the right tool.)</h3><p>ORMs seem like a time-saver — until they produce a query so monstrous it stops your database in its tracks. Yes, SQL is an old language, but it’s powerful for a reason. Learn it, use it, and you’ll avoid half the performance nightmares that come with one-size-fits-all abstractions.</p><p>Still, it’s not a black-and-white debate:</p><ul><li>Small-to-mid projects? <strong>ActiveRecord-style ORMs</strong> are fine.</li><li>Large-scale apps? <strong>Query builders &amp; raw SQL</strong> are your friend.</li><li>Reporting or analytics? 
ORMs usually create queries that could put your DB on life support.</li></ul><p>The key is knowing the right tool for each scenario.</p><h3>8. Good management is invaluable</h3><p>Bad management is far too common in this industry. But a truly good manager? That can mean the difference between a thriving team and early burnout.</p><p>A great manager removes roadblocks, fights off scope creep, and helps you grow instead of piling on tasks with no guidance. It’s not just about being technically sharp; it’s about emotional intelligence and shielding your team from chaos.</p><p><strong>The main responsibility of a tech manager is to keep everyone, especially top management, grounded in reality.</strong></p><h3>9. The query planner is a cruel mistress</h3><p>You may think you know how your database works — until you look at an execution plan that betrays all your assumptions. Indexes are fantastic until they aren’t. Queries are blazing fast until they’re not.</p><p>Database tuning is an art form, best approached with patience, a willingness to read logs, and possibly a stiff drink. Or three.</p><h3>10. Code quality matters more than speed. (Not always.)</h3><p>Early in my career, I obsessed over writing “beautiful” code. But in a real-world environment, deadlines, business demands, and trade-offs run the show.</p><p>Sometimes, shipping on time is more important than achieving conceptual purity. There’s no point polishing code that never makes it into production. The trick is knowing <strong>when</strong> to make compromises and <strong>how</strong> to keep track of the debt you’ll inevitably have to pay back.</p><h3>11. Green tests don’t mean working software</h3><p>100% test coverage can give you a false sense of security. I’ve seen entire test suites pass while the real application burned quietly in the corner.</p><p>Tests are only as good as the scenarios you write. A passing test suite that never checks critical business logic is a glorified thumbs-up. 
Always aim for the right tests, not just a high coverage number. <strong>At a minimum, test the real integration points — especially with third-party systems.</strong></p><h3>12. Git history should be useful, not perfect</h3><p>When I started, I wanted a Git history that looked like a curated museum exhibit. Then reality (and a few weekend crunch sessions) happened.</p><p>The truth is, clarity beats aesthetics. Over-rebasing can wipe out the breadcrumb trail that helps you debug. A perfectly linear commit tree won’t matter if it takes you hours to figure out where a breaking change originated.</p><p>Keep it readable, keep it useful — leave the perfectionism at the door. GitFlow is great, but make sure your team is big enough to actually need it. Smaller teams usually need a dedicated “mini-gitflow” version of the process.</p><h3>13. “Best practices” are usually just “best practices for someone else”</h3><p>Younger devs sometimes assume that best practices are holy commandments. Those with a few more battles under their belts know that “best” depends on your team’s skill set, your business constraints, and your performance or scaling needs.</p><p>In other words, Google’s best practice might not help your three-person startup. <strong>Context rules everything.</strong></p><h3>14. Scaling is a problem you wish you had</h3><p>I used to over-engineer everything, anxiously anticipating the moment a thousand new users would crash my app. Turns out, most projects never see that kind of user load.</p><p>If you don’t have an actual scaling challenge yet, don’t spend your life building a fancy architecture for a party that might never happen. Get customers first, and then scale when you actually need to. Otherwise, you’re just building a castle in the sky.</p><h3>15. “Rewrite it from scratch” is almost always a bad idea</h3><p>Sure, rewriting your entire codebase sounds refreshing. 
But the reality is you’ll reintroduce old bugs, lose hidden fixes, and burn countless hours rewriting what already (sort of) works. Meanwhile, your boss wonders why features aren’t being delivered.</p><p>Refactor where you can, rewrite only if you must. “Burn it all down” often looks good on paper, but it’s typically a development time sinkhole.</p><p>P.S. This does not apply to abandoned legacy codebases, typically built on prehistoric dependencies, libraries, and frameworks — or the complete lack thereof.</p><h3>16. Microservices aren’t the answer</h3><p>Ah, microservices: the darling of modern architecture. Also the cause of many a meltdown when you realize how much overhead they introduce.</p><p>If you’re not operating at Google-scale, you probably don’t need a microservices labyrinth. A monolith is almost always cheaper, simpler, and easier to maintain. Don’t adopt microservices just because it’s the trendy thing to do.<strong> Build a monolith first, then start modularizing it a year after the first production deployment.</strong></p><h3>17. You don’t need the “latest and greatest” tools</h3><p>Chasing trends can be like sprinting on a hamster wheel.</p><ul><li>React or Vue? Doesn’t matter if your product is still stuck in planning.</li><li>Rust or Go? Doesn’t matter if you can’t hire or train developers for them or if your boss/customer doesn’t provide you with a higher budget for these kinds of positions.</li><li>Kubernetes? You do not need that complexity unless you’re orchestrating dozens of services.</li></ul><p>Pick tools that help you ship now and that your team can realistically manage. The goal is to solve problems, not to collect badges.</p><h3>18. You don’t need a CS degree to be a great developer</h3><p>A CS degree is helpful, sure. But it’s not a prerequisite for excellence. 
Many fantastic devs have non-traditional backgrounds, taught themselves online, or switched careers from something completely different.</p><p>That said, you should still learn fundamental algorithms, data structures, and system design. If you skip formal education, make sure you fill in those gaps.</p><h3>19. Writing good documentation is a superpower</h3><p>Poor documentation can make even the best code borderline useless. The ability to write clear, helpful docs sets you apart-seriously.</p><p>When new developers (or future-you) come on board, well-structured docs can save them days of frustration. If you want job security, become the one who writes (and updates) docs that people can actually follow.</p><h3>20. Most dev jobs are about glue code, not groundbreaking innovation</h3><p>We all dream of building the next big thing, but most dev gigs involve hooking up APIs, gluing together libraries, or debugging library conflicts. That’s not a bad thing-it’s just reality.</p><p>A lot of software engineering is about problem-solving within constraints, not rewriting the rules of computing. Embrace the “glue code” aspect, because it’s what keeps the wheels turning.</p><h3>21. Your biggest productivity boost isn’t a new framework-it’s focus</h3><p>You can pick React, Vue, Svelte, or a random library you found on Reddit. If you’re getting pinged by Slack messages every five minutes, your productivity is still going down the drain.</p><p>Focus is the true superpower. Deep work, minimizing interruptions, small hacks like noise-cancelling headphones, and <strong>focusing on real output over busywork</strong> will improve your dev speed more than any fancy new tech.</p><h3>22. You will never fully catch up with everything. And that’s okay.</h3><p>It’s tempting to try to know every new JavaScript framework or every new Docker-like tool. But there’s simply too much out there. 
Once you accept that, you can focus on learning the fundamentals and the big-picture concepts that apply across technologies.</p><p>Being able to adapt quickly to new tools and understanding the fundamentals is far more valuable than memorizing every corner of the ecosystem.</p><h3>23. Seniority isn’t about knowing more-it’s about making better trade-offs</h3><p>A senior developer isn’t someone who can recite every array method in 15 languages; it’s someone who knows which problems are worth solving.</p><p>Senior devs say no to unnecessary features, push back on unrealistic deadlines, and avoid overengineering. They’re the ones who understand the cost of adding complexity-and they’re comfortable advocating for simpler solutions.</p><p>It’s about <strong>making the right trade-offs</strong> and knowing <strong>which problems are worth solving.</strong></p><h3>24. Debugging is a skill. Logging is an art.</h3><p>Yes, debugging means hunting down the cause of a bug. But experienced devs know that proper logging can prevent a hunt in the first place.</p><ul><li>A single well-placed log statement can save you hours of guesswork.</li><li>Log <strong>the right things</strong> (not just everything).</li><li>Use <strong>structured logs</strong>, not random print statements.</li><li>Make logs <strong>actionable</strong>-timestamps, correlation IDs, and useful context.</li><li>Learn to use tools that process and search through logs fast.</li></ul><p>If your log files read like a jumbled mess, you’re making life harder for yourself (and your teammates). Logging is one of the most overlooked areas in development, often treated as the least important concern rather than as a key element of software quality.</p><h3>25. Senior devs make the team better, not just themselves</h3><p>Early in your career, you might think a “senior” is the one who solo-crushes every difficult task. 
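A quick aside on lesson 24: the logging bullets above boil down to emitting one structured record per event. Here is a minimal, framework-agnostic sketch in Python (the logger name, event names, and fields are invented for illustration):

```python
import json
import logging
import time
import uuid

logger = logging.getLogger("app")  # hypothetical logger name
logger.setLevel(logging.INFO)

def log_event(event, correlation_id, **context):
    """Emit one structured, searchable log record as a single JSON line."""
    record = {
        "ts": time.time(),                 # timestamp: when it happened
        "event": event,                    # what happened, as a stable name
        "correlation_id": correlation_id,  # ties related records together
        **context,                         # the *right* context, not everything
    }
    logger.info(json.dumps(record))
    return record

# Usage: one well-placed, structured line instead of a vague print()
cid = str(uuid.uuid4())
rec = log_event("charge.failed", cid, order_id=42, reason="card_declined")
```

Because every record is valid JSON with consistent keys, log tooling can filter on the event name or follow a single correlation ID across services instead of grepping free-form text.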
But real seniority is about enabling the entire team to thrive.</p><ul><li>Your job is to unblock others, not just yourself.</li><li>Hoarding knowledge makes you a bottleneck, not a genius.</li><li>Mentorship is crucial-if your team isn’t growing, you’re failing.</li></ul><p>A senior dev who invests in mentorship, documentation, and automation elevates the whole group.</p><p>And in 10 years? I’ll probably look back at this post and laugh at today-me, too. That’s the job-constant change, constant growth, and a dash of self-deprecation.</p><p><strong>What’s your biggest dev mindset shift?</strong> Let me know below. I’d love to see how your perspective has evolved over time.</p><p><em>Originally published at </em><a href="https://dawidmakowski.com/en/2025/02/simplicity-microservices-other-lies-we-tell-ourselves/"><em>https://dawidmakowski.com</em></a><em> on February 12, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a599582deb44" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Why Choose SharpAPI for Workflow Automation over LLM model]]></title>
            <link>https://medium.com/@makowskid/why-choose-sharpapi-for-workflow-automation-over-llm-model-9fc177df5fca?source=rss-a0078533f9fe------2</link>
            <guid isPermaLink="false">https://medium.com/p/9fc177df5fca</guid>
            <category><![CDATA[api]]></category>
            <category><![CDATA[automation]]></category>
            <category><![CDATA[workflow]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[Dawid Makowski]]></dc:creator>
            <pubDate>Sat, 07 Dec 2024 14:34:18 GMT</pubDate>
            <atom:updated>2024-12-07T14:34:18.023Z</atom:updated>
            <content:encoded><![CDATA[<h3>Why Choose SharpAPI for Workflow Automation over a Raw LLM Model</h3><p>Dec 7, 2024</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*o3SlioXFX--xhKUq.jpg" /><figcaption><em>Photo by </em><a href="https://unsplash.com/@campaign_creators?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash"><em>Campaign Creators</em></a><em> on </em><a href="https://unsplash.com/photos/man-writing-on-white-board---kQ4tBklJI?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash"><em>Unsplash</em></a></figcaption></figure><p>When it comes to workflow automation, integrating AI into your business can feel like deciding whether to climb a mountain or take the gondola. Sure, both get you to the top, but one is infinitely easier and less sweaty. In the world of AI-driven automation, the gondola is <a href="https://SharpAPI.com/">SharpAPI</a>-the streamlined solution that helps you implement workflow automations faster and more effectively than wrestling with a Large Language Model (LLM) API directly. Let’s break down why <a href="https://SharpAPI.com/">SharpAPI</a> is the smarter, more efficient choice for your automation needs.</p><h3>1. SharpAPI Brings AI to Real-World Workflows</h3><p>When you work directly with an LLM API like OpenAI’s GPT-4, it’s like being handed a box of unassembled LEGO pieces and being told, “Good luck building a spaceship.” Sure, the potential is there, but you need to design, structure, and refine everything yourself.</p><p>SharpAPI, on the other hand, offers <strong>industry-specific workflows</strong> right out of the box. 
It’s preconfigured to handle:</p><ul><li><strong>E-commerce needs:</strong> Automate product categorization, generate descriptions, and translate listings.</li><li><strong>HR workflows:</strong> Parse resumes, create job descriptions, and analyze candidate data.</li><li><strong>Content automation:</strong> Summarize articles, paraphrase text, and analyze reviews.</li><li><strong>Marketing solutions:</strong> Build SEO-friendly content, craft email campaigns, and generate multilingual content.</li></ul><p>With SharpAPI, there’s no need to reinvent the wheel. These workflows are ready to go, saving you time and effort.</p><h3>2. No Fine-Tuning Headaches</h3><p>LLM APIs are incredibly powerful but notoriously generic. To get them to perform a specific task, you need to fine-tune prompts or models-a time-consuming process that requires expertise, trial-and-error, and sometimes a bit of luck. Directly integrating with an LLM API often requires in-depth knowledge of AI. You’ll spend time crafting prompts, managing model parameters, and debugging inconsistent outputs. SharpAPI eliminates that guesswork by providing predefined endpoints designed for specific tasks. These endpoints deliver consistent, reliable results without requiring you to become an AI whisperer.</p><p>For example:</p><ul><li>Need a product categorized into specific categories? SharpAPI’s categorization API gets it right without lengthy prompt engineering.</li><li>Want to summarize a review? SharpAPI delivers clear, concise results-no additional training or tweaking needed.</li></ul><h3>3. SharpAPI Saves You Time and Development Effort</h3><p>Integrating a raw LLM API into your application often means building custom workflows, managing input/output structures, and handling edge cases yourself. 
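To make that concrete, here is a rough sketch of the glue code a raw LLM integration tends to accumulate for even one task. Everything in it is hypothetical: the <code>call_llm</code> stub stands in for a real API client, and the prompt and parsing rules are invented for illustration.

```python
import json

def call_llm(prompt):
    # Hypothetical stand-in for a real LLM API client; a real integration
    # would also handle auth, retries, rate limits, and timeouts.
    return '{"category": "Electronics", "confidence": 0.92}'

def categorize_product(name):
    """All the scaffolding a predefined, task-specific endpoint hides."""
    prompt = (
        "Classify the product into exactly one category. "
        'Respond with JSON like {"category": ..., "confidence": ...}.\n'
        f"Product: {name}"
    )
    raw = call_llm(prompt)
    # Raw model output is not guaranteed to be valid JSON, so every caller
    # needs defensive parsing and a fallback path.
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {"category": "Unknown", "confidence": 0.0}
    # Normalize the shape so downstream code gets a consistent contract.
    return {
        "category": str(data.get("category", "Unknown")),
        "confidence": float(data.get("confidence", 0.0)),
    }

result = categorize_product("Wireless headphones")
```

Multiply that prompt-plus-parsing scaffolding by every task in your workflow, and the appeal of a single predefined endpoint with a consistent output shape becomes obvious.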
With SharpAPI, those workflows are already built and battle-tested.</p><p>SharpAPI provides:</p><ul><li><strong>SDKs and tools</strong>: Plug-and-play SDKs for multiple languages and tech stacks, including PHP, Laravel, Python, JavaScript, and .NET. This means developers can integrate the API into their systems in hours-not weeks.</li><li><strong>Standardized outputs</strong>: Each API delivers consistent formats, saving you from the frustration of normalizing raw LLM outputs yourself.</li><li><strong>Extensive documentation</strong>: Clear guides and examples to help developers get started quickly.</li><li><strong>API monitoring tools</strong>: Easily track job history, rerun failed jobs, and manage quotas directly from the SharpAPI dashboard.</li><li><strong>Dedicated support</strong>: Whether through forums, email, or chat, SharpAPI’s team is there to help.</li></ul><p>In short, SharpAPI does the heavy lifting so your team can focus on building features instead of troubleshooting AI pipelines. It takes just a few minutes to seamlessly integrate SharpAPI’s workflows into your existing software.</p><h3>4. Compliance? Covered.</h3><p>Handling sensitive data in today’s world means walking a tightrope of regulations and security concerns. With <strong>SharpAPI’s data handling and compliance framework</strong>, you can automate workflows without second-guessing your setup:</p><ul><li><strong>GDPR compliance</strong> ensures all personal data is processed securely.</li><li>Policies align with global standards, so your app stays ahead of regulatory challenges.</li><li>End-to-end encryption keeps your data safe and sound.</li></ul><p>This isn’t just a checkbox for SharpAPI-it’s a fundamental part of how the platform operates. No more scrambling to build a compliance setup from scratch, as you might with an LLM API. More on this topic in the <a href="https://sharpapi.com/data-handling-and-compliance">Data Handling and Compliance</a> article.</p><h3>5. 
Self-Improving Workflows</h3><p>SharpAPI isn’t static-it’s always learning and improving. Once integrated, the platform optimizes workflows and updates itself with improved models and data handling without requiring your intervention.</p><p>This “set it and forget it” approach means your workflows keep getting smarter over time, so you can focus on scaling your business instead of maintaining your automation tools.</p><p>While SharpAPI does use powerful models like OpenAI’s GPT-4, it doesn’t stop there. The platform actively tests and integrates new AI models like Claude Sonnet and Gemini to ensure the best results for specific tasks. It’s not just a “use-this-one-model” solution-it’s a dynamic, evolving platform designed to stay ahead of the curve.</p><h3>6. Scalable and Tested for Growth</h3><p>SharpAPI is built to grow with you. Whether you’re running a lean startup or managing an enterprise-level operation, the platform scales seamlessly.</p><p>It is designed for real-world use cases, with scalability built-in:</p><ul><li>Predefined rate limits: SharpAPI ensures fair usage without unpredictable throttling.</li><li>High-performance infrastructure: Its backend is optimized to handle enterprise-level demands without downtime or lag.</li><li>Also available via a wide range of <a href="https://sharpapi.com/en/automation-platforms">API marketplaces</a> with their unified integration capabilities.</li></ul><h3>What Makes SharpAPI Better Than Using LLM APIs Directly?</h3><p>Here’s the short version:</p><ul><li><strong>Purpose-built workflows:</strong> SharpAPI takes LLM capabilities and refines them for specific tasks.</li><li><strong>Faster integration:</strong> No need for extensive fine-tuning or setup-just plug and play.</li><li><strong>Compliance out of the box:</strong> Handle sensitive data confidently with pre-built security and regulatory safeguards.</li><li><strong>Developer-friendly tools:</strong> Easy APIs, comprehensive SDKs, and helpful 
documentation.</li><li><strong>Smarter over time:</strong> Workflows improve automatically as models are updated.</li><li><strong>Focus on scalability:</strong> Built to handle projects of any size with minimal effort.</li></ul><h3>SharpAPI: Automating Workflows Without the Hassle</h3><p>Here’s the bottom line: integrating an LLM API directly can feel like navigating uncharted waters without a map-possible, but unnecessarily complicated. SharpAPI takes the raw power of AI models and refines it into polished, industry-ready workflows, offering a faster, easier, and more cost-effective solution.</p><p>Instead of spending time building and fine-tuning workflows from scratch, SharpAPI hands you ready-to-use tools tailored to your needs, whether you’re running an e-commerce site, automating HR tasks, or managing global content. It’s the well-charted route, guiding you smoothly to your destination while saving you from the headaches, wasted time, and unnecessary costs of a DIY approach.</p><p>By choosing <a href="https://SharpAPI.com/">SharpAPI</a>, you’re empowering your team to focus on innovation rather than implementation. Let SharpAPI handle the heavy lifting-your workflows, your efficiency, and your sanity will thank you.</p><p><em>Originally published at </em><a href="https://sharpapi.com/en/blog/post/why-developers-should-choose-sharpapi-over-operating-ai-models-manually"><em>https://sharpapi.com</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9fc177df5fca" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Effortless Translations with AI in Laravel Nova]]></title>
            <link>https://medium.com/@makowskid/effortless-translations-with-ai-in-laravel-nova-1c389078665c?source=rss-a0078533f9fe------2</link>
            <guid isPermaLink="false">https://medium.com/p/1c389078665c</guid>
            <category><![CDATA[automation]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[laravel]]></category>
            <category><![CDATA[translation]]></category>
            <dc:creator><![CDATA[Dawid Makowski]]></dc:creator>
            <pubDate>Mon, 11 Nov 2024 08:48:06 GMT</pubDate>
            <atom:updated>2024-11-11T08:48:06.345Z</atom:updated>
            <content:encoded><![CDATA[<p>Nov 11, 2024</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*GVfjJ5crxF6L8sZ8.jpg" /><figcaption>Photo by Cherry Lin on Unsplash</figcaption></figure><p>Alright, imagine this: you’ve got a Laravel Nova dashboard, a list of content fields in multiple languages, and a burning desire to automate translations because, let’s be real, manually doing it is not exactly a great time.</p><p>Enter the <strong>SharpAPI AI Translator for Laravel Nova</strong>. This package seamlessly plugs <a href="https://sharpapi.com/en/catalog/ai/content-marketing-automation/advanced-text-translator">AI-powered translation</a> directly into your Nova dashboard, eliminating repetitive translation tasks and freeing you up to focus on the good stuff.</p><blockquote><em>Want to see all the package details? Head over to GitHub: </em><a href="https://github.com/sharpapi/nova-ai-translator"><em>https://github.com/sharpapi/nova-ai-translator</em></a></blockquote><h3>What Exactly Does This Package Do?</h3><p>In a nutshell, it combines <a href="https://spatie.be/docs/laravel-translatable/">Spatie’s laravel-translatable package</a> with the superpowers of SharpAPI’s AI, transforming those content fields in your app into effortlessly translatable assets. The result? A new action on your Nova dashboard called <strong>🤖 Initiate AI Translation</strong> that takes care of the translation work for you.</p><p>From the Nova resources list or the edit screen, you can queue up translations between any configured languages directly in Nova, with the AI taking over as soon as you hit the button. Need to translate a blog post from English to Spanish? It’s handled.</p><h3>Who’s This For?</h3><p>If you’re a Laravel Nova user managing content in multiple languages, this package is for you. It’s ideal for teams that regularly work with internationalized apps and need content quickly translated without manually flipping through Google Translate. 
Imagine all that time saved when your content auto-magically translates itself right from Nova!</p><h3>Setting Up the SharpAPI AI Translator</h3><h3>Requirements</h3><p>Make sure you’re running:</p><ul><li><strong>Laravel</strong>: 9.0+</li><li><strong>Laravel Nova</strong>: 4.0+</li><li><strong>PHP</strong>: 8.0+</li><li>And have <strong>spatie/laravel-translatable</strong> installed</li></ul><p>You’ll also need an account at <a href="https://sharpapi.com/">SharpAPI.com</a> for API access, but we’ll get to that.</p><h3>Installation</h3><ul><li><strong>Install the Package</strong>:</li></ul><pre>composer require sharpapi/nova-ai-translator</pre><ul><li><strong>Configure API Access</strong>: Add your API key from SharpAPI to your .env:</li></ul><pre>SHARP_API_KEY=your-sharp-api-key</pre><ul><li><strong>Set Up Supported Languages</strong>: Define your locales in config/app.php under the locales key:</li></ul><pre>return [<br>    &#39;locales&#39; =&gt; [<br>        &#39;en&#39; =&gt; &#39;English&#39;,<br>        &#39;es&#39; =&gt; &#39;Spanish&#39;,<br>        &#39;fr&#39; =&gt; &#39;French&#39;,<br>        // Add any other languages your app needs<br>    ],<br>];</pre><ul><li><strong>Add to Your Nova Resource Models</strong>: Your translatable models should use:</li><li>The HasTranslations trait from Spatie.</li><li><strong>[Highly Recommended]</strong> The Actionable and Notifiable traits to track actions.</li></ul><p>Here’s a quick setup for, say, a BlogPost model:</p><pre>namespace App;<br><br>use Illuminate\Database\Eloquent\Model;<br>use Laravel\Nova\Actions\Actionable;<br>use Illuminate\Notifications\Notifiable;<br>use Spatie\Translatable\HasTranslations;<br><br>class BlogPost extends Model<br>{<br>    use Actionable, Notifiable, HasTranslations;<br><br>    public $translatable = [&#39;title&#39;, &#39;subtitle&#39;, &#39;content&#39;];<br>}</pre><ul><li><strong>Integrate the TranslateModel Action</strong>: Hook the TranslateModel action into your Nova resource by adding it to the actions 
array:</li></ul><pre>use Laravel\Nova\Http\Requests\NovaRequest;<br>use SharpAPI\NovaAiTranslator\Actions\TranslateModel;<br><br>public function actions(NovaRequest $request)<br>{<br>    return [<br>        (new TranslateModel())-&gt;enabled(),<br>    ];<br>}</pre><ul><li><strong>Enable Queues</strong>: This action uses a queue to handle translations asynchronously, so make sure your queue is ready to go.</li></ul><h3>Using the TranslateModel Action in Nova</h3><p>Once integrated, the action lives right in your Nova resource. Here’s how it works:</p><ul><li><strong>Kickstart AI Translation</strong>: Open the action either from the resources list or from the edit view of any resource.</li><li><strong>Example: Triggering the Action from the Edit View</strong></li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*YsmyPAsWQWk92eV5.png" /></figure><ul><li><strong>Select Translation Settings</strong>: A form lets you pick the source and target languages and even set the tone. You’ll also see a list of fields that will be translated, so there are no surprises.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*eDjRDCDhJKxpoJRV.png" /></figure><ul><li><strong>Hit Translate and Relax</strong>: Once you confirm, the action checks if the target fields are already populated. If they are, it gently suggests that you clear them before proceeding. Assuming all systems are go, it queues the translation job. You can even keep an eye on it if you’re using the Actionable and Notifiable traits.</li><li><strong>Track Progress and Logs</strong>: Nova’s action log feature helps track the translations. 
This is handy if you need to debug any issues or just like seeing AI in action.</li><li><strong>Example: Translation Log in Action</strong></li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*wStZ9cGTJTw-FyXb.png" /></figure><ul><li><strong>Example: Error Handling (if it goes sideways)</strong></li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*EkJqx4L_XyZSEP-w.png" /></figure><h3>Tips &amp; Tricks</h3><ul><li><strong>Set It and Forget It</strong>: This setup lets you queue translations without worrying about timing or load. It’s especially useful for scaling multilingual apps without scaling translation tasks.</li><li><strong>Translation Strategy</strong>: Fine-tune how often you trigger translations based on the volume and frequency of content updates.</li><li><strong>Localization Needs?</strong>: Since this setup integrates with spatie/laravel-translatable, you get the best of both worlds: structured localization with the muscle of AI translation.</li></ul><p>With the SharpAPI AI Translator for Laravel Nova, your app’s translation game just got a massive upgrade with its new <a href="https://sharpapi.com/en/tag/laravel">Laravel AI</a> capabilities. Give it a spin, and let us know how it works for you!</p><p><em>Originally published at </em><a href="https://sharpapi.com/en/blog/post/effortless-translations-with-ai-in-laravel-nova"><em>https://sharpapi.com</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1c389078665c" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>