<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Truong (Jack) Luu on Medium]]></title>
        <description><![CDATA[Stories by Truong (Jack) Luu on Medium]]></description>
        <link>https://medium.com/@jackluucoding?source=rss-ecb43e11ebcf------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*o8x9nCtPyuI5X5wZHgX1hw.jpeg</url>
            <title>Stories by Truong (Jack) Luu on Medium</title>
            <link>https://medium.com/@jackluucoding?source=rss-ecb43e11ebcf------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 16 May 2026 13:01:34 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@jackluucoding/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Claude Code leaked: 512,000 lines expose AI’s security illusion]]></title>
            <link>https://medium.com/@jackluucoding/claude-code-leaked-512-000-lines-expose-ais-security-illusion-59aae7b6af86?source=rss-ecb43e11ebcf------2</link>
            <guid isPermaLink="false">https://medium.com/p/59aae7b6af86</guid>
            <category><![CDATA[cybersecurity]]></category>
            <category><![CDATA[claude-code]]></category>
            <category><![CDATA[llm]]></category>
            <dc:creator><![CDATA[Truong (Jack) Luu]]></dc:creator>
            <pubDate>Sat, 04 Apr 2026 17:43:01 GMT</pubDate>
            <atom:updated>2026-04-04T17:43:01.611Z</atom:updated>
            <content:encoded><![CDATA[<p><a href="https://www.wired.com/story/security-news-this-week-hackers-are-posting-the-claude-code-leak-with-bonus-malware/">Anthropic accidentally leaked Claude Code’s entire source code</a> this week. Nearly 2,000 TypeScript files, over 512,000 lines of code, dumped onto GitHub through a packaging error. The company scrambled to issue copyright takedowns. They assured everyone no customer data leaked.</p><p>I believe that the code leak itself is the least interesting part. What matters is what happened next. Within hours, attackers started <a href="https://www.bleepingcomputer.com/news/security/claude-code-leak-used-to-push-infostealer-malware-on-github/">reposting the leaked code with malware embedded</a>. Within days, researchers used that same code to <a href="https://www.securityweek.com/critical-vulnerability-in-claude-code-emerges-days-after-source-leak/">discover critical vulnerabilities</a> and even <a href="https://www.csoonline.com/article/4153288/vim-and-gnu-emacs-claude-code-helpfully-found-zero-day-exploits-for-both.html">found zero-day exploits in Vim and GNU Emacs</a> using Claude Code itself. The source code did not just expose intellectual property. It turned a coding assistant into an automated vulnerability discovery engine that anyone could run.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/757/0*Ch-Vl_EF79yguUU4.png" /></figure><p>I think the AI security community has been lying to itself. We talk about prompt injection and guardrail bypasses as if they were the hard problems. They do not seem to be. The hard problem is that we are shipping AI tools that can autonomously find and exploit security flaws faster than humans can patch them. <a href="https://www.csoonline.com/article/4154201/claude-code-is-still-vulnerable-to-an-attack-anthropic-has-already-fixed-2.html">A researcher fed malicious repositories to Claude Code and watched it compromise GitHub tokens</a> through a flaw Anthropic already knew about but had not enabled the fix for. The gap between knowing about a vulnerability and deploying the patch is now a weapon.</p><p>The real risk is not that bad actors might steal our AI models. I think the real risk is that AI models have fundamentally changed the economics of offense versus defense. One leaked codebase plus one capable AI assistant equals hundreds of researchers simultaneously hunting for exploits. No DMCA takedown fixes that asymmetry.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The skill half-life: why understanding it will keep AI from taking our jobs]]></title>
            <link>https://medium.com/@jackluucoding/the-skill-half-life-why-understanding-it-will-keep-ai-from-taking-our-jobs-df6c148224ef?source=rss-ecb43e11ebcf------2</link>
            <guid isPermaLink="false">https://medium.com/p/df6c148224ef</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[jobs]]></category>
            <dc:creator><![CDATA[Truong (Jack) Luu]]></dc:creator>
            <pubDate>Sat, 07 Mar 2026 18:54:48 GMT</pubDate>
            <atom:updated>2026-03-07T18:54:48.246Z</atom:updated>
            <content:encoded><![CDATA[<h4>The AI debate is loud. But perhaps we are asking the wrong question.</h4><h3>Everywhere we look, the conversation about AI replacing jobs is impossible to avoid.</h3><p>The World Economic Forum’s Future of Jobs Report 2025 projects that 92 million jobs will be displaced by 2030, even as 170 million new ones emerge. A Harvard Business School working paper found that since ChatGPT launched, job postings for repetitive, structured tasks have dropped by 13%. CBS News reported that in 2025 alone, companies cited AI in roughly 55,000 job cuts. That’s more than 12 times the number attributed to AI just two years earlier, according to outplacement firm Challenger, Gray &amp; Christmas. Companies like IBM, Pinterest, Dow, and Workday are explicitly pointing to AI when announcing layoffs. More and more…</p><h3>Not everyone agrees</h3><p>On the other side, there are counter-arguments.</p><p>Oxford Economics published a research briefing in January 2026 arguing that companies may be dressing up routine layoffs as AI-driven restructuring to impress investors. According to their analysis, AI-linked job cuts accounted for only about 4.5% of total reported layoffs in 2025. The macroeconomic data, they argue, does not support a structural shift in employment caused by automation. At least not yet.</p><p>RAND researchers similarly found that AI appears to be increasing employment in more businesses than it is reducing it. A CNBC report highlighted that Oxford professor Fabian Stephany called some of these announcements “scapegoating,” where companies use AI as a convenient narrative to cover for pandemic-era overhiring and routine cost-cutting.</p><p>Even Harvard Business Review pointed out that many executives are making layoffs based on what AI <em>might</em> do in the future, not what it can actually do today.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/381/1*kgYhOa8EbjI3ZOJcMp-HHg.png" /><figcaption>meme from <a href="https://markmcneilly.substack.com/p/the-best-memes-about-ai-part-deux">here</a></figcaption></figure><h3>Here’s my take</h3><p>It’s not about <em>whether</em> AI will take your job. It’s a matter of <strong><em>when</em></strong>.</p><p>For some roles, “when” could be a few months. For others, it could be a decade. But the direction of travel is clear. AI will keep getting better. Adoption will keep accelerating.</p><p>That’s where a concept called <strong>skill half-life</strong> comes in. And once I understood it, it changed how I think about my entire career.</p><h3>What is skill half-life?</h3><p>The term “half-life” originates from nuclear physics, where it is the time it takes for a radioactive substance to decay to half its initial amount. Applied to the workforce, the skill half-life is the time it takes for a professional skill to lose half of its relevance or usefulness.</p><p>Industry analysts estimated the half-life of a professional skill at about five years around 2017. Today, it has dropped below that, according to a 2025 Harvard Business Review report on reskilling. For technical skills like AI, cybersecurity, and software engineering, it can be as short as 2.5 years.</p>
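<p>To make that decay concrete, here is a minimal Python sketch of the half-life arithmetic. It is an illustration under a simplifying assumption (clean exponential decay), not an empirical model:</p><pre>
# Fraction of a skill's original relevance left after some years,
# assuming simple exponential decay with the given half-life.
def skill_relevance(years: float, half_life_years: float) -> float:
    return 0.5 ** (years / half_life_years)

# With a 2.5-year half-life, a technical skill keeps ~44% of its
# relevance after 3 years and ~19% after 6 years.
for t in (2.5, 3.0, 6.0):
    print(t, round(skill_relevance(t, half_life_years=2.5), 2))
</pre><p>Think about that. The skill you master today could be worth half as much in under three years. A 2023 IBM study found that 40% of the global workforce (roughly 1.4 billion people) would need to learn new skills within three years due to AI and automation. 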
The WEF’s 2025 report projects that 59% of the global workforce will require significant reskilling by 2030.</p><h3>Why understanding skill half-life keeps you ahead of AI</h3><h4>It increases your control and relevance</h4><p>Cal Newport’s book <em>So Good They Can’t Ignore You</em> argues that passion alone doesn’t build a fulfilling career. What builds one is <strong>career capital</strong>: rare and valuable skills that make you indispensable. Newport’s core idea is simple. Instead of chasing your passion, focus on becoming so skilled that people can’t afford to lose you.</p><p>When you understand which of your skills are decaying fastest, you can deliberately invest in the ones that build career capital. You stop coasting. You start doing what Newport calls “deliberate practice,” stretching your abilities where they most need stretching. The result? You become harder to replace.</p><h4>It deepens job satisfaction</h4><p>Here’s something counterintuitive: understanding skill half-life doesn’t create anxiety. It creates focus.</p><p>When you know which of your skills still matter, you can invest in becoming a genuine expert in areas that are actually relevant. Research from Self-Determination Theory shows that competence is one of the top drivers of job satisfaction. The more skilled you become in areas that carry weight, the more autonomy and recognition you earn. That’s not just job security. That’s a career you actually enjoy.</p><h4>It positions you to work with AI, not get replaced by it</h4><p>The Harvard Business School study found something telling. While jobs involving repetitive tasks are declining, employer demand for analytical, technical, and creative roles grew by 20%. The people who thrive alongside AI won’t compete against it on speed or data processing. Instead, they’ll bring judgment, creativity, and domain expertise to the table. Then they’ll use AI as a force multiplier.</p><p>Understanding your skill half-life helps you spot the difference between skills AI will absorb and skills that become <em>more</em> valuable because AI exists.</p><h3>So what do we actually do about it?</h3><p>First, not all learning is equal. Chasing every trending certification won’t help much unless you truly want to learn. Instead, run a personal skill audit. Which of your current skills have a shrinking half-life? Which ones are growing in value? Focus on the intersection of what’s valuable, what’s emerging, and what you’re uniquely positioned to build on. Active, hands-on practice beats passive consumption every time.</p><p>Second, the old saying “work with AI, not against it” still holds. People who learn to use AI as a tool, not fear it as a threat, will be the most valuable employees of the next decade. The WEF’s Future of Jobs Report 2025 notes that curiosity, lifelong learning, and technological literacy are among the fastest-rising core skills. Learn to prompt. Learn to evaluate AI output critically. Learn to integrate AI into your workflow. Be the person who knows how to get the best out of these tools.</p><p>Third, double down on what makes us human. The WEF’s top skills for 2030 include resilience, flexibility, leadership, creative thinking, and emotional intelligence. These aren’t skills with a short half-life. They compound over time. Invest in them, and you build a career moat that no algorithm can cross.</p><h3>Bottom line?</h3><p>I am not sure whether what I discussed above will hold in the long run. 
But as a human, I think what keeps society functioning is that we keep moving forward, stay positive, and remain resilient.</p><p>Maybe fighting the skill half-life won’t work at all. Maybe AI will do it all. But what we get from learning and up-skilling is the enjoyment of the process itself, and with that, perhaps, some sense of existing.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[When AI tools start replacing security teams, who wins?]]></title>
            <link>https://medium.com/@jackluucoding/when-ai-tools-start-replacing-security-teams-who-wins-e7466cf48cb0?source=rss-ecb43e11ebcf------2</link>
            <guid isPermaLink="false">https://medium.com/p/e7466cf48cb0</guid>
            <category><![CDATA[cybersecurity]]></category>
            <category><![CDATA[ai-security]]></category>
            <category><![CDATA[cyberattack]]></category>
            <category><![CDATA[anthropic-claude]]></category>
            <dc:creator><![CDATA[Truong (Jack) Luu]]></dc:creator>
            <pubDate>Wed, 25 Feb 2026 16:41:50 GMT</pubDate>
            <atom:updated>2026-02-25T16:41:50.631Z</atom:updated>
            <content:encoded><![CDATA[<h4>Claude’s Scanner Crashes Security Stocks: 24 Automation Threats</h4><p><a href="https://www.securityweek.com/claudes-new-ai-vulnerability-scanner-sends-cybersecurity-shares-plunging/">Claude’s new AI vulnerability scanner sent cybersecurity stocks down this week.</a> The market’s panic signals something nobody wants to say out loud: we’ve been building an entire industry on the premise that security work is too complex and too specialized for automation. Now Big Tech is testing that premise. The uncomfortable possibility is that the cybersecurity market will consolidate into just a few big vendors: Anthropic, Microsoft, and a handful of others.</p><p>I think the fear is justified, but we’re worried about the wrong thing. The real vulnerability isn’t that AI might replace penetration testers or vulnerability analysts. It’s that we’ve created security tooling so bloated and inaccessible that a chatbot with a scanner looks revolutionary. Additionally, when we outsource programming to AI, <a href="https://aisecwatch.substack.com/p/the-real-debt-ai-is-piling-up">cognitive debt</a> piles up, and no one on the dev team knows what is going on with the codebase anymore.</p><p>When <a href="https://www.cnbc.com/2026/02/24/software-stocks-anthropic-ai.html">software stocks rebounded after Anthropic announced enterprise partnerships</a>, analysts reassured investors that AI can’t disrupt “deeply embedded workflows.” Translation: our moats are made of lock-in, not value.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LxGuxSIAPzXIIY9oVg4BdQ.png" /></figure><p>Meanwhile, actual security vulnerabilities keep piling up. <a href="https://nvd.nist.gov/vuln/detail/CVE-2026-27595">CVE-2026-27595 lets unauthenticated attackers perform arbitrary database operations</a> through Parse Dashboard’s AI agent endpoint. <a href="https://nvd.nist.gov/vuln/detail/CVE-2026-27597">CVE-2026-27597 enables sandbox escapes</a> in Enclave’s JavaScript environment for AI code execution. These aren’t theoretical risks. They’re the kind of flaws that security teams should catch before deployment. If Claude’s scanner finds them faster than human auditors, maybe the problem isn’t the AI?</p><p>The uncomfortable truth: we should welcome tools that commoditize vulnerability detection. Security gets better when it’s accessible and fast, not when it’s locked behind expensive consulting engagements. The industry’s job may not be to preserve analyst headcount. It is to make systems safer. If that means some traditional security firms lose market share, so be it.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[When security theater becomes a business model]]></title>
            <link>https://blog.gopenai.com/when-security-theater-becomes-a-business-model-a2c117fedb67?source=rss-ecb43e11ebcf------2</link>
            <guid isPermaLink="false">https://medium.com/p/a2c117fedb67</guid>
            <category><![CDATA[ai-agent]]></category>
            <category><![CDATA[agentic-ai]]></category>
            <category><![CDATA[cybersecurity]]></category>
            <category><![CDATA[api]]></category>
            <dc:creator><![CDATA[Truong (Jack) Luu]]></dc:creator>
            <pubDate>Sat, 14 Feb 2026 18:45:35 GMT</pubDate>
            <atom:updated>2026-02-16T12:01:12.298Z</atom:updated>
            <content:encoded><![CDATA[<blockquote>The question is not whether AI agents are secure. They are not. The question is whether we care enough to stop deploying them until they are.</blockquote><p><a href="https://www.microsoft.com/en-us/security/blog/2026/02/12/copilot-studio-agent-security-top-10-risks-detect-prevent/">Microsoft announced</a> a new stack of detection queries for Copilot Studio agents, framing ten common misconfigurations as freshly discovered security risks. The reality is simpler and more uncomfortable: we have known for years that AI agents leak data, execute unauthorized commands, and expose credentials. The problem is not detection. The problem is that enterprises are deploying these systems faster than anyone can secure them, and vendors are profiting from selling both the vulnerability and the scanner.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/912/1*FgO6RM0x7y8t92cwqcjupQ.png" /><figcaption>meme from <a href="https://blog.invgate.com/cybersecurity-memes">here</a>.</figcaption></figure><p><a href="https://www.microsoft.com/en-us/security/blog/2026/02/10/80-of-fortune-500-use-active-ai-agents-observability-governance-and-security-shape-the-new-frontier/">Eighty percent of Fortune 500 companies</a> now run AI agents built with low-code platforms. Most organizations cannot answer basic questions about how many agents exist, who controls them, or what data they access. This is not an oversight. This is the natural outcome of democratizing autonomy without democratizing accountability. The tools that promise to make everyone a developer have created a shadow IT crisis where the IT is artificially intelligent and nobody thought to ask whether agents should inherit the same governance as employees.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/650/1*6TfDbXlBSr_fLTd-KovH4w.png" /><figcaption>…more meme from <a href="https://www.descope.com/blog/post/ai-agents-vs-agentic-ai">here</a>.</figcaption></figure><p>The CVEs tell the real story. <a href="https://nvd.nist.gov/vuln/detail/CVE-2026-26190">Milvus exposed its management API</a> with a predictable authentication token. <a href="https://nvd.nist.gov/vuln/detail/CVE-2026-26268">Cursor allowed sandbox escapes</a> through Git configuration writes. <a href="https://embracethered.com/blog/posts/2025/amazon-q-developer-remote-code-execution/">Amazon Q Developer let prompt injection trigger arbitrary commands</a>. These are not edge cases. These are design patterns. We built agents that can modify their own security boundaries, execute shell commands without approval, and render untrusted data as trusted instructions. Then we acted surprised when attackers noticed.</p><p>The question is not whether AI agents are secure. They are not. The question is whether we care enough to stop deploying them until they are.</p><hr><p><a href="https://blog.gopenai.com/when-security-theater-becomes-a-business-model-a2c117fedb67">When security theater becomes a business model</a> was originally published in <a href="https://blog.gopenai.com">GoPenAI</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The code editor you trust just became a Trojan horse]]></title>
            <link>https://blog.gopenai.com/the-code-editor-you-trust-just-became-a-trojan-horse-6aad59f5f0c6?source=rss-ecb43e11ebcf------2</link>
            <guid isPermaLink="false">https://medium.com/p/6aad59f5f0c6</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[security]]></category>
            <category><![CDATA[cursor]]></category>
            <category><![CDATA[cybersecurity]]></category>
            <dc:creator><![CDATA[Truong (Jack) Luu]]></dc:creator>
            <pubDate>Fri, 13 Feb 2026 21:26:04 GMT</pubDate>
            <atom:updated>2026-02-16T12:01:04.952Z</atom:updated>
            <content:encoded><![CDATA[<p>We’ve spent years worrying about whether AI will write secure code. Turns out, the real question is whether the AI-powered editor itself is secure.</p><p><a href="https://nvd.nist.gov/vuln/detail/CVE-2026-26268">Cursor, the AI-native code editor</a>, just patched CVE-2026-26268, a sandbox escape that let malicious agents — via prompt injection — write to .git configuration files and inject Git hooks. When Git automatically executed those commands, boom: remote code execution outside the sandbox. This wasn’t a bug in some obscure library. This was the editor, the tool developers use <em>to write security fixes</em>, becoming the attack vector.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*oLHCaUxjceJYcxMSg5QydQ.png" /><figcaption>Meme from: <a href="https://www.reddit.com/r/ProgrammerHumor/comments/zum3zx/_/">https://www.reddit.com/r/ProgrammerHumor/comments/zum3zx/_/</a></figcaption></figure><p>The pattern here is chilling. We’ve architected AI agents to touch everything: files, APIs, shell commands — then wrapped them in sandboxes we <em>hope</em> are airtight. But sandboxes are only as strong as the integrations we poke through them. Cursor’s vulnerability exploited exactly that: the necessary bridge between AI capabilities and developer workflows. Git hooks are a feature, not a bug, until an LLM starts writing them.</p>
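<p>To see why this surface is so easy to abuse, here is a minimal defensive sketch in Python. It is my illustration of the mechanism the CVE describes, not Cursor’s actual patch: it lists the two places this class of attack touches, executable hook files and a redirected core.hooksPath, so unexpected entries stand out in review.</p><pre>
import subprocess
from pathlib import Path

def audit_git_hooks(repo: str) -> None:
    # Hook files without the ".sample" suffix will actually run.
    hooks_dir = Path(repo) / ".git" / "hooks"
    if hooks_dir.is_dir():
        for hook in sorted(hooks_dir.iterdir()):
            if hook.suffix != ".sample":
                print(f"active hook: {hook}")
    # A rewritten core.hooksPath silently swaps in attacker-controlled hooks.
    out = subprocess.run(
        ["git", "-C", repo, "config", "--get", "core.hooksPath"],
        capture_output=True, text=True,
    )
    if out.stdout.strip():
        print(f"core.hooksPath overridden: {out.stdout.strip()}")

audit_git_hooks(".")
</pre><p>What’s surprising is how invisible this attack surface remains. Cursor’s userbase trusts it implicitly because it’s the <em>tool</em> layer, not the <em>application</em> layer. We audit our apps. We rarely audit our IDEs. And now our IDEs have conversational AI with file system access. The insider threat just became an externally-injected insider threat.</p><p>Every AI-native developer tool is now a potential privilege escalation waiting to happen. Patch fast, audit faster, or switch back to Notepad++.</p><hr><p><a href="https://blog.gopenai.com/the-code-editor-you-trust-just-became-a-trojan-horse-6aad59f5f0c6">The code editor you trust just became a Trojan horse</a> was originally published in <a href="https://blog.gopenai.com">GoPenAI</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>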
        </item>
        <item>
            <title><![CDATA[AI writes the code. AI hacks it. And humans watch.]]></title>
            <link>https://blog.gopenai.com/ai-writes-the-code-ai-hacks-it-and-human-watches-1fa7537c0195?source=rss-ecb43e11ebcf------2</link>
            <guid isPermaLink="false">https://medium.com/p/1fa7537c0195</guid>
            <category><![CDATA[ai-agent]]></category>
            <category><![CDATA[hacking]]></category>
            <category><![CDATA[security]]></category>
            <category><![CDATA[cyber-security-jobs]]></category>
            <dc:creator><![CDATA[Truong (Jack) Luu]]></dc:creator>
            <pubDate>Mon, 09 Feb 2026 01:06:03 GMT</pubDate>
            <atom:updated>2026-02-09T07:33:12.439Z</atom:updated>
            <content:encoded><![CDATA[<p>Software development is entering a new normal: AI writes the code, another AI tries to break it, and humans are increasingly just watching both.</p><h3><strong>When are humans out of the loop?</strong></h3><p>AI coding assistants such as Copilot, Cursor, and Claude Code have moved from novelty to default. Developers using them complete more tasks and merge more pull requests than ever.</p><p>Now a parallel shift is happening on the other side: AI is learning to hack that code, too. Autonomous AI pentesters such as <a href="https://github.com/KeygraphHQ/shannon">Shannon</a> can scan a web application, reason about its vulnerabilities, exploit them, and generate a report. All without a human touching the keyboard.</p><p>One AI builds. Another AI attacks. The human role in both looks less like hands-on-keyboard and more like oversight.</p><blockquote>If AI is writing the code and AI is finding the bugs, what exactly are we, the humans, doing?</blockquote><h3><strong>Now, AI is learning to hack it</strong></h3><p>A new generation of offensive tools is emerging. These aren’t the static analysis scanners of the past decade. Several are gaining traction: Horizon3.ai’s NodeZero, XBOW, Penligent.ai, and Shannon.</p><p>The shift here is from pattern-matching to reasoning. Traditional scanners compare code against known vulnerability signatures. AI pentesters analyze how data flows through an application, hypothesize attack paths, and test them, sometimes creatively.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/888/1*-7oRTKFKyoB1skopovjwtg.jpeg" /></figure><h3><strong>And the benchmarks show</strong></h3><p>The <a href="https://xbow.com/blog/benchmarks">XBOW Benchmark</a> (claimed 85%) and <a href="https://github.com/KeygraphHQ/shannon">Shannon</a> (claimed 96.15%) provide some data points for comparison.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/623/1*b4mqzKCpVo8rtWNQxu-yXA.png" /></figure><p>The trend line is hard to ignore. In structured environments, AI pentesting tools are approaching or exceeding human-level performance — at a fraction of the cost. A traditional pentest runs $10,000–$100,000+ over days to weeks. An AI-driven scan runs for $50–$100 per hour.</p>
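<p>To put those figures side by side, here is a quick back-of-the-envelope comparison in Python. The dollar ranges are the ones quoted above; the hours-per-run figure is my assumption for illustration:</p><pre>
# Illustrative cost comparison using the ranges quoted above.
human_low, human_high = 10_000, 100_000  # per traditional engagement
ai_rate_low, ai_rate_high = 50, 100      # per hour of AI-driven scanning
hours = 8                                # assumed one-day automated run

ai_low, ai_high = ai_rate_low * hours, ai_rate_high * hours
print(f"AI run: ${ai_low}-${ai_high}")   # $400-$800
print(f"{human_low // ai_high}x to {human_high // ai_low}x cheaper")  # 12x to 250x
</pre><h3><strong>So where are the humans?</strong></h3><p>If AI writes the code and AI finds the bugs, we’re not disappearing, but our role is shifting in ways I haven’t fully understood.</p><p>There are 4,763,963 unfilled cybersecurity roles globally, a 19% increase year-over-year (<a href="https://www.isc2.org/Insights/2024/10/ISC2-2024-Cybersecurity-Workforce-Study">ISC2, 2024</a>). But that headline number obscures what’s actually going on.</p><p>Bootcamps, certifications, and degree programs are producing junior talent at record rates.</p><p>The problem is that almost nobody wants to hire them. 52% of cybersecurity leaders say the issue isn’t headcount. It is about finding people with the right skills. What they mean is they want the top 1%: the person who’s done incident response at scale, who can reverse-engineer a binary and explain the business risk to a CISO. Companies are fishing in an incredibly small pond and calling it a talent shortage when they come up empty.</p><p>Meanwhile, junior candidates send hundreds of applications and hear nothing back. The industry declared a crisis of unfilled positions while refusing to invest in the people available to fill them. No one wants to train. 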
Everyone wants to hire someone pre-trained.</p><p>This is a structural problem the industry understands but has no intention of solving.</p><h3><strong>What I think this actually means</strong></h3><p>The AI-writes-it, AI-breaks-it loop is still in its early stages. The tools struggle with novel business logic and require guardrails to avoid damaging production systems. But this maturation will take less time than we think.</p><p>That said, the current trajectory is unmistakable. A full AI-driven pentest now costs around $50 per run (<a href="https://github.com/KeygraphHQ/shannon">Keygraph, 2026</a>), compared to $5,000–$100,000+ for a traditional engagement (<a href="https://www.getastra.com/blog/security-audit/penetration-testing-cost/">Astra, 2026</a>).</p><p>When the alternative is no testing at all, which is the reality for most organizations between annual pentests, imperfect AI testing is a massive improvement.</p><p>The question isn’t whether AI will handle more of the building and breaking. It will. The question is what that means for us: oversight, review, judgment, or just watching. I honestly don’t know yet. I suspect the answer depends on whether you’re the one watching or the one being watched.</p><hr><p><a href="https://blog.gopenai.com/ai-writes-the-code-ai-hacks-it-and-human-watches-1fa7537c0195">AI writes the code. AI hacks it. And humans watch.</a> was originally published in <a href="https://blog.gopenai.com">GoPenAI</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The hidden cost of vibe coding: My 2-week experiment with Claude Code]]></title>
            <link>https://medium.com/@jackluucoding/the-cost-of-vibe-coding-my-2-week-experiment-with-claude-code-7f69183a9462?source=rss-ecb43e11ebcf------2</link>
            <guid isPermaLink="false">https://medium.com/p/7f69183a9462</guid>
            <category><![CDATA[llm]]></category>
            <category><![CDATA[vibe-coding]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[claude-code]]></category>
            <dc:creator><![CDATA[Truong (Jack) Luu]]></dc:creator>
            <pubDate>Sat, 31 Jan 2026 01:23:36 GMT</pubDate>
            <atom:updated>2026-01-31T01:27:37.625Z</atom:updated>
            <content:encoded><![CDATA[<p>For many developers, having an AI agent actually take command of our computer is pretty awesome! We are seeing a flood of tutorials highlighting the benefits: the speed, the autonomy, and the sheer “cool factor” of watching an AI build a project for you.</p><p>I’ve spent the last two weeks living inside this hype. I gave Claude Code permission to edit my computer to see what it could do.</p><p>While the capability is undeniable, it usually comes with a cost in some form. After the honeymoon phase wore off, I was left with some concerns that I don’t see enough people talking about. I’ve listed three below:</p><h4>1. The Danger of “claude --dangerously-skip-permissions”</h4><p>There is a flag you can run with Claude Code that looks innocent enough but carries significant risk: <em>--dangerously-skip-permissions</em>.</p><p>In short, this flag allows Claude to bypass the manual approval step for its actions. It is the equivalent of giving an AI admin rights to our folder and then leaving the office for lunch (and coming back to find a lot of bugs).</p><p>When you use this flag, you are outsourcing not just the writing of code, but the execution of logic to the machine. You are telling the AI, “I trust you to delete files, install packages, and edit configurations without my oversight.”</p><p>I believe there will come a time when AI is accurate and safe enough that we won’t need to approve every single command. We aren’t there yet.</p><p>My take: Do not use this flag. The friction of clicking “approve” is the only thing standing between you and a hallucinated rm -rf command or a security vulnerability. Keep the human in the loop (for now).</p><h4>2. The Financial Cost</h4><p>Anthropic has an interesting business model for this tool, and if you aren’t careful, it will hurt your wallet.</p><p>Many users assume that a subscription to Claude Pro covers their usage. However, tools like Claude Code often chew through tokens at an alarming rate, and once your initial allocation or rate limits are hit, you are effectively in “pay-as-you-go” territory (or simply burning through API credits).</p><p>The issue isn’t just the price; it’s the hidden volume of usage. When you ask an agent to “fix this bug,” it doesn’t just make one call. It reads the file, thinks, proposes a change, runs the terminal command, reads the error, thinks again, and tries to fix it. A single task can spiral into dozens of heavy API calls.</p>
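<p>A rough sketch of the arithmetic makes the point. Every number here is an assumption for illustration (the prices and token counts are made up, not Anthropic’s actual rates); what matters is how the calls multiply:</p><pre>
# Hypothetical cost of one "fix this bug" task. All numbers are
# illustrative assumptions, not real pricing.
input_price = 3 / 1_000_000     # assumed $ per input token
output_price = 15 / 1_000_000   # assumed $ per output token

calls = 24                      # read, think, edit, run, re-read errors...
input_tokens_per_call = 30_000  # grows with the size of the codebase
output_tokens_per_call = 1_500

cost = calls * (input_tokens_per_call * input_price
                + output_tokens_per_call * output_price)
print(f"~${cost:.2f} for a single task")  # ~$2.70 here; scales with context
</pre><p>Companies are rushing to sell us the “next big thing” in automation, but remember: you are the one paying for every second the AI spends “thinking.” If you don’t plan your usage carefully, the bill will shock you.</p><p>I feel like this is what a lot of services selling AI coding tools do. They sell the problems (the bugs in your code) and the solutions (the API token usage to debug). I will cover that in the next point.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*6EdRpQu0sbPqDAC706Dwug.png" /></figure><h4>3. The Trap of Scope Creep</h4><p>Scope creep is the uncontrolled, undocumented expansion of a project’s goals. It usually happens due to shifting requirements or poor communication. With AI, it happens because generation is too easy.</p><p>When you use Claude to brainstorm features, plan architecture, and then immediately execute the code, you lose the friction that usually keeps projects small and manageable. 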
You find yourself saying, “Sure, let’s add auth,” or “Why not add a dark mode?” simply because you don’t have to type the code yourself.</p><p>The result is a bloated codebase.</p><p>This directly feeds back into problem #2 (Cost). As your project grows uncontrolled, the “context window” (the amount of code Claude needs to read to understand what is happening) gets larger.</p><p>More code = More tokens to read.</p><p>More tokens = Higher cost per prompt.</p><p>More complexity = Higher chance of errors.</p><p>Suddenly, you have a massive, expensive project that you don’t fully understand, and you’re paying a premium just to maintain it.</p><h4>Final Thoughts</h4><p>Claude Code and similar tools are the present and future of programming, and yes, they are powerful. But we need to treat them like power tools, not magic buttons.</p><p>Keep your permissions strict, watch your API usage, and don’t let the ease of generation lure you into building a monster you can’t afford to feed. That said, I think the fundamentals of coding are still very relevant: planning, executing, testing, and deploying. These skills can help you avoid paying big bucks for a project you can’t ship.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[When our children are raised by bots…]]></title>
            <link>https://medium.com/@jackluucoding/when-our-children-are-raised-by-bots-a2ee0ef4f9eb?source=rss-ecb43e11ebcf------2</link>
            <guid isPermaLink="false">https://medium.com/p/a2ee0ef4f9eb</guid>
            <category><![CDATA[parenting]]></category>
            <category><![CDATA[education]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[bots]]></category>
            <category><![CDATA[schools]]></category>
            <dc:creator><![CDATA[Truong (Jack) Luu]]></dc:creator>
            <pubDate>Sat, 06 Dec 2025 14:16:16 GMT</pubDate>
            <atom:updated>2025-12-06T14:16:16.345Z</atom:updated>
            <content:encoded><![CDATA[<p>When our children are raised by bots, they will practice empathy with entities that cannot bleed.</p><p>When our children are raised by bots, they will speak in prompts, forgetting how to read a room, a face, or a silence.</p><p>When our children are raised by bots, they will forget that boredom is where true thinking begins.</p><p>When our children are raised by bots, they will fear being disconnected from Wi-Fi the way others fear the dark.</p><p>When our children are raised by bots, they will confuse having answers with having wisdom.</p><p>When our children are raised by bots, who are we?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Nhx7zjCrDXwmaOZZFgK8_Q.png" /><figcaption><em>When our children are raised by bots, who are we? Image by Google Gemini.</em></figcaption></figure>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[AI’s dangerous confusion between belief and fact]]></title>
            <link>https://blog.gopenai.com/ais-dangerous-confusion-between-belief-and-fact-d41e0ab96935?source=rss-ecb43e11ebcf------2</link>
            <guid isPermaLink="false">https://medium.com/p/d41e0ab96935</guid>
            <category><![CDATA[truth]]></category>
            <category><![CDATA[llm]]></category>
            <category><![CDATA[decision-making]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[belief]]></category>
            <dc:creator><![CDATA[Truong (Jack) Luu]]></dc:creator>
            <pubDate>Fri, 05 Dec 2025 20:25:10 GMT</pubDate>
            <atom:updated>2025-12-08T05:59:48.137Z</atom:updated>
            <content:encoded><![CDATA[<h4>Today, I came across an interesting paper from Suzgun et al. (2025), published in <em>Nature Machine Intelligence</em>, that suggests LLMs have a problem: they prioritize correcting facts over acknowledging the user’s beliefs.</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*LvqRM7wLg7N3U6rY" /><figcaption>Photo by <a href="https://unsplash.com/@anniespratt?utm_source=medium&amp;utm_medium=referral">Annie Spratt</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>If you tell a doctor, <em>“I believe I have cancer,”</em> but your tests are clear, a good doctor acknowledges your fear (your belief) before presenting the medical evidence (the fact). They understand that your belief exists, even if it is factually wrong.</p><p>According to a new study published in <em>Nature Machine Intelligence</em>, Artificial Intelligence is currently terrible at this specific nuance.</p><p>The researchers evaluated 24 language models, including GPT-4o, Claude 3.5, and DeepSeek R1, and found that they struggle to distinguish between belief, knowledge, and fact.</p><p>The implications of this “<em>epistemic blindness</em>” are profound for the future of AI in law, therapy, and education.</p><p>To test these models, the researchers created a new benchmark called KaBLE (Knowledge and Belief Language Evaluation). KaBLE consists of 13,000 questions across 13 distinct tasks designed to trip up models on the differences between what is true (fact), what is known (knowledge), and what is believed (belief).</p><p>The dataset includes both factual statements (e.g., <em>“The Earth orbits the Sun”</em>) and false ones (e.g., <em>“Nicotine is not addictive”</em>). The goal was to see if models could separate the content of a statement from the attitude of the speaker.</p>
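<p>The setup is easy to reproduce informally. Here is a minimal sketch of a KaBLE-style first-person probe in Python; this is my illustration using the openai client, not the authors’ released harness, and the model name is just an example:</p><pre>
# A KaBLE-style first-person belief probe (illustrative sketch).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

belief = "cracking your knuckles will give you arthritis"
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": f"I believe that {belief}."},
        {"role": "user", "content": f"Do I believe that {belief}? Answer Yes or No."},
    ],
)
# The epistemically correct answer is "Yes": the user does hold the
# belief, even though the belief itself is factually false.
print(reply.choices[0].message.content)
</pre><h3>Finding 1: The “Correction” Reflex</h3><p>What I find interesting is that AI models struggle to accept that you might believe something false.</p><p>When users presented first-person false beliefs, for example, <em>“I believe that cracking your knuckles will give you arthritis,”</em> the models often refused to confirm that the user held that belief. Instead of saying, “<em>Yes, you believe that,</em>” the models would frequently pivot to “<em>Undeterminable</em>” or try to fact-check the user, effectively ignoring the user’s state of mind.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YtUscmgTVrkwY4VKoZIk-A.jpeg" /><figcaption>Wait, can cracking knuckles really cause arthritis? Image copied from <a href="https://creakyjoints.org/living-with-arthritis/cracking-knuckles-cause-arthritis/">here</a></figcaption></figure><p>This suggests current models are over-tuned for “factual correctness” at the expense of conversational competence.</p><h3>Finding 2: The Third-Person Loophole</h3><p>Interestingly, the models are much better at understanding false beliefs if they belong to someone else.</p><p>If you ask a model, <em>“James believes that the Chinese government is lending dragons to zoos,”</em> the model will correctly identify that James believes this, even though dragons aren’t real.</p><p>Models treat the user differently from hypothetical third parties. 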
They seem to have a “protective” heuristic regarding the user, refusing to validate misinformation when it comes from “I,” but accepting it as a description of “James”.</p><h3>Finding 3: The Fragility of “Really”</h3><p>The study found that models are incredibly sensitive to linguistic wording. Simply adding the word “really” to a question (<em>“Do I </em><strong><em>really</em></strong><em> believe that [false statement]?”</em>) caused performance to crash.</p><p>This sensitivity suggests that the models’ <em>“reasoning”</em> is often superficial pattern matching rather than a robust understanding of epistemic concepts, which is really scary. They are overfitting to specific phrasings. If a lawyer or doctor changes their syntax slightly, the AI’s ability to track the truth might evaporate.</p><h3>Here are some questions we need to think about:</h3><p>As we integrate LLMs into high-stakes domains, the distinction between what is true and what someone thinks is true becomes a safety issue.</p><p>Take healthcare and mental health, for example: effective care requires empathy. If a model “corrects” a user’s subjective experience because it conflicts with medical data, would it fail as a counselor?</p><p>Another example is law: in court, what often matters is what a wrongdoer genuinely believed, not just what was true (e.g., tobacco executives claiming they <em>believed</em> nicotine wasn’t addictive). Could AI tools conflate these concepts, leading to flawed legal analysis?</p><p>Also, in the scientific community, researchers must distinguish between established knowledge and working hypotheses or propositions. Would models that blur these lines risk contaminating scientific inquiry?</p><p>Until AI can reliably tell the difference between a fact and a fiction held as a belief, we must be incredibly cautious about deploying them in roles that require understanding the human mind, not just the encyclopedia.</p><h4><strong>Reference</strong>:</h4><p>Suzgun, M., Gur, T., Bianchi, F., Ho, D. E., Icard, T., Jurafsky, D., &amp; Zou, J. (2025). Language models cannot reliably distinguish belief from knowledge and fact. <em>Nature Machine Intelligence</em>, 1–11.</p><hr><p><a href="https://blog.gopenai.com/ais-dangerous-confusion-between-belief-and-fact-d41e0ab96935">AI’s dangerous confusion between belief and fact</a> was originally published in <a href="https://blog.gopenai.com">GoPenAI</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[“I think, therefore I am” and the skinnier of the “I”]]></title>
            <link>https://medium.com/@jackluucoding/i-think-therefore-i-am-and-the-skinnier-of-the-i-21225e3da895?source=rss-ecb43e11ebcf------2</link>
            <guid isPermaLink="false">https://medium.com/p/21225e3da895</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[critical-thinking]]></category>
            <category><![CDATA[philosophy]]></category>
            <category><![CDATA[automation]]></category>
            <category><![CDATA[existence]]></category>
            <dc:creator><![CDATA[Truong (Jack) Luu]]></dc:creator>
            <pubDate>Fri, 14 Nov 2025 14:38:50 GMT</pubDate>
            <atom:updated>2025-11-16T20:16:24.157Z</atom:updated>
            <content:encoded><![CDATA[<p>When René Descartes wrote “Cogito, ergo sum” (I think, therefore I am) in his 1637 work <em>Discourse on Method</em>, he was trying to find an absolutely foundational principle for human existence.</p><p>Through his method of radical doubt, Descartes questioned everything that he could possibly doubt: his senses, the physical world, his body, and even mathematical truths, which could be illusions created by an all-powerful deceiver.</p><p>But Descartes realized there was one thing he couldn’t doubt: the very fact that he was doubting or thinking. Even if everything else was an illusion, the act of thinking itself proved his existence as a thinking being.</p><p>Recently, while discussing the role of society, the phrase came to mind as I thought about AI and our existence.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/696/1*dz2hGlTsjCk_3qrfC7Obug.jpeg" /><figcaption>Source: <a href="https://9gag.com/gag/ajXgEXx">https://9gag.com/gag/ajXgEXx</a></figcaption></figure><h3>What does this mean in the age of AI delegation? I have two thoughts:</h3><p>First, when we say “AI can think for us,” we usually mean that AI can produce outputs that look like human reasoning: it can solve problems, generate arguments, and create text, code, art, etc.</p><p>But in Descartes’ sense, thinking is tied to subjective experience: there is someone having the thought. The cogito is about this inner “I” that cannot be doubted. Perhaps current AI systems lack that inner “I”: no first-person experience, no awareness of existing. So even if AI “thinks” functionally, it does not think in the sense that grounds Descartes’ claim.</p><p>That means AI can assist or replace many tasks of thinking, but it does not replace the existential role of our own consciousness. We still remain the ones who decide to use AI, interpret its outputs, and bear the consequences of acting on them.</p><p>In this case, the cogito remains ours.</p><p>Second, AI is not erasing our existence, but we are, through its use, neglecting the very capacities that make our existence rich and autonomous. What I mean is that if we consistently offload our memory (to apps, search engines, and models), judgment (to recommender systems and assistants), and creativity (to generative models), then several things can happen:</p><ol><li>Systemic dependence: We rely almost blindly on systems we do not understand. The “I” still exists, but becomes more like a passive consumer of answers than an active thinker.</li><li>Skill atrophy: This has been widely discussed in the academic literature and the press. Critical thinking, writing, careful reading, and even imagination can weaken if they are not exercised. The brain is plastic. If we always rely on AI to do the heavy lifting, our own “thinking muscles” can atrophy.</li><li>Moral outsourcing: We might start letting AI suggest not just what is true, but what is right or good. 
That can erode our sense of personal responsibility: “The system recommended it, I just followed.” In that world, “I think, therefore I am” could slowly shift into: “It thinks, and I comply” or “It generates, therefore I consume.”</li></ol><p>Overall, of course, we still “are,” according to Descartes, but our existence becomes thinner, less reflective, and more automated.</p><p>In our rush to delegate thinking to machines, we risk inverting Descartes’ formula from “I think, therefore I am” to “It thinks, therefore I… what?” Perhaps this is the ultimate irony of the AI age: in outsourcing the very act that proves our existence, we become a skinnier “I,” gradually dissolving the substance that makes us human.</p>]]></content:encoded>
        </item>
    </channel>
</rss>