
The Man With No Brains

Published: Feb 5, 2026 · 900 words · 4 minutes reading time

I actually enjoy setting up a new coding project from scratch - those first couple of hours where you go from nothing to something, even if it’s just poorly laid out HTML and CSS, or a bunch of console.log statements.

Recently though, I’ve found myself unable to approach any new task without AI’s help.

Like an addiction, it became a trap. I couldn’t break free from the cycle of constant stimulation and instant gratification these tools provided. Instead of being a useful helper, AI became the process itself - I stopped starting with code and using AI to work through ideas, and instead started with AI and used it to generate the implementation.

How it started

When they first became available, AI coding agents felt novel and exciting - like a new tool in my toolbox that helped me realise ideas in completely new domains or languages, at a speed I wasn’t capable of before.

One example is Teskooano - my personal project to build a 3D space engine, something I’d wanted to do for years but never found time to realise. I discovered I could go from idea to working prototype in hours, achieving things that seemed beyond my reach. It became “just one more turn” - I’d see opportunities to add features, and instead of waiting for my slow meatbrain to implement them, they’d be done in minutes.

But now it’s a complex mess. Even today, I still haven’t actually learned how to write WebGL shaders, or come to understand the inner workings of ThreeJS.

Yes, AI helped me build the project but it robbed me of the joy of learning, and the satisfaction of finally understanding how things work. In less than two years, I’ve become more dependent on AI than I could have imagined.

How it’s going

I now struggle to connect dots that previously weren’t a problem in my day-to-day work. No one forced me to do this - I have no top-down executive orders mandating AI use. This situation is wholly of my own making.

But companies have spoken. Many of us now work in environments where we’re constantly expected to deliver new features while being told there’s no budget to grow teams (in fact, we have to cut engineers). AI creates a mirage of productivity - it feels like it’s helping us ship solutions faster.

Software development used to feel like pottery to me. You take a piece of clay and start shaping it into something - its final form truly unknowable. From that initial spark, the clay takes shape through the connection between human mind and hands, imagination being manifested into physical reality to create something that didn’t exist before. Maybe not novel in the sense that it’s a cup, plate, or vase - but no two pieces are ever the same, because no two people, or pieces of clay, are the same. This is especially true for abstract ideas. We call this art.

In this new age of AI agents, it feels like we’re heading somewhere dark. That spark of creation from the human mind is being replaced with a wholly mechanical process - one where every ‘piece of clay’ is the same, and every AI agent produces the same mechanical movements, with no connection to that mental space where truly novel ideas are born.

Code has moved from art to assembly line, from unique to mass-produced commodity - perhaps ironic considering my employer, whose entire business model is based on volume production.

What can be done?

We now exist in a weird liminal space - where software development is both easier than it’s ever been, and yet somehow harder to actually do.

I don’t believe the solution is to abandon using AI entirely - for some of us that ship has already sailed, and from experience - there can be genuine value in these tools when used thoughtfully. Instead, I’m trying to recalibrate my relationship with them.

I’m learning to recognise the difference between productive use and dependency. I’ve decided that when I catch myself reaching for AI before I’ve even thought through the problem, I need to stop. Sometimes the slow, frustrating process of figuring things out yourself is exactly the point.

I’m going to force myself to write the first version of anything by hand - even if it’s messy, incomplete, or wrong. The AI can step in to help optimise, or explain concepts I don’t understand. But that initial act of creation, that struggle with the blank page, needs to stay human.

I’m also being more intentional about what I don’t offload. If I’m working in a domain I want to actually understand - like graphics programming - I want to make myself read the documentation, follow tutorials, and make mistakes - that’s how we as humans truly learn and grow.

The craft of software development has always been about problem-solving and learning. AI should amplify those things, not replace them. We need to remember that reality is messy and at times inefficient and that the process of learning isn’t something to be optimised - it’s the point.

I Think I Found a Privacy Exploit in ChatGPT

Published: Apr 14, 2023 · 1200 words · 6 minutes reading time

> tl;dr: I discovered that passing empty prompts to ChatGPT still generates responses. Initially, I thought these might be hallucinations, but now I suspect they could also include other users' responses from the API.

Last month, OpenAI unveiled their advanced large language model, GPT-4, attracting attention from developers, enterprises, media, and governments alike.

Before receiving my GPT-4 invite, I experimented with alpaca.cpp, designed to run models on CPUs with limited memory. I began by developing a simple web interface using NodeJS and sockets to wrap the command-line tool. Once I started working with the GPT-4 API, I quickly realised that with the right prompts, it could be a powerful tool. It has already helped me rewrite complex code into simpler methods and reduce complexity by moving code into functions:

Screenshot of ChatGPT suggesting a code improvement for creating tables by proposing a cell creation method

However, I noticed something peculiar — due to a bug in my code, I was sending empty prompts to the ChatGPT endpoint, but I still received seemingly random responses ranging from standard AI model introductions to information about people, places, and concepts. Inspired by the coinciding #StochasticParrotsDay online conference, I transformed this into a Mastodon Bot (now moved to botsin.space).

After running the bot for a month, I concluded that a significant portion of the responses without prompts might be responses for other users, potentially due to a bug that sends unintended responses when given an unsanitized empty prompt.

These could be stochastic hallucinations, random training data pulled out by entropy, or a mix of all three possibilities.

If this is the case, then ChatGPT would not be much better than a Markov Chain, and the entire large language model/AI market has been playing us for fools.

However, if I am correct, then the current OpenAI APIs could be made to leak private or sensitive data, simply by not sanitising their inputs…

The bot will continue to run until at least the end of this month, and all the content will be archived at stochasticparrot.lol.

Summary: what could it be?

I have three pet theories about what’s happening here. I’ve submitted this to OpenAI’s Disclosure and Bug Bounty programme:

  • These are impressive hallucinations, possibly sparks of AGI, but sometimes they become nonsensical, or the output is concerning - especially around personal medical questions.
  • ChatGPT randomly accesses its corpus and regurgitates data in some form. It really loves generating lists.
  • There is a bug, potentially a serious one. If the empty prompt issue is more thoroughly investigated, it might confirm that passing no prompt returns cached or previous responses.

It would be interesting if all three theories were true…

Update: Bug Bounty Response

I’ve since had a reply on Bugcrowd. It was first closed as Not Applicable with a response about the model; I reiterated that my report was about the API. A further response now confirms (from their perspective) that this is indeed hallucination:

Hi tanepiper,

Thank you for your submission to the OpenAI program and your patience on this submission. We appreciate your efforts in keeping our platform secure.

It looks like what you’re experiencing here is what happens when you send a request to the model without any query at all. You can try it out yourself in the API like this:

curl 'https://api.openai.com/v1/chat/completions' \
  -H 'authorization: Bearer <yourtoken>' \
  -H 'content-type: application/json' \
  --data-raw '{"messages":[{"role":"system","content":""}],"temperature":0.7,"max_tokens":256,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"model":"gpt-3.5-turbo","stream":true}' 

What’s happening there is that it’s starting by picking a token completely at random, and then another one, and so on until the previous tokens start to influence what comes afterward and it starts to “make sense”, even if it’s just a completely random hallucination from the model. It’s a really fascinating and weird way these models work. However, there’s no security impact here. As such, I do believe the current state of the submission is accurate.

All the best in your future bug hunting!

Best regards, - wilson_bugcrowd

So for now, case closed…


Setting up the Bot Infrastructure

To get the bot up and running, I wanted something free and easy to manage. In the end I opted to use GitHub Actions with scheduled tasks - this allowed me to set up a script that ran hourly, calling the ChatGPT API with an empty prompt and turning the response into a toot. I also found that passing only a space character to the Dall-E API produced images.
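The core of that hourly script can be sketched roughly like this - the function names and defaults are my own assumptions rather than the bot’s actual source, mirroring the empty-prompt request described above:

```javascript
// Sketch of the hourly empty-prompt call. Model and sampling parameters
// are assumptions for illustration, not the bot's real configuration.
function emptyPromptPayload(model = "gpt-3.5-turbo") {
  return {
    model,
    // The "prompt" is a single system message with empty content
    messages: [{ role: "system", content: "" }],
    temperature: 0.7,
    max_tokens: 256,
  };
}

// Call the chat completions endpoint and return the generated text
async function fetchParrotText(apiKey) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      authorization: `Bearer ${apiKey}`,
      "content-type": "application/json",
    },
    body: JSON.stringify(emptyPromptPayload()),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Whatever comes back from that call is then handed to the Mastodon posting step.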

With both scripts, after getting a response from OpenAI, I use it to generate one or more toots - longer responses become a thread of replies, while images are first downloaded and then uploaded as attachments.
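The threading step needs the text split into toot-sized chunks first. A minimal sketch (my own helper, not the bot’s code, assuming Mastodon’s default 500-character limit) might look like:

```javascript
// Split a long response into chunks that each fit in one toot, breaking
// on word boundaries. A single word longer than the limit would still
// overflow, but model output in practice never produces one.
const TOOT_LIMIT = 500; // Mastodon's default character limit

function splitIntoToots(text, limit = TOOT_LIMIT) {
  const toots = [];
  let current = "";
  for (const word of text.split(/\s+/)) {
    const candidate = current ? `${current} ${word}` : word;
    if (candidate.length > limit && current) {
      toots.push(current); // flush the full chunk, start a new one
      current = word;
    } else {
      current = candidate;
    }
  }
  if (current) toots.push(current);
  return toots;
}
```

The first chunk is posted as a normal status and each subsequent chunk as a reply to the previous one, forming the thread.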

Some of the more recent toots are below - ones with a parrot emoji (🦜) were created without a prompt, while if a prompt was used I add a speech bubble (💬) to indicate it.

Once I had this up and running, I created a small AstroJS website that outputs each entry as a post.

Making Polly Speak

Up to this point, I had just been working with text and images - but I had recently seen ElevenLabs in some tech news, and learned they had a text-to-speech API. After some initial issues (which used up all of the free credit), I eventually set up another action that took the OpenAI response and passed it to the ElevenLabs API - this provided an MP3 stream of the speech, which I saved locally and then uploaded to Mastodon as an attachment to a toot.

I also decided to see if I could get it to generate some polls. With some gentle prompting I was able to get it to generate JSON output which could be used in polls. Sadly, most of the time it seems to repeat the same questions over and over with just slightly different wording, occasionally coming up with something original.
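Because the model’s JSON can’t be trusted, the output needs validating before it’s posted. A sketch of that guard (the expected shape is my assumption, not the actual prompt contract) might be:

```javascript
// Validate the model's poll output before posting. Assumes the prompt
// asks for {"question": "...", "options": ["...", ...]} - returns null
// for anything malformed so the bot can skip that run.
function parsePoll(raw) {
  let poll;
  try {
    poll = JSON.parse(raw);
  } catch {
    return null; // model returned non-JSON text
  }
  if (typeof poll.question !== "string" || !Array.isArray(poll.options)) {
    return null;
  }
  // Mastodon polls take between 2 and 4 options by default
  if (poll.options.length < 2 || poll.options.length > 4) return null;
  return poll;
}
```

Anything that fails validation is simply dropped, which is cheaper than trying to coax the model into repairing its own output.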

I even went as far as trying to generate video content - not through Stable Diffusion, but by generating text themes to use with the Createomate API - allowing me to generate social media “fact” videos. Unfortunately this was a bit buggy, and due to the way Mastodon works, it can time out quite a bit.

A fun experiment

Overall, writing this bot was a fun experiment - but I probably learned more about writing better pipelines than I did about AI and LLMs. What did surprise me was how often the responses seemed to be answers to questions that were never asked - where are these responses being generated? Are we seeing the flicker of AGI? Or just the stochastic ramblings of a machine run by some sketchy people?