• AI, Software Development, and Centralization

    Two posts on AI that caught my attention recently:

    Don’t fall into the anti-AI hype

    I love writing software, line by line. It could be said that my career was a continuous effort to create software well written, minimal, where the human touch was the fundamental feature. I also hope for a society where the last are not forgotten. Moreover, I don’t want AI to economically succeed, I don’t care if the current economic system is subverted (I could be very happy, honestly, if it goes in the direction of a massive redistribution of wealth). But, I would not respect myself and my intelligence if my idea of software and society would impair my vision: facts are facts, and AI is going to change programming forever.

    Antirez is the creator of Redis, a well-respected caching software that I’ve used professionally. Writing this kind of software requires extra care and thoughtfulness to meet the project’s performance and reliability goals. So I find it interesting that LLMs are very useful to Antirez in this codebase.

    I do think that LLMs will be a standard tool for programming going forward. But that’s probably been obvious for folks working in the field for a while now.

    What caught my attention more is his views on the ecosystem around LLMs:

    However, this technology is far too important to be in the hands of a few companies. For now, you can do the pre-training better or not, you can do reinforcement learning in a much more effective way than others, but the open models, especially the ones produced in China, continue to compete (even if they are behind) with frontier models of closed labs. There is a sufficient democratization of AI, so far, even if imperfect. But: it is absolutely not obvious that it will be like that forever. I’m scared about the centralization. At the same time, I believe neural networks, at scale, are simply able to do incredible things, and that there is not enough “magic” inside current frontier AI for the other labs and teams not to catch up (otherwise it would be very hard to explain, for instance, why OpenAI, Anthropic and Google are so near in their results, for years now).

    This aligns well with how I feel about LLMs. I think anyone who cares about computing and its impact on society has a vested interest in seeing that AI does not become centralized. And I agree that there’s too much practical use for LLMs in coding for things to totally evaporate in a cloud of hype. (Don’t take this to mean there isn’t too much hype around LLMs and a ton of worthless AI slop. I think the industry as a whole will see the bubble burst, but, like previous tech bubbles, we will see some aspects of the technology make it through.)

    Birchtree blogged about using LLMs to quickly build personalized software:

    LLMs have made simple software trivial

    I was out for a run today and I had an idea for an app. I busted out my own app, Quick Notes, and dictated what I wanted this app to do in detail. When I got home, I created a new project in Xcode, I committed it to GitHub, and then I gave Claude Code on the web those dictated notes and asked it to build that app.

    About two minutes later, it was done…and it had a build error. 😅

    What’s happening here, I think, will quickly become a major shift in the industry, and it’s one that I’m excited about: software development will become increasingly decentralized and personal. For ages, we’ve all had incredibly powerful computers (and phones!) accessible to us on a daily basis. Yet our dominant computing experiences have become increasingly centralized. Just think how much of our digital lives run in the cloud rather than on our personal machines. Now, don’t get me wrong, I love the web. But what I love about it is how open and accessible it is to everyone, as both a reader and a writer. Today, however, the web is dominated by a handful of companies (Google, namely). And it’s the same situation on our personal devices: Apple, Google, and Microsoft retain enormous power over what’s allowed to run on their operating systems.

    But, I think there’s a path into the future where LLMs help chip away at the centralization we are currently experiencing. Apple and others will lose their positions as gatekeepers and toll extractors if people with no development skills are able to use LLMs to build whatever idea they have into functioning software that runs on their personal devices. I think the trick here is to ensure we don’t just replace Apple, Google, and Microsoft with OpenAI, Anthropic, and, well, Google again.

    To that end, it’s become a hobby of mine to experiment with open-source LLMs and the ecosystem around them on my Framework Desktop, which can easily run large models on its shared-memory architecture. I hope to write in more detail about what I’ve been experimenting with here. But for now, here are a few pointers if you are interested in this space:

    • clawd.bot: Think open-source Siri, with many more capabilities and integrations. I’ve only just started playing with this, but it’s super interesting.
    • opencode: A Claude Code-like CLI tool that can use LLMs running in the cloud as well as models you run locally, which is how I’ve been using it.
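    One detail worth knowing if you want to try this yourself: local runtimes such as Ollama and llama.cpp’s server expose an OpenAI-compatible HTTP API, which is what lets a single tool target both cloud and local models. Below is a minimal sketch of that pattern in Python; the base URL, port, and model name are assumptions for illustration, not opencode’s actual internals.

```python
import json
import urllib.request

# Sketch of talking to a local, OpenAI-compatible model server.
# Assumptions for illustration: an Ollama-style server on localhost:11434
# and a model named "qwen2.5-coder" pulled locally.

def build_request(prompt: str, model: str = "qwen2.5-coder") -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def chat(prompt: str, base_url: str = "http://localhost:11434/v1") -> str:
    """POST the request to the local server and return the model's reply text."""
    body = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # OpenAI-compatible servers return replies under choices[0].message.content
    return data["choices"][0]["message"]["content"]
```

    Because the request shape is the same everywhere, pointing a tool at a local model is usually just a matter of changing the base URL and model name.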
  • Image

    Currently reading: Hole in the Sky by Daniel H. Wilson 📚

  • Image

    Finished reading: Moby-Dick by Herman Melville 📚

    Epic! I read this with a book club, and I’m very glad it was the club’s pick because I’m not sure I would have read it otherwise. Moby-Dick was very different from what I expected: more enjoyable and more relevant than I anticipated.

  • pluralistic.net/2026/01/01/39c3

    Which is why I’ve come to Hamburg today. Because, after decades of throwing myself against a locked door, the door that leads to a new, good internet, one that delivers both the technological self-determination of the old, good internet, and the ease of use of Web 2.0 that let our normie friends join the party, that door has been unlocked.

    I’m a fan of Cory Doctorow and the EFF; I think they argue for the right side of most issues. If you care about open technology (and I’d argue everyone should), this post is worth a read. It describes an opportunity for the world to loosen the grip that American tech companies have over technology and the way it’s used.

    I’ve been aware of America’s DMCA law, and its flaws, since the law’s inception. What I didn’t realize is that America has coerced most of the world into passing similar laws locally. Give this post a read to see why that’s important and why there’s an opportunity now to unwind the effects of these laws.

  • www.joanwestenberg.com/the-case-for-blogging-in-the-ruins

    When people talk about the Enlightenment as if it were an intellectual garden party where everyone sipped wine and agreed about reason, they’re missing the part where producing and distributing ideas was (in fact) dangerous and thankless work.

    Diderot’s project was fundamentally about building infrastructure for thinking. He wanted to create a shared repository of human knowledge that anyone could access, organized in a way that invited exploration and cross-referencing. He believed that structuring information properly could change how people thought.

    He was right.

    Couldn’t agree more with Joan’s whole post. Start a blog! Joan links to some good platforms.

    Or at least give RSS readers another try!

  • Year in books for 2025

    Here are the books I finished reading in 2025.

    • Neuromancer
    • Sky Daddy
    • The Buffalo Hunter Hunter
    • Electrify
    • I, Robot
    • Earthlight
    • In Defence of Food
    • Mr. Penumbra's 24-Hour Bookstore
    • Sourdough
    • Old Man's War
    • Moonbound
    • Parable of the Talents
    • Parable of the Sower
    • Orbital
  • Image

    Finished reading: Neuromancer by William Gibson 📚

    The cyberpunk classic. I’ve read it before, but it’s been a long time and I had forgotten the plot. It’s such a great book, and it’s incredible that it defined a genre.

  • jwz: The original Mozilla “Dinosaur” logo artwork

    It has come to my attention that the artwork for the original mozilla.org “dinosaur” logo is not widely available online. So, here it is.

  • Perl’s decline was cultural

    There’s been a flurry of discussion on Hacker News and other tech forums about what killed Perl. I wrote a lot of Perl in the mid 90s and subsequently worked on some of the most trafficked sites on the web in mod_perl in the early 2000s, so I have some thoughts. My take: it was mostly baked into the culture. Perl grew amongst a reactionary community with conservative values, which prevented it from evolving into a mature general purpose language ecosystem. Everything else filled the gap.

  • Hank Green And The Fantastical Tales of God AIs

    Savannah, Georgia—In the old lacquered coffee shop on the corner of Chippewa Square, I eat a blueberry scone the size of a young child’s head and sip cold black coffee while staring incredulously at my phone. I’m watching Hank Green interview Nate Soares, co-author of the new book If Anyone Builds It, Everyone Dies, and I am in utter disbelief at the conversation taking place before my eyes. Hank Green, the internet’s favorite rational science nerd, does not appear to be approaching this interview with any critical lens at all. Instead, he seems to be outright gushing over Soares, an AI-doomerist who’s made it impossible to know where his message ends and big tech’s lobbying begins. Let me explain…

    Pretty good dive into part of why I don’t trust the companies producing the frontier AI models.

    I also don’t buy AI doomerism. I agree it’s a FUD and regulatory-capture tactic that also distracts from the real issues associated with AI.

    There’s no putting LLMs back in the bag. So rather than anoint a few companies as monopolies over the technology, I’d much rather see a continued open-source ecosystem around it, which I think will help us find beneficial ways to apply this technology for individuals rather than concentrating all the wealth and power in a few companies.

  • Image

    Finished reading: Sky Daddy by Kate Folk 📚

    This book is funny and wild! It starts with an epigraph from Moby-Dick, which is fitting since I’ve been reading that with my book club.

  • Was not expecting to find some hope and inspiration on the topics of climate change and politics from Al Gore on the Zero podcast:

  • Some photos from the nearby Doll’s Head Trail:

    Auto-generated description: A tranquil autumn landscape features a marshy area with patches of grass, scattered trees with fall foliage, and a cloudy sky overhead.

    Auto-generated description: A serene autumn landscape features a reflective pond surrounded by trees with colorful foliage under a cloudy sky.

    Auto-generated description: A serene pond reflects a cloudy sky, surrounded by trees with autumn foliage.

    Auto-generated description: A torso of a doll with blue earmuffs is mounted on a metal spring in a forest setting.

    Auto-generated description: A doll’s head is mounted on a rusted wheel attached to a tree stump, with a wooden disc featuring drawings and notes hanging below, set against a forested background.

    Auto-generated description: A yellow container with a rocket drawing and FLY ME TO THE MOON text is attached to a tree branch alongside a toy dinosaur.

    Auto-generated description: A small wooden bridge leads to a narrow path through a dense, wooded area with scattered leaves.

    Auto-generated description: A winding pathway covered with scattered leaves passes through a forested area with trees and greenery.

  • My current thoughts on LLMs

    This is a longer post; here are the main ideas:

    • I don’t trust OpenAI, Anthropic, Google, Facebook, Apple or any of the companies producing the frontier models
    • We need to consider people’s and the environment’s needs over companies and AI
    • LLMs as a technology are useful, and probably here to stay
    • LLMs are like digital librarians: they can help people navigate through the huge swaths of information we all have access to and deal with on a daily basis

    Trust

    I don’t trust OpenAI, Anthropic, Google, Facebook, Apple* or any of the companies producing the frontier models.

    *Apple: They really aren’t on the frontier, but they are one of the few companies prioritizing models that are private and run on your devices. They also, however, want to own a proprietary platform and lock users in to generate more services revenue. So they are a mixed bag, and it’s not clear-cut. Still, I don’t think we should entirely trust their motives, and we should treat them like the rest of the companies on this list.

    The mistrust is in these entities as developers and operators of LLMs. The LLMs themselves need to be evaluated for trustworthiness in a different way. Currently, the models seem to be accurate enough that they are not intentionally misleading you. But the corporate entities behind them, I don’t think you can count on to be trustworthy.

    There’s so much money pouring into this industry with the expectation of a return on those investments. I think the bubble will eventually burst, and any companies left standing will need to start showing a profit. Once they can no longer burn through investor cash, where is that money going to come from? The users.

    It’s not that I’m against paying for a product I find useful (in fact, I prefer that business model over most others). It’s that the gap between what people are currently paying and what has been spent to develop and run these models is enormous. I think there will be pressure to extract as much money as possible from users and from the businesses built on top of these LLMs. This could easily play out like it did with Microsoft in the 90s, with monopolistic business practices. At best, enshittification is the outcome.

    So what do we do? Well, there’s not a great answer that I can see, but I think we should be skeptical of how AI gets woven into our lives and businesses. It would be best if LLMs became an open commodity, where it is easy to develop and run models on accessible hardware. Avoid tying yourself and your business to the proprietary platforms offered by these companies. Don’t build ChatGPT apps, for example.

    Regulation may be an option here, but that is a double-edged sword for sure. I think our best bet is to proactively move toward open ecosystems and platforms and avoid proprietary ones.

    The environmental cost

    All of the companies at the frontier are in a giant race to increase compute capacity, resulting in building new, larger, and more power hungry data centers.

    Training and, to a lesser extent, hosting these models is incredibly resource intensive. And, of course, all that electricity gets converted into heat, so keeping these data centers cool often consumes large amounts of water.

    This is a big problem that needs to be curbed.

    When these data centers are being constructed, it’s not uncommon for the local power utility to give the data center an incredibly low rate for the electricity it consumes. The cost of the utility expanding its supply and upgrading the grid then often gets pushed onto all the other consumers. I think utility regulators should make it illegal for utilities to give these kinds of deals to AI data centers. Instead, the companies building and operating these data centers should bear the cost of the grid and supply upgrades that are needed. Regulators should also require zero-carbon, renewable power sources. Let the companies with billions to spend help push the electric industry toward zero-carbon goals.

    Even worse is when AI data centers forgo the local utility and generate their own power using natural gas or some other carbon emitting source. This should also be stopped.

    On the topic of water consumption, I think local municipalities also need to put safe limits and regulations around water use for this purpose. I think the best policy is to forbid data center construction near population centers and within city limits. Save the land and water for people.

    The environmental and human costs of building out AI infrastructure must be considered while this industry is booming. We should not compromise on this.

    LLMs as a Technology

    While the economics and levels of investment around AI/LLMs are, I believe, creating a bubble, I think the technology itself, the LLM, is useful and here to stay. Even if today’s LLMs don’t become significantly more powerful or ever achieve AGI (however we define that), I think the current capabilities of these models are incredibly useful.

    But the technology, like all technologies, has its limits. I don’t think AI should replace humans in any workflow or creative endeavor. That is, at best, foolish and, at worst, dangerous.

    Instead, I think the current models are best at augmenting humans in their endeavors.

    There’s so much data on the internet, and even within a single company or organization, that it’s far too much for any human, or even a group of humans, to keep up with. So I think there’s a huge benefit to using LLMs like digital librarians to help us navigate through it all. It’s like the computer in Star Trek: you talk to it in natural language, it searches through mountains of data, and it comes back with the information you need plus pointers on where to go for more detail.
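    The “librarian” pattern described above is essentially retrieval: find the handful of documents relevant to a question, then hand only those to the model as context. Here’s a toy sketch of that idea, with naive word-overlap scoring standing in for the embeddings and vector indexes real systems use; the documents are made up for illustration.

```python
# Toy sketch of the "digital librarian" pattern: retrieve the few documents
# relevant to a question, then assemble a prompt that hands only those to an
# LLM as context. Word-overlap scoring is purely illustrative.

def score(query: str, doc: str) -> int:
    """Count document words that also appear in the query (case-insensitive)."""
    words = set(query.lower().split())
    return sum(1 for w in doc.lower().split() if w in words)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents that best match the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the context-plus-question prompt an LLM would answer from."""
    context = "\n---\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Redis is an in-memory data store often used as a cache.",
    "Moby-Dick is a novel by Herman Melville.",
    "mod_perl embeds a Perl interpreter in the Apache web server.",
]
print(build_prompt("What is Redis used for?", docs))
```

    The point of the pattern is that the model never has to “keep up with” the whole pile of data; the retrieval step narrows mountains of information down to the few pieces worth reading.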

    I think LLMs can also be used to shield people from the mundane, mindless tasks we all do on a daily basis, freeing us up to focus on creative and important endeavors. Let the LLM go through my email, filter out the spam, and surface the most important and actionable messages. This and a million other use cases are all good fits for LLMs.

    Conclusion

    There’s a quote by Brendon Bigley on a recent episode of First, Last, Everything:

    “…that’s not inherently a terrible idea. It’s just that the worst people alive got their hands on it first…”

    He was talking about crypto and NFTs, but I think something similar applies to LLMs. I think it’s regrettable how this technology is being rolled out, but the technology itself is not terrible.

    But, unlike crypto, I think LLMs are mainstream and not going away anytime soon, so we ought to find ways that we can leverage this technology and shape our relationship to it to maximize positive outcomes for people over corporations and capitalists.

    I think success looks like this: people benefit from and are empowered by LLMs rather than replaced by them; the environment is not sacrificed to power LLMs; and we build open, communal systems and platforms, rather than proprietary ones, to enable a new generation of applications and systems.

    If we focus on this, I think we can avoid dystopia. Well, at least an AI dystopia.

  • Coffeeneuring Ride #3: Switchyards Cabbagetown

    Sure, this is a coworking space but they have coffee and tea. Today I’m having a green tea.

    #Coffeeneuring2025

    Image
  • Coffeeneuring Ride #2: Stereo

    Hip spot that plays records. At night, it’s a bar with DJs. But today I’m drinking a cortadito.

    #Coffeeneuring2025

    Image
    Image

  • I made a website with Apple iWeb in 2025

    Apple’s late-2000s WYSIWYG website creator still works, as long as you have an old Mac.

  • Starting off the Coffeeneuring Challenge with a ride to Peoplestown Coffee.

    #Coffeeneuring2025

    Image
  • Cyborgs vs rooms, two visions for the future of computing (Interconnected)

    Personally I’m more interested in room-scale computing and where that goes. Multi-actor and multi-modal. We live in the real world and together with other people, that’s where computing should be too. Computers you can walk into… and walk away from.

    An interesting way to look at the two camps future devices seem to be falling into.

    I also feel more interest in room-scale computing. I like the idea that computing is a place I go, rather than something that is constantly a part of me (and thereby constantly competing for my attention). I like the idea that when I need a break from computing, I can just leave the room.

  • Image

    Finished reading: The Buffalo Hunter Hunter by Stephen Graham Jones 📚

    I really enjoyed this one! Would recommend if you are into spooky historical fiction.