
In Which I Vibe-Code A Personal Library System

When I was a kid, I was interested in a number of professions that are now either outdated or have changed completely. One of those dreams involved checking out books and things to library patrons, and it centered on pulling out the little card and adding a date-due stamp.

Of course, if you’ve been to a library in the last 20 years, you know that most of them don’t work that way anymore. Either the librarian scans special barcodes, or you check materials out yourself simply by placing them just so, one at a time. Either way, you end up with a printed receipt listing all the materials, or an email. I ask you, what’s the fun in that? At least with the old way, you’d usually get a bookmark for each book by way of the due-date card.

As I got older and spent the better part of two decades in a job that I didn’t exactly vibe with, I seriously considered becoming a programmer. I took Java, Android, and UNIX classes at the local junior college, met my now-husband, and eventually decided I didn’t have the guts to actually solve problems with computers. And, unlike my husband, I have very little imagination when it comes to making them do things.

Fast forward to last weekend, the one before Thanksgiving here in the US. I had tossed around the idea of making a personal library system just for funsies a day or so before, and I brought it up again. My husband was like, do you want to make it tonight using ChatGPT? And I was like, sure — not knowing what I was getting into except for the driver’s seat, excited for the destination.

Continue reading “In Which I Vibe-Code A Personal Library System”


Kubernetes Cluster Goes Mobile In Pet Carrier

There’s been a bit of a virtualization revolution going on for the last decade or so, with tools like Docker and LXC making it possible to quickly deploy server applications without worrying much about dependency issues. Of course, as these tools got adopted, we needed more tools to scale them easily. Enter Kubernetes, a container orchestration platform that normally herds fleets of microservices across sprawling cloud architectures. It turns out, though, that it’s perfectly happy running on a tiny computer stuffed in a cat carrier.

This was a build for the recent KubeCon in Atlanta, and the project’s creator [Justin] wanted it to have an AI angle, since the core compute in the backpack is an NVIDIA DGX Spark. When someone scans the QR code, the backpack takes a picture and runs it through a two-node Kubernetes cluster on the Spark, where a local AI model stylizes the image and sends it back to the user. Only the AI workload runs on the Spark; [Justin] uses a LattePanda to handle most of everything else.
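
The write-up doesn’t include code, but the round trip [Justin] describes (photo in, stylized photo out) could be sketched roughly like this in Python. This is an illustration only: the endpoint names, the in-cluster service URL, and the use of Flask are all assumptions, not details from the project.

```python
# Hypothetical sketch of the photo round trip: a small web service
# (say, on the LattePanda) accepts an upload, forwards it to an
# AI-stylizer service exposed by the Kubernetes cluster on the DGX
# Spark, and returns the result. Names and URLs are invented.
import io

import requests
from flask import Flask, request, send_file

app = Flask(__name__)

# Assumed in-cluster address; a real deployment would use the
# Kubernetes service DNS name of the stylizer pod's service.
STYLIZER_URL = "http://stylizer.default.svc.cluster.local:8080/stylize"

@app.route("/snap", methods=["POST"])
def snap():
    photo = request.files["photo"].read()  # image from the QR-code page
    resp = requests.post(
        STYLIZER_URL,
        data=photo,
        headers={"Content-Type": "image/jpeg"},
        timeout=60,
    )
    resp.raise_for_status()
    return send_file(io.BytesIO(resp.content), mimetype="image/jpeg")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```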

To power the mobile cluster, [Justin] uses a small power bank, which gives around three hours of use before it needs to be recharged. He originally planned to use the conference WiFi as well, but it proved unreliable, so he switched to a USB tether to his phone. The rig was a big hit with conference-goers, though, getting used roughly every ten minutes while he had it on his back. Of course, you don’t need a fancy NVIDIA product to run a portable Kubernetes cluster. You can always use a few old phones to run one as well.

Continue reading “Kubernetes Cluster Goes Mobile In Pet Carrier”


An AI By Any Other Name

While there are many AI programs these days, they don’t all work in the same way. Most large language model “chatbots” generate text by taking input tokens and predicting the next token in the sequence. Image generators like Stable Diffusion, however, use a different approach. The method is, unsurprisingly, called diffusion. How does it work? [Nathan Barry] wants to show you, using a tiny demo called tiny-diffusion that you can try yourself. It generates — sort of — Shakespeare.

For Stable Diffusion, training begins with an image and an associated prompt. The training system then repeatedly adds noise and learns how the image degenerates, step by step, into pure noise. At generation time, the model starts with noise and reverses the process, and an image comes out. This is a bit simplified, but since something like Stable Diffusion deals with millions of pixels and huge data sets, it can be hard to train and to visualize in operation.
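
To make the add-noise/remove-noise loop concrete, here is a toy NumPy sketch of both halves on a one-dimensional signal. The “denoiser” is a stand-in that already knows the answer; in a real model it would be a trained network predicting the clean signal (or the noise) at each step. This shows the general idea only, not how Stable Diffusion or tiny-diffusion is actually implemented.

```python
# Toy diffusion on a 1-D signal: a forward process that mixes in
# Gaussian noise according to a schedule, and a reverse loop that
# starts from pure noise and walks back toward the data. The
# "denoiser" is a stand-in for the trained network.
import numpy as np

rng = np.random.default_rng(0)
T = 100                                  # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)       # per-step noise schedule
alphas = np.cumprod(1.0 - betas)         # cumulative signal retention

x0 = np.sin(np.linspace(0, 2 * np.pi, 64))   # the "clean data"

def forward(x, t):
    """Jump straight to step t: keep sqrt(alpha_t) of the signal."""
    eps = rng.standard_normal(x.shape)
    return np.sqrt(alphas[t]) * x + np.sqrt(1.0 - alphas[t]) * eps

def fake_denoiser(xt, t):
    """A real model would *predict* the clean signal from xt."""
    return x0                            # here we just cheat

# Reverse process: begin with noise, repeatedly guess the clean
# signal and re-noise it to the previous (less noisy) step.
x = rng.standard_normal(64)
for t in reversed(range(T)):
    x0_hat = fake_denoiser(x, t)
    x = forward(x0_hat, t - 1) if t > 0 else x0_hat

print("max error vs clean signal:", np.abs(x - x0).max())
```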

The beauty of tiny-diffusion is that it works on characters, so you can actually see what the denoising process is doing. It is small enough to run locally, if you consider 10.7 million parameters small. It is pretrained on Tiny Shakespeare, so what comes out is somewhat Shakespearean.

Continue reading “An AI By Any Other Name”

A photo of the LEGO sorter

Making A Machine To Sort One Million Pounds Of LEGO

You know what’s not fun? Sorting LEGO. You know what is fun? Making a machine to sort LEGO! That’s what [LegoSpencer] did, and you can watch the machine do its thing in the video below.

[Spencer] runs us through the process: first, quit your day job so you can get a job playing with LEGO; then research what previous work has been done in this area (plenty, it turns out); and then commit to making your own version both reproducible and extensible.

A sorting machine needs three main features: a feeder to dispense one piece at a time, a classifier to decide the type of piece, and a distributor to route the piece to a bin. Of course, the devil is in the details.
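
In Python, the skeleton of that loop might look like the following. The class names, timing, and bin mapping are invented for illustration; this isn’t [Spencer]’s code, it just shows how the three stages compose.

```python
# Invented skeleton of a sorter's control loop: the feeder stages one
# piece, the classifier names it, and the distributor routes it to a
# bin.
import time

class Feeder:
    def next_piece(self):
        """Advance the conveyor until exactly one piece is staged."""
        time.sleep(0.1)                 # stand-in for real actuation
        return "piece"

class Classifier:
    def classify(self, piece) -> str:
        """Camera plus model in the real machine; hardcoded here."""
        return "2x4_brick"

class Distributor:
    BINS = {"2x4_brick": 3}             # part type -> bin (illustrative)

    def route(self, part_type: str):
        bin_index = self.BINS.get(part_type, 0)   # bin 0 = unknown parts
        print(f"routing {part_type} to bin {bin_index}")

feeder, classifier, distributor = Feeder(), Classifier(), Distributor()
for _ in range(3):                      # sort three pieces
    piece = feeder.next_piece()
    distributor.route(classifier.classify(piece))
```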

Continue reading “Making A Machine To Sort One Million Pounds Of LEGO”

A circuit board in the shape of a business card is shown. The circuitry is confined to the left side of the board, and the rest is used for text.

(Neural) Networking With A Business Card

A PCB business card is a great way for electrical engineers to impress employers with their design skills, but the software such a card runs can be just as impressive as the hardware itself. As a programmer with an interest in embedded machine learning, [Dave McKinnon] wanted a card that showcased his skills, so he designed one that runs voice recognition.

[Dave] specifically wanted to run a neural network on his card, but it needed to be small enough to fit on a microcontroller. Voice recognition looked like a good fit, since audio can be represented with relatively little data, a microphone is cheap and easy to add to a circuit board, and there was already an example of someone running a similar voice-recognition network on an Arduino. To fit the neural network into 46 kB, it only distinguishes the words “one” through “nine,” displaying its guess on a seven-segment LED display. [Dave] first prototyped the system on an Arduino, then designed the circuit board around an RP2040.
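
For a sense of scale, a keyword-spotting model in that ballpark might look like the Keras sketch below: a small convolutional network over MFCC audio features, classifying the nine spoken digits. The layer sizes and feature shape are guesses for illustration, not [Dave]’s actual architecture.

```python
# A guess at a keyword-spotting model in the tens-of-kilobytes range:
# a small CNN over MFCC frames, classifying nine spoken digits. Layer
# sizes are illustrative, not taken from [Dave]'s card.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(49, 13, 1)),   # ~1 s of audio as MFCC frames
    tf.keras.layers.Conv2D(8, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(9, activation="softmax"),  # "one" .. "nine"
])
model.summary()   # sanity-check the parameter count against the budget

# Quantizing to 8 bits with TFLite is the usual way to squeeze a model
# like this onto a microcontroller such as the RP2040.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
print(f"model size: {len(tflite_model)} bytes")
```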

Continue reading “(Neural) Networking With A Business Card”


“AI, Make Me A Degree Certificate”

One of the fun things about writing for Hackaday is that it takes you to the places where our community hangs out. I was in a hackerspace in a university town the other evening, busily chasing my end-of-month deadline, as no doubt were my colleagues at the time too. In there were a couple of others: a member who’s an electronic engineering student at one of the local universities, and one of their friends from the same course. They were working on the hardware side of a group project, a web-connected device which, together with a team of several other students, they were creating from sensor to server to screen.

I have a lot of respect for my friend’s engineering abilities. I won’t name them, but they’ve done a bunch of really accomplished projects, and some of them have even been featured here by my colleagues. They are already a very competent engineer indeed, and when in time they receive the bit of paper to prove it, they will go far. The other student was clearly cut from the same cloth: as people say in hackerspaces, “one of us”.

They were making great progress with the hardware and low-level software while they were there, but I was saddened by their lament over their classmates. In particular, it seemed they had a real problem with vibe coding: they estimated that only a small percentage of their classmates could code by hand as they did, and the result was a lot of impenetrable code that looked good but often simply didn’t work.

I came away wondering not how AI could be used to generate such poor-quality work, but how on earth this could be viewed as acceptable in a university.

Continue reading ““AI, Make Me A Degree Certificate””

Graph showing accuracy vs model

Why You Shouldn’t Trade Walter Cronkite For An LLM

Has anyone noticed that news stories have gotten shorter and pithier over the past few decades, sometimes seeming like summaries of what you used to peruse? In spite of that, huge numbers of people rely on large language model (LLM) “AI” tools to get their news in the form of summaries. According to a study by the BBC and the European Broadcasting Union, 47% of people find news summaries helpful. Over a third of Britons say they trust LLM summaries, and they probably ought not to, according to the Beeb and co.

It’s a problem we’ve discussed before: as OpenAI researchers themselves admit, hallucinations are unavoidable. This more recent BBC-led study took a microscope to LLM summaries in particular, to find out how often and how badly they were tainted by hallucination.

Not all of those errors were considered a big deal, but in 20% of cases (on average) there were “major issues”, though that figure is more or less independent of which model was being used. If there’s good news here, it’s that those numbers are better than they were when the Beeb last performed this exercise earlier in the year. The whole report is worth reading if you’re a toaster-lover interested in the state of the art. (Especially if you want to see whether this human-produced summary works better than an LLM-derived one.) If you’re a Luddite, by contrast, you can rest easy that your instinct not to trust clanks remains reasonable… for now.

Either way, for the moment, it might be best to restrict the LLM to game dialog, and leave the news to totally-trustworthy humans who never err.