Size (and Units) Really Do Matter

We miss the slide rule. It isn’t so much that we liked getting an inexact answer using a physical moving object. But to successfully use a slide rule, you need to be able to roughly estimate the order of magnitude of your result. The slide rule’s computation of 2.2 divided by 8 is the same as it is for 22/8 or 220/0.08. You have to interpret the answer based on your sense of where the true answer lies. If you’ve ever had some kid at a fast food place enter the wrong numbers into a register and then hand you a ridiculous amount of change, you know what we mean.

Recent press reports highlighted a paper from Nvidia that claimed a data center consuming a gigawatt of power could require half a million tons of copper. If you aren’t an expert on data center power distribution and copper, you could take that number at face value. But as [Adam Button] reports, you should probably be suspicious of this number. It is almost certainly a typo. We wouldn’t be surprised if you click on the link and find it fixed, but it caused a big news splash before anyone noticed.

Thought Process

Best estimates put the total copper on the entire planet at about 6.3 billion metric tons. We’ve actually only found a fraction of that and mined even less. Roughly 700 million metric tons of copper are actually in circulation, and demand runs about 28 million tons a year (some of which is met with recycling, so even less new copper is produced annually).

Simple math tells us that a single data center would consume nearly 1.8% of the world’s annual copper demand. While that could be true, it seems suspicious on its face.

Digging further in, you’ll find the paper mentions 200 kg per megawatt. So a gigawatt should be 200,000 kg, which is actually only 200 metric tons. That’s a far cry from 500,000 tons. We suspect they were rounding up from the 440,000 pounds in 200 metric tons to “up to a half a million pounds,” and then flipped pounds to tons.
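If you want to run the sanity check yourself, here’s a quick back-of-the-envelope sketch in Python, using the paper’s 200 kg-per-megawatt figure and the demand estimate above:

```python
# Back-of-the-envelope check of the copper figure. Uses the paper's
# 200 kg per megawatt and the ~28 million ton annual demand cited above.

KG_PER_MW = 200
DATA_CENTER_MW = 1_000            # one gigawatt
LBS_PER_METRIC_TON = 2_204.6

copper_kg = KG_PER_MW * DATA_CENTER_MW
copper_tons = copper_kg / 1_000
copper_lbs = copper_tons * LBS_PER_METRIC_TON

print(f"Copper needed: {copper_tons:,.0f} metric tons "
      f"({copper_lbs:,.0f} pounds)")              # 200 tons, ~440,000 pounds

# The claimed figure versus annual global demand
claimed_tons = 500_000
annual_demand_tons = 28_000_000
print(f"Claimed figure: {claimed_tons / annual_demand_tons:.1%} "
      f"of annual demand")                         # ~1.8%
```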

Continue reading “Size (and Units) Really Do Matter”

NVIDIA Drops Pascal Support On Linux, Causing Chaos On Arch Linux

It’s no surprise that NVIDIA is gradually dropping support for older video cards, with the Pascal (GTX 10xx) GPUs most recently getting the axe. What’s more surprising is the terrible way this is being handled by certain Linux distributions, with Arch Linux currently a prime example.

On these systems, updating the OS with a Pascal, Maxwell, or similarly unsupported GPU results in the new driver failing to load, dumping the user back at the CLI to sort things out from there. This issue is summarized by [Brodie Robertson] in a recent video.

Here the ‘solution’ is to switch to a legacy driver from the Arch User Repository (AUR), which feels somewhat sketchy. Worse, this legacy option breaks Steam, which relies on the official NVIDIA dependencies, so restoring that functionality requires a further series of hacks. Fortunately, the Arch Wiki provides a starting point on what to do.

It’s also worth noting that this legacy driver on the AUR is being maintained by [ventureo] of the CachyOS project, whose efforts are the sole reason why these older NVIDIA cards are still supported at all on Linux with the official drivers. While there’s also the Nouveau driver, this is effectively a reverse-engineering project with all of the problems that come with such an effort, even if it may be ‘good enough’ for older GPUs.

Continue reading “NVIDIA Drops Pascal Support On Linux, Causing Chaos On Arch Linux”

Kubernetes Cluster Goes Mobile In Pet Carrier

There’s been a bit of a virtualization revolution going on for the last decade or so, where tools like Docker and LXC have made it possible to quickly deploy server applications without worrying much about dependency issues. Of course, as these tools got adopted, we needed more tools to scale them easily. Enter Kubernetes, a container orchestration platform that normally herds fleets of microservices in sprawling cloud architectures, but it turns out it’s perfectly happy running on a tiny computer stuffed in a cat carrier.

This was a build for the recent KubeCon in Atlanta, and the project’s creator [Justin] wanted it to have an AI angle, since the core compute in the backpack is an NVIDIA DGX Spark. When someone scans the QR code, the backpack takes a picture, runs it through a local AI model on a two-node cluster on the Spark that stylizes the image, and sends the result back to the user. Only the AI workload runs on the Spark; [Justin] uses a LattePanda to handle most everything else rather than hosting it all on the Spark.
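The write-up doesn’t publish the actual pipeline, but the request flow might look something like this minimal sketch. The /snap route, the in-cluster stylizer URL, and the choice of Flask are stand-ins of ours, not [Justin]’s code:

```python
# Minimal sketch of the scan -> photo -> stylize -> respond flow.
# Everything here (route name, service URL) is a hypothetical stand-in.
from io import BytesIO

import requests
from flask import Flask, request, send_file

app = Flask(__name__)

# Hypothetical in-cluster service wrapping the local image-stylizing model
STYLIZE_URL = "http://stylizer.default.svc.cluster.local:8080/stylize"

@app.route("/snap", methods=["POST"])
def snap():
    # The kiosk or phone posts the captured photo as the request body
    photo = request.get_data()

    # Forward the photo to the model service running on the Spark nodes
    resp = requests.post(
        STYLIZE_URL,
        data=photo,
        headers={"Content-Type": "image/jpeg"},
        timeout=60,
    )
    resp.raise_for_status()

    # Hand the stylized image back to whoever scanned the QR code
    return send_file(BytesIO(resp.content), mimetype="image/jpeg")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```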

To power the mobile cluster, [Justin] is using a small power bank, which gives around three hours of use before it needs to be recharged. The original plan was to use the conference WiFi as well, but that proved unreliable, so he switched to a USB tether to his phone. It was a big hit with conference goers, though, with people using it about every ten minutes while he had it on his back. Of course, you don’t need a fancy NVIDIA product to run a portable Kubernetes cluster. You can always use a few old phones to run one as well.

Continue reading “Kubernetes Cluster Goes Mobile In Pet Carrier”

This Week In Security: Perplexity V Cloudflare, GreedyBear, And HashiCorp

The Internet is fighting over whether robots.txt applies to AI agents. It all started when Cloudflare published a blog post detailing what the company was seeing from Perplexity crawlers. Of course, automated web crawling is part of how the modern Internet works, and almost immediately after the first web crawlers were written, one of them managed to DoS (Denial of Service) a web site back in 1994. That’s when the robots.txt file was first designed.

Make no mistake, robots.txt on its own is nothing more than a polite request for someone else on the Internet not to index your site. The more aggressive approach is to add rules to a Web Application Firewall (WAF) that detect and block a web crawler based on its user-agent string and source IP address. Cloudflare makes the case that Perplexity is not only intentionally ignoring robots.txt, but also actively disguising its web crawling traffic by using IP addresses outside its normal range for these requests.
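As a rough illustration of that WAF-style blocking (not Cloudflare’s actual rules), a sketch that rejects requests by user-agent substring or source network might look like the following; the blocklists are invented for the example:

```python
# Rough sketch of user-agent and source-IP blocking, the kind of rule a
# WAF applies in front of a site. Blocklists here are illustrative only.
from ipaddress import ip_address, ip_network

from flask import Flask, abort, request

app = Flask(__name__)

BLOCKED_AGENT_SUBSTRINGS = ["ExampleBot", "SomeCrawler"]   # hypothetical bots
BLOCKED_NETWORKS = [ip_network("192.0.2.0/24")]            # TEST-NET example

@app.before_request
def waf_check():
    # Reject anything that advertises itself as a blocked crawler
    agent = request.headers.get("User-Agent", "").lower()
    if any(s.lower() in agent for s in BLOCKED_AGENT_SUBSTRINGS):
        abort(403)

    # Reject traffic originating from blocked address ranges
    src = ip_address(request.remote_addr)
    if any(src in net for net in BLOCKED_NETWORKS):
        abort(403)

@app.route("/")
def index():
    return "Hello, human (probably)."
```

The catch is that both signals can be spoofed, which is exactly what Cloudflare accuses Perplexity of doing.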

This isn’t the first time Perplexity has landed in hot water over its web scraping and AI training endeavors. But Perplexity has published a blog post explaining that this time is different!

And there’s genuinely an interesting argument to be made that robots.txt is aimed at indexing and AI training traffic, and that agentic AI requests are a different category. Put simply, Perplexity’s bots ignore robots.txt when a live user asks them to. Is that bad behavior, or is it what we should expect? This question will have to be settled as AI agents become more common.

Continue reading “This Week In Security: Perplexity V Cloudflare, GreedyBear, And HashiCorp”

This Week In Security: Spilling Tea, Rooting AIs, And Accusing Of Backdoors

The Tea app has had a rough week. It’s not an unfamiliar story: Unsecured Firebase databases were left exposed to the Internet without any authentication. What makes this story particularly troubling is the nature of the app, and the resulting data that was spilled.

Tea is a “dating safety” application strictly for women. To enforce this, creating an account requires an ID verification process in which prospective users share their government-issued photo IDs with the platform. And that brings us to the first Firebase leak: 59 GB of photo IDs and other photos belonging to a large subset of users. This was not the only problem.

A second database was discovered as well, and this one contained private messages between users. As one might imagine, given the subject matter of the app, many of these DMs contain sensitive details. This one was apparently not an unsecured Firebase database, but a separate problem where any valid API key could access any DM from any user.
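That failure mode, authenticated but never authorized, is classic broken access control. As a purely illustrative sketch of the pattern (not Tea’s actual backend), the broken and fixed versions of such an endpoint might differ like this:

```python
# Illustrative sketch of the broken-authorization pattern described above.
# This is not Tea's actual code, just the general shape of the bug.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Pretend data store: message ID -> (owner user ID, text)
MESSAGES = {1: ("alice", "meet at 7?"), 2: ("bob", "sure")}
API_KEYS = {"key-alice": "alice", "key-bob": "bob"}

def authenticated_user():
    user = API_KEYS.get(request.headers.get("X-Api-Key", ""))
    if user is None:
        abort(401)
    return user

# BROKEN: any valid API key can read any user's message
@app.route("/v1/messages/<int:msg_id>")
def get_message_broken(msg_id):
    authenticated_user()              # authenticates, but never authorizes
    record = MESSAGES.get(msg_id)
    if record is None:
        abort(404)
    owner, text = record
    return jsonify({"owner": owner, "text": text})

# FIXED: the caller must actually own the message
@app.route("/v2/messages/<int:msg_id>")
def get_message_fixed(msg_id):
    user = authenticated_user()
    record = MESSAGES.get(msg_id)
    if record is None:
        abort(404)
    owner, text = record
    if owner != user:
        abort(403)                    # authenticated is not authorized
    return jsonify({"owner": owner, "text": text})
```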

This is the sort of security failing that is difficult for a company to recover from. And while it should be a lesson to users not to trust their sensitive messages to closed-source apps with questionable security guarantees, history suggests that few will learn it, and we’ll be covering yet another train wreck of similar magnitude in a few months.

Continue reading “This Week In Security: Spilling Tea, Rooting AIs, And Accusing Of Backdoors”

Hackaday Links: June 8, 2025

When purchasing high-end gear, it’s not uncommon for manufacturers to include a little swag in the box. It makes the customer feel a bit better about the amount of money that just left their wallet, and it’s a great way for the manufacturer to build some brand loyalty and perhaps even get their logo out into the public. What’s not expected, though, is for the swag to be the only thing in the box. That’s what a Redditor reported after a recent purchase of an Nvidia GeForce RTX 5090, a GPU that lists for $1,999 but is so in-demand that it’s unobtainium at anything south of $2,600. When the factory-sealed box was opened, the Redditor found it stuffed with two cheap backpacks instead of the card. To add insult to injury, the bags didn’t even sport an Nvidia logo.

The purchase was made at a Micro Center in Santa Clara, California, and an investigation by the store revealed that 31 other cards had been similarly tampered with, although there’s no word on what those boxes contained in lieu of the intended hardware. The fact that the boxes were apparently sealed at the factory with authentic anti-tamper tape seems to suggest the substitutions happened very high in the supply chain, possibly even at the end of the assembly line. It’s a little hard to imagine how a factory worker was able to smuggle 32 high-end graphics cards out of the building, so maybe the crime was committed lower down the supply chain by someone with access to factory seals. Either way, the thief or thieves ended up with almost $100,000 worth of hardware, and with that kind of incentive, this kind of thing will likely happen again. Keep your wits about you when you make a purchase like this.

Continue reading “Hackaday Links: June 8, 2025”

Brazilian Modders Upgrade NVidia Geforce GTX 970 To 8 GB Of VRAM

Although NVidia’s current disastrous RTX 50-series is getting all the attention right now, this wasn’t the company’s first misstep. Back in 2014, when NVidia released the GTX 970, users were quickly dismayed to find that their ‘4 GB VRAM’ GPU actually had only 3.5 GB of full-speed memory, with the remaining 512 MB accessed in a much slower way, at just 1/7th of the normal speed. Back then, NVidia was subject to a $30-per-card settlement with disgruntled customers, but there’s a way to at least partially fix these GPUs, as demonstrated by a group of Brazilian modders (original video with horrid English auto-dub).

The mod itself is quite straightforward: the original 512 MB, 7 Gbps GDDR5 memory modules are replaced with 1 GB, 8 Gbps chips, and a resistor is added to the PCB so the GPU recognizes the higher-density VRAM ICs. Although this doesn’t fix the fundamental split-VRAM issue of the ASIC, it does give the card access to 7 GB of faster, higher-density VRAM. In benchmarks, performance increased massively, with the Unigine Superposition score nearly doubling.
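For a rough sense of what the swap changes, assuming the card’s eight memory chips on a 256-bit bus, the capacity and theoretical peak bandwidth work out like this:

```python
# Back-of-the-envelope numbers for the GTX 970 VRAM swap. Assumes eight
# memory chips on a 256-bit bus; these are theoretical peaks, not benchmarks.
CHIPS = 8
BUS_WIDTH_BITS = 256

def totals(gb_per_chip, gbps_per_pin):
    capacity_gb = CHIPS * gb_per_chip
    bandwidth_gbs = BUS_WIDTH_BITS * gbps_per_pin / 8   # bits -> bytes
    return capacity_gb, bandwidth_gbs

stock = totals(gb_per_chip=0.5, gbps_per_pin=7)    # 4 GB, 224 GB/s peak
modded = totals(gb_per_chip=1.0, gbps_per_pin=8)   # 8 GB, 256 GB/s peak

print(f"Stock:  {stock[0]:.0f} GB, {stock[1]:.0f} GB/s peak")
print(f"Modded: {modded[0]:.0f} GB, {modded[1]:.0f} GB/s peak")

# One chip still sits behind the GPU's slow crossbar path, so the fast
# partition grows from 3.5 GB to 7 GB after the swap.
print(f"Fast partition: {stock[0] - 0.5:.1f} GB -> {modded[0] - 1:.0f} GB")
```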

In addition to giving this GTX 970 a new lease on life, the mod shows just how important having more VRAM on a GPU is, which is ironic when GPU manufacturers somehow still deem 8 GB of VRAM acceptable in 2025.

Continue reading “Brazilian Modders Upgrade NVidia Geforce GTX 970 To 8 GB Of VRAM”