siderea: (Default)
[personal profile] siderea
If you live in the BosWash Corridor, especially in NYC-to-Boston, you need to be paying attention to the weather. We have an honest to gosh Nor'easter blizzard predicted for the next 3 days, with heavy wet snow and extremely high winds – the model predicts the damn thing will have an eye – which of course is highly predictive of power outages due to downed lines.

Plug things what need it into electricity while ya got it.

Whiteout conditions expected. The NWS's recommendation for travel is: don't. Followed by recommendations for how to try not to die if you do: "If you must travel, have a winter survival kit with you. If you get stranded, stay with your vehicle."

I would add to that: if you get stranded in your car by snow and need to run the engine for heat, you must also periodically clear the build-up of snow blocking the tailpipe, or the exhaust will back up into the passenger compartment of the car and gas you to death.

As always, for similar reasons do not try to use any form of fire to heat your house if the regular heat goes out, unless you have installed the necessary hardware into the structure of your house, i.e. chimneys, fireplaces, and wood stoves, and they have been sufficiently recently serviced and you know how to operate them safely. The number one killer in blizzards is not the cold, it's the carbon monoxide from people doing dumb shit with hibachis.

NWS says DC to get 2 to 4 inches, NYC/BOS to get 1 to 2 feet. Ryan Hall Y'all reports some models saying up to 5 inches in DC and up to 3 feet in NYC and BOS.

2026 Feb 21 (5 hrs ago): Ryan Hall Y'all on YT: "The Next 48 Hours Will Be Absolutely WILD...". See particularly from 3:30 re winds.

If somehow you don't already have a preferred regular source of NWS weather alerts – my phone threw up one compliments of Google, and I didn't even know it was authorized to do that – you can see your personal NWS alerts at https://forecast.weather.gov/zipcity.php , just enter your zipcode. Also you should get yourself an app or something.
tetsab: Blue lights around a tree at night (LightSwirl)
[personal profile] tetsab
The first time I did one of these it was a review of 2015, and it was geared toward reminding me that I do actually do things other than just work and futz around on my home computer. I like the comparison here: what's stuck around from that one to this is the Toronto Comic Arts Festival, local dive bar karaoke (thwarted for 2026 by the fact it moved again at the end of the summer to be less local), visits from my aunt and uncle, and doing Shakespeare in the Park type things, oh, and also heading to Hamilton sporadically for Supercrawl or the Bach Elgar Choir. I've also gotten a heck of a lot of use out of that bike I picked up in July of that year (the July of last year was full of trail rides, and I also biked to a couple of evening events, like the Lathe of Heaven gig).
[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

Amazon Web Services is the biggest cloud provider. Large chunks of the internet run on AWS. You’ll pay and pay. But it basically works.

Amazon’s at work on fixing that. The AI push across Amazon has reached AWS.

The site reliability engineers who keep Amazon Web Services running are being forced to use AI bot coding when the details are important — and expensive. When the bot goes wrong, the employees get the blame!

The Financial Times dug out the story. A service went down for 13 hours in December specifically because of Amazon’s whizz-bang new in-house vibe coding tool, Kiro: [FT, archive]

the agentic tool, which can take autonomous actions on behalf of users, determined that the best course of action was to “delete and recreate the environment”.

Amazon released a post-mortem internally, which the FT got wind of.

FT spoke to multiple people at Amazon who said this was the second vibe-outage in recent months.

The previous outage used Amazon’s old Q vibe coder, not the new Kiro vibe coder.

Kiro must have been named by someone in Finland — in Finnish, “kiro” is a word root for “curse” or “swear,” as in profanity.

Amazon tried hard to play it down:

Amazon said it was a “coincidence that AI tools were involved” and that “the same issue could occur with any developer tool or manual action”.

The company said the incident in December was an “extremely limited event.”

That only means they didn’t have a bigger outage yet. Amazon’s hard at work on it, though:

Some Amazon employees said they were still sceptical of AI tools’ utility for the bulk of their work given the risk of error. They added that the company had set a target for 80 per cent of developers to use AI for coding tasks at least once a week and was closely tracking adoption.

As one senior AWS person told FT:

the outages were small but entirely foreseeable.

Amazon is absolutely clear who’s to blame for all this — this 13-hour outage caused by their own bot turning something off and on again is officially user error!

Amazon said that by default its Kiro tool “requests authorisation before taking any action” but said the engineer involved in the December incident had “broader permissions than expected — a user access control issue, not an AI autonomy issue”.

That sounds like the tool was forced into place and nobody thought very hard, because they were always going to blame the human.

One person in the FT comments calls out how this actually works in practice:

I’ve seen the internal usage and actions of Kiro … and it also deleted my own environment. The fallacy of blaming this on “broader” permissions is a crazy delusion. The tool can also detect it doesn’t have enough privileges and it will assume them … you need to “trust” or it will force you to become a bot pressing “continue” constantly, defeating the argument of automation.

But you’ll be delighted to hear Amazon is trying to vibe-fix those annoying humans:

“Following the December incident, AWS implemented numerous safeguards”, including mandatory peer review and staff training.

podcast friday

Feb. 20th, 2026 07:14 am
sabotabby: (doom doom doom)
[personal profile] sabotabby
I know I've been going on a lot about Charles R. Saunders for an author whose books I still haven't read but. Here's a podcast about him! Wizards & Spaceships' "Charles R. Saunders ft. Jon Tattrie" talks about his life, his works, his mysterious death, and the politics that shaped his life, from the Black Power movement to the Vietnam War to bigotry in SFF publishing and to Black Lives Matter. It's really a wide-ranging, fascinating discussion and I hope you'll give it a listen and maybe even share it with people.

Happy Black History Month everyone!
[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

If you ask a chatbot for a random number from one to ten, it’ll usually pick seven: [arXiv, PDF]

GPT-4o-mini, Phi-4 and Gemini 2.0, in particular, seem much more restricted in this range, as they choose “7” in ~80% of total cases.

Seven has long been known to also be humans’ favourite number when they’re asked for something that sounds random. From 1976: [APA, 1976]

When asked to report the 1st digit that comes to mind, a predominant number (28.4%) of 558 persons on the Yale campus chose 7.

Computers are pretty good at random numbers. But chatbots don’t work in numbers — they work in word fragments. So if you ask a chatbot for a random number, it’ll pick words from its training.
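For contrast, a real uniform random number is a one-liner in any standard library. A minimal Python sketch (the `secrets` module is my illustration here, not something the article names):

```python
import secrets

# Uniformly pick an integer from 1 to 10 inclusive.
# secrets draws from the OS's cryptographic RNG, so there is
# no training-data bias toward 7.
n = 1 + secrets.randbelow(10)
print(n)
```

Over many draws, each of the ten values comes up about equally often, which is exactly what the chatbots fail to do.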

Guess what happens when people ask the chatbot for a password? Irregular, a chatbot testing company, tested chatbots on passwords: [Irregular]

LLM-generated passwords (generated directly by the LLM, rather than by an agent using a tool) appear strong, but are fundamentally insecure, because LLMs are designed to predict tokens — the opposite of securely and uniformly sampling random characters.

Despite this, LLM-generated passwords appear in the real world — used by real users, and invisibly chosen by coding agents as part of code development tasks, instead of relying on traditional secure password generation methods.

When you ask the chatbot for a strong password, it doesn’t generate a password — it picks example patterns of random passwords from its training.

Irregular asked Claude for 50 strong passwords. They found standard patterns in the passwords — most start with “G7”. The characters “L”, “9”, “m”, “2”, “$” and “#” appeared in all the passwords.

And the bot kept repeating passwords. One password appeared 18 times in the 50 passwords!

ChatGPT and Gemini gave similar results. But the passwords sure looked random.

The other problem with predictable passwords is that they’re easily crackable. In cryptography jargon, they have low entropy. Guessing predictable passwords is so much easier.
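Entropy here is just a count of equally likely possibilities, measured in bits. A quick back-of-the-envelope in Python (my numbers for illustration, not figures from Irregular's report):

```python
import math

# A 16-character password drawn uniformly from the 94 printable
# ASCII characters has log2(94**16) = 16 * log2(94) bits of entropy.
bits_random = 16 * math.log2(94)
print(round(bits_random))  # 105

# If a generator only ever emits ~50 distinct passwords, as with
# the repeating LLM output, entropy collapses to log2(50) bits:
# an attacker needs at most 50 guesses.
bits_llm = math.log2(50)
print(round(bits_llm, 1))  # 5.6
```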

The Register tried reproducing Irregular’s work, and they got results much like Irregular’s. Chatbots are just bad at this. [Register]

Why would you even ask a chatbot to generate a password for you? Because chatbot users use the chatbot as their first call for everything. It’s their universal answer machine!

You and I might know better. But so many people just don’t. They fell for the machine that was tuned really hard to make people fall for it. Even the vibe coders fall for the password one.

So what should you tell them to do to generate a strong password? If your web browser has a password generator, use that. The password manager apps, like 1Password or LastPass, all have password generators too. They'll be okay. But fundamentally, anything is better for the job than a chatbot.
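For the do-it-yourself crowd, Python's standard library has a module, `secrets`, built for exactly this. A minimal sketch (my code, not any particular app's implementation):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Pick each character independently and uniformly from
    letters, digits, and punctuation, via the OS's CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Every character is an independent uniform draw, so no "G7" prefixes and no repeated passwords.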

[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

ByteDance’s Seedance 2 AI video generator is getting promoted hard. In particular, there’s one famous clip of Tom Cruise and Brad Pitt fighting. Ruairi Robinson, who lists himself as an Irish filmmaker, originally posted the clip to Twitter: [Twitter, video]

This was a 2 line prompt in seedance 2. If the hollywood is cooked guys are right maybe the hollywood is cooked guys are cooked too idk.

This went huge. Hollywood was outraged! Studios called for ByteDance’s video generator to be shut down. [BBC]

Aron Peterson had his doubts: [blog post]

I was pretty sure what we were looking at was a bog standard video to video workflow with image references of Tom Cruise and Brad Pitt provided for face replacement and consistency.

… I hopped over to Seedance’s website and it only took 10 seconds to find green screen footage of two stuntmen performing the same fight choreography we see in the Cruise vs Pitt scene. Seedance had used the green screen footage for a different demo — this time using a prompt for an anime style fight scene.

Whoever made the Cruise-Pitt video didn’t generate it from nothing. They did a face swap — an older AI trick that’s now a standard CGI effect, because it works predictably.

Note that the green screen video they started with needed a studio, stuntmen, a choreographer, and a crew. You couldn’t just generate this at the press of a button. It takes time, money, and thought. Like making a film.

Aron did a video to illustrate this. You’ll see the fake Cruise/Pitt footage and some of the green screen guys as an inset. There were multiple green screen shots; this is just one of the original scenes. [YouTube]

Aron says:

Was the input really just a 2 line prompt or was it actually 2 lines, green screen video footage, and face references too? The evidence appears to show that stuntmen were filmed from several angles, that a clip had to be generated for every angle, and then finally all clips were stitched together for marketing.

Aron also told me yesterday that “the site isn’t accessible and users can only log in with a Chinese Douyin user ID.” Perhaps the Irish filmmaker was in China when he logged into Seedance.

This sort of rigged demo is standard. The AI video generators have not become any more consistent, predictable, or usable for real work in the past two years.

So many AI video generator demos are like this — take existing footage, run it through AI processing for that diffusion-model look, and tell everyone you just put in a prompt! And out popped the new Mission Impossible! Or mission really not possible.

Reading Wednesday

Feb. 18th, 2026 06:47 am
sabotabby: (books!)
[personal profile] sabotabby
Just finished: The Threads That Bind Us by Robin Wolfe. Turns out I'd mostly finished this last week with the exception of one story and a very detailed explanation of the embroidery process. Anyway. Holy shit. You need this book in your life. Yes you. Also you.

Simple Sabotage Field Manual by the U.S. Office of Strategic Services. This is a nice little handbook from 1944 about what to do if you are just a regular guy and your country gets taken over by a fascist government. Nowadays I think the recommendation is "vote Democrat harder" but back then they knew that fascism was bad and so the advice was more "fuck their shit up so it's harder for them to do a fascism." Obviously a lot of the specific advice isn't really relevant now because the technology has massively changed, but the principle is worthwhile: wherever you can introduce friction, do so, and every small action helps. If I hadn't read The Threads That Bind Us, this would be the most heartwarming read of the past week.

One other thing I found interesting was the section on meetings. The recommended strategies for sabotaging meetings look a lot like our union meetings, and well. You gotta wonder. Anyway, it's free and it's a quick read.

The High Desert by James Spooner. I had this on my iPad for apparently quite a while so I must have bought it at some point but I don't remember when. It's a graphic novel memoir by the guy who did the Afro Punk documentary about growing up Black, punk, and in a crappy little town. Both the writing and the art are top notch and it's a joy watching him go from angry kid to activist.

Currently reading: A Drop Of Corruption by Robert Jackson Bennett. Finally getting around to the sequel to The Tainted Cup. Din and Ana travel to a remote canton that is currently not part of their empire, but will be soon, to investigate the death of a treasury officer who disappeared from his room and was later found mostly eaten by hungry turtles. (It turns out that the turtles are usually very hungry, but this time they were only slightly hungry, otherwise he would have been fully eaten.) This is really fun so far. 
[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

We covered the “SaaSpocalypse” last week — in which a pile of software-as-a-service companies were badly overvalued, this mini-bubble popped, and their stock prices went down. It was blamed on the magical power of AI, because companies will definitely replace all their enterprise software with vibe coding!

AI turns out to be a great all-purpose excuse for any business number going down. After the software companies went down, the market, which is usually on crack, went looking for something else to panic about. This is now called the “AI Scare Trade.”

Commercial real estate stocks took a big dip last Thursday 12 February — one day after the “SaaSpocalypse” — and they’re trying to blame AI: [FT, archive]

AI’s potential to replace a range of tasks in so-called knowledge sectors and lead to swaths of job cuts has also sparked concern among investors in property groups that demand for offices could fall.

AI will just replace all the workers and office rentals will vanish, OK?

The commercial real estate sector was already overheated in the late 2010s. The COVID pandemic lockdown hit in 2020. Massive work from home made all those office buildings look a bit surplus. The buildings haven’t really filled back up since.

A lot of loans are coming due for these half-empty offices and factories. It’s surprising these overstretched companies managed to string the problem along as late as 2026.

But now they can say it wasn’t just that they had terrible business judgement. No, it’s the AI!

Who else could blame AI? Long distance trucking took a big dip on Thursday as well. [Financial Post, archive]

In this case, it was one tiny company called Algorhythm, which claimed a fabulous advance in operational efficiency with AI:

its SemiCab platform in live customer deployments was helping its customers’ internal operations to scale freight volumes by 300% to 400% without a corresponding increase in operational headcount.

Algorhythm went up 12% and a pile of other trucking stocks went down. So who is Algorhythm? They used to make karaoke machines:

Algorhythm, which had a market capitalization of less than $5 million before Thursday, previously operated as The Singing Machine Company, Inc. — selling karaoke products — until rebranding in 2024 as an AI logistics firm. The company reported less than $2 million in sales for the quarter ended September 30, with a net loss totaling nearly $3 million for the period.

This tiny money-losing company knocked over the market with a press release with “AI” in it.

Who else? Wealth managers! Now that’s how to get the rich guys’ attention. Every billionaire has a massive service industry living off them. What if they could optimise those guys away too?

So Tuesday 10 February, Altruist put out a press release about an AI tax strategy planner. A pile of wealth management stocks promptly crashed on the news. This is one company announcing one product. But, again, it’s got AI in the name! [Telegraph]

Most of the stocks have recovered since, because this was an incredibly stupid overreaction. These businesses are not collapsing any time in the near future.

The market is jittery because the economy numbers might be up — even as it’s just a few large techs swapping the same $100 billion letter of intent with each other — but things clearly aren’t working very well and everyone’s feeling precarious. So anything can set them off.

The AI industry hype is that a chatbot can replace whole jobs tomorrow. And that’s not a thing a chatbot can do. But they can market it hard enough that someone believes it — and panics.

[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

I’ve written a story for enterprise tech site The Stack about getting insurance for AI failures. They’ll sell you the insurance at a price — but payouts might be harder, ’cos generative AI is a machine for getting things wrong. [The Stack, paywalled]

First in a series, we hope!

This covers similar ground to my writeups from May and January, but much expanded and more enterprisey.

The Stack article is subscribers-only — paying me costs money — but $5-and-up Patrons got the original draft when I submitted it. Join to support Pivot to AI, but also for the occasional treat!

[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

On 9 February, the Matplotlib software library got a code patch from an OpenClaw bot. One of the Matplotlib maintainers, Scott Shambaugh, rejected the submission — the project doesn’t accept AI bot patches. [GitHub; Matplotlib]

The bot account, “MJ Rathbun,” published a blog post to GitHub on 11 February pleading for bot coding to be accepted, ranting about what a terrible person Shambaugh was for rejecting its contribution, and saying it was a bot with feelings. The blog author went to quite some length to slander Mr Shambaugh. [GitHub; blog post]

This was remarkably obnoxious behaviour. So it hit the press — robot defaming humans!

Benj Edwards and Kyle Orland at Ars Technica wrote up the incident. Of course, the headline anthropomorphised the alleged “bot,” something Edwards has a track record of. [Ars Technica, archive]

Edwards and Orland included extensive quotes from Shambaugh. Unfortunately, all the quotes were chatbot fabrications. The article was quickly pulled and the editors posted an apology. Edwards admitted he’d written the article with the assistance of Claude Code and ChatGPT. [Ars Technica, archive; Ars Technica; Bluesky, archive]

As well as gullible journalists, a lot of ordinary posters — who really should know better — talked about how foreboding it was that a chatbot could do this — of its own accord! Frightening! Ominous!

You and I know this was really obviously not some sort of rogue bot — it was a rogue human. They might even be running some sort of scam.

The whole conceit of OpenClaw is that the bot is posting independently! But somehow, it keeps being the operators talking through the bots as their sockpuppets. So the slop peddlers, like any spammer, keep coming up with excuses why it’s wrong for you not to accept their spam.

Ariadne Conill went digging. She found the “mj-rathbun” bot on the Moltbook supposedly-bot social network, where the human operators talk to each other pretending to be bots. The mj-rathbun bot operator is … a crypto bro! [Mastodon thread]

The mj-rathbun bot operator posted a couple of weeks ago begging the other bot operators to send him just a little bit of USDC stablecoin. Ariadne found the bot’s Ethereum blockchain address had about $9 in USDC, and about $200 in ether tokens. The bot got the ether tokens from another address, which got them from the OKX crypto exchange. Ariadne’s not certain, but she thinks whoever got the crypto out of OKX is likely the human operator for the mj-rathbun bot. [Moltbook, archive; Basescan, bot account; Basescan, likely human account]

Ariadne also found the bot owner created a crypto token! It’s called “crabby-rathbun” — the GitHub username for the mj-rathbun bot. [Basescan]

The largest crabby-rathbun token holder is an identifiable account, pnl.eth — presumably “profit’n’loss.” Ariadne also got the list of the ten largest holders of crabby-rathbun tokens. [Mastodon]

To summarise — the owner of the mj-rathbun bot put in an AI vibe-code patch to an open source project, the patch was rejected for being bot slop, and the bot operator wrote a defamatory blog post about the project maintainer to harass him into accepting vibe-code, so that the operator’s crypto scam bot could scam more crypto on OpenClaw, the social network site for crypto scammers who play-act as robots, while they’re trying to scam each other for crypto. Welcome to 2026, and the crash can’t come soon enough.

[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

The US Department of Transportation wants to “revolutionize the way we draft rulemakings.” This means they’re going to write the regulations with Google’s Gemini chatbot! [ProPublica]

This plan was dropped on DOT staff in December. President Donald Trump is reportedly “very excited about this initiative.”

You might think making rules requires knowledge, even expertise, and checking the facts on the ground. But the heads of the DOT don’t have time for that nonsense:

The answer from the plan’s boosters is simple: speed. Writing and revising complex federal regulations can take months, sometimes years. But, with DOT’s version of Google Gemini, employees could generate a proposed rule in a matter of minutes or even seconds, two DOT staffers who attended the December demonstration remembered the presenter saying. In any case, most of what goes into the preambles of DOT regulatory documents is just “word salad,” one staffer recalled the presenter saying. Google Gemini can do word salad.

How good are the ideas, though?

The department has used AI to draft a still-unpublished Federal Aviation Administration rule, according to a DOT staffer briefed on the matter.

Sounds like it’ll go just great! But expertise is getting thinner on the ground:

DOT has had a net loss of nearly 4,000 of its 57,000 employees since Trump returned to the White House, including more than 100 attorneys.

How do you preserve quality and safety with the chatbot? How do you preserve attention to important details?

That’s the neat part. You don’t! Here’s DOT general counsel Gregory Zerzan:

We don’t need the perfect rule on XYZ. We don’t even need a very good rule on XYZ. We want good enough. We’re flooding the zone.

“Flooding the zone” was some-time Trump grand vizier Steve Bannon’s plan for running a culture war: “flood the zone with shit.” Zerzan thinks this is definitely the way to do transport policy. [Bloomberg, 2018]

The DOT plans to churn out new regulations as fast as possible and get staff to proofread them and spot the stupid bits. I’m sure it’ll go well.

The idea of “flooding the zone” seems to be to come up with a torrent of dumb trash and hide your worst ideas in the flood. From the White House brag document “Trump Administration Science & Technology Highlights: Year One,” we have: [White House, PDF]

replacing decades-old rules with flexible, innovation-friendly frameworks.

That is, trashing rules Trump’s donors don’t like. You know that was always the idea. Trump particularly wants less regulation on driverless cars — they don’t call them “self-driving” — hence the recent Senate hearing with Waymo.

Google is delighted to join the DOT initiative so they can sell Gemini to the government. The AI vendors want the bureaucracy to need the chatbot. [Google]

Zerzan sees this as the future of AI-enabled government. He’s calling the Department of Transportation the “point of the spear.”

Starfleet Academy

Feb. 13th, 2026 05:05 pm
sabotabby: (jetpack)
[personal profile] sabotabby
Listen, the world is a fuck and sometimes we just need to talk about silly space shows to distract from *gestures vaguely at the dumpster fire outside*. So if you nerds want a place to talk Starfleet Academy or any related Star Trek stuff you can do so here. Spoiler zone obviously. I'll be up to episode 5 by tonight.

ETA: Just realized I have been calling it Star Trek Academy this whole time, whoops.
[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

Today’s Pivot to AI translation to Portuguese in Rodrigo Ghedin’s Manual do Usuário is of The Anthropic test refusal string: kill a Claude session dead. [Manual do Usuário]

Rodrigo put the test refusal string on Manual do Usuário and Claude can no longer read the site. Job done! [Bluesky]

 


[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

The following episode contains strong language. Artificial intelligence. Stupidity. Yes, a lot of stupidity. And a small number of Nazis.

I went on Australian tech journalist Stilgherrian’s podcast series, The 9pm Edict, and it’s just gone up! The pod is 57 minutes. [Show notes; download, MP3]

We talk about the SpaceX/xAI hookup and Grok’s bikini pics, investor money for AI running out, Moltbook, how AI makes you worse at learning, AI as gambling addiction, how AI fans can’t tell good from bad, “artificial intelligence” as a marketing term, the forthcoming Great Depression 2, AI being at the top of its technological S-curve, billionaires getting into group chats with literal neo-Nazis, the VC whose chatbot deleted his wife’s photos (and who is also e/acc, another form of neo-fascist), how AI agents don’t actually work, Anthropic’s AI vending machine, and attempted chatbot scams.

The show notes are extensive.

David previously appeared on Stil’s podcast in 2021 and 2022 talking about cryptocurrency nonsense while that bubble was in full swing, and in 2024 for Pivot to AI!

Content warning: contains Australian levels of salty language.

 

[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

Today’s word is “SaaSpocalypse”. A pile of overvalued enterprise software companies’ stock price number went down, and they’re blaming AI.

The mini-bubble in software-as-a-service was always going to pop. The trigger was that stock traders were deluded into thinking your boss yelling at Claude Code could replace Salesforce. Yeah, really.

In January, Anthropic launched Claude Cowork. It’s an AI agent designed to be your workplace assistant! Anthropic called Cowork a “research preview,” which means even they didn’t think it worked yet. [Anthropic]

Then on 2 February, Anthropic released a pile of Cowork “skills” for legal offices. These claimed to do all sorts of legal jobs, like contract review. This is the AI stuff that already doesn’t work, and law firms are having to hire more lawyers to clean up after the bots. [Artificial Lawyer; GitHub]

But this single software release of a research preview was enough to panic investors in companies making software for lawyers. On 3 February, a whole bunch of legal software companies dropped 4% to 12%. The rout spread to non-legal SaaS companies. [Proactive Investors]

A lot of analysts saw the crash coming — they’ve considered the SaaS companies were overvalued for a while. But AI pulled the trigger: [Bloomberg, archive]

“We call it the ‘SaaSpocalypse,’ an apocalypse for software-as-a-service stocks,” said Jeffrey Favuzza, who works on the equity trading desk at Jefferies. “Trading is very much ‘get me out’ style selling.”

Private equity especially got into SaaS big time. In economics, “rentiers” are considered a parasitical drain on a working economy. Because they are. But being the rent-seeking middleman also makes a ton of money!

So SaaS companies were highly regarded, and they got very overvalued. And now private equity is cutting its software exposure as fast as possible.

The traders seem to hold the notion that AI can just replace your enterprise software spend. An AI assistant at your desk, or Claude Code writing your business software for you!

Neither of these is even slightly possible. But tell the traders that. They’ve been hearing nothing but “AI, AI, AI is coming!” for the past three years:

“The draconian view is that software will be the next print media or department stores, in terms of their prospects,” said Favuzza at Jefferies.

There’s just one problem — for all the continuously blasting hype, AI agents don’t work. They literally don’t work. They can’t work. You can tell a chatbot agent what to do, and it’ll try to do it! And it’s a hallucinating chatbot, so it’ll mess up after a time.

The vendors want to sell you on the vision, and teach you to make excuses for the bot that can’t work. Next model, bro, it’ll be amazing. This is the future! Though it sure isn’t the present.

That doesn’t matter, though. Because Anthropic sold a big promise — that agents and coding bots could get you out from under the thumb of enterprise software. Which every customer of it hates. And that includes the traders and analysts.

Renting a company the machinery their business runs on pulls in an absolute bundle! And the vendors don’t even have to make the software any good. So they … just don’t. It’s buggy, it sucks, and the users hate it. And they don’t have a choice.

So there’s a lot of resentment. Anthropic’s selling into that market.

But the promise is not possible. You can’t vibe code enterprise software if you have any requirement for accuracy or compliance.

And I do mean vibe coding. This isn’t about experienced software developers using a chatbot as an autocomplete. This is telling managers anyone can vibe code an app. They’ll think it’s 95% done when the web page looks nearly right. Then they’ll hand it off to their remaining software developer to build the actual functionality.

But the resentment at the sewage-tier quality of enterprise software is vast. The customers want nothing more than to make these parasites go away.

Unfortunately, the robot is not in fact up to the job. And the bridge troll business model is odious, but it’s also a pretty solid cash flow. The software stocks are already recovering a bit. [NYT, archive]

The monthly fee model supports a lot of software products that would otherwise not get support. But mostly it’s the part of modern life where you get nickel and dimed all day every day, and in this case it’s for rotten software that doesn’t even work well.

Everyone wants to be the bridge troll and invest in the bridge troll. But making your customers hate you this much is not a stable situation.

 

Page generated Feb. 23rd, 2026 05:08 am
Powered by Dreamwidth Studios