Why AI Is Generating ‘Lowest Common Denominator’ React Code
Seth Webster doesn’t think we’re in a post-React world — or at least, not only a post-React world.
“We’re actually in a post-frontend-framework world, because the AI spits out React and nobody cares what it’s spitting out,” said the executive director of the newly created React Foundation. “We’re heading for a post-code-the-plumbing world, and we get to focus more on, ‘What [are] the delightful parts I want to create?’”
The problem is, large language models aren’t trained on the best React code, he continued; in fact, LLMs have mostly been trained on really bad React.
“They’re trained on the lowest common denominator React, which is what’s out in the world. They’re trained on the worst Svelte, they’re trained on the worst Swift, because what they’re training on is publicly available code,” he told The New Stack. “The best code in the world, oftentimes, is hidden behind [a] private repo, and so they didn’t get to scrape that.”
Why AI Is a Middling Engineer
LLMs haven’t had access to the best code, or to how the best tools are built, he added. As a result, AI is more like a middle-of-the-road, mid-career engineer. It’s not the best engineer you’ve ever met, he said, but it’s also not the worst.
For instance, one of the things Claude likes to do is to use refs in React to track state.
“It’s not like the worst pattern we see in React, but it’s not a good pattern,” Webster said. “It’s basically indicative [that] the model doesn’t understand that the best way to build these things is to create an external service and integrate that using hooks with React, instead of trying to cram all the business logic into React, which is what everybody in the world does because we made it so easy to do that.”
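To make the anti-pattern concrete, here is a minimal sketch of the kind of code Webster is describing; the components and the cart example are illustrative, not taken from his remarks or the React docs. A ref holds a value without triggering re-renders, so using it to track state quietly breaks the UI, while plain state updates do not.

```tsx
// Hypothetical sketch of the ref-for-state pattern, plus the fix.
import { useRef, useState } from "react";

// Anti-pattern: tracking a changing value in a ref. React never re-renders
// when .current changes, so the badge silently falls out of sync.
function CartBadgeWithRef() {
  const count = useRef(0);
  const addItem = () => {
    count.current += 1; // value changes, but nothing re-renders
  };
  return <button onClick={addItem}>Cart ({count.current})</button>;
}

// Better: values that drive the UI live in state, so updates re-render.
function CartBadgeWithState() {
  const [count, setCount] = useState(0);
  const addItem = () => setCount((c) => c + 1);
  return <button onClick={addItem}>Cart ({count})</button>;
}
```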
It’s one of the mistakes the React maintainers made in React’s architecture, he added, because it’s “just too simple to put everything in React” — when developers really need to think like engineers and build the business logic a bit differently.
“If I’m doing authentication with Google or GitHub or whatever, I should have separate services that handle that,” Webster said. “I should have an authorization service, and it integrates with my different providers for different things. It handles telling the React app when someone has been logged in and so forth, when their authentication token expires, or just whatever.
“That should be integrated via hooks. You shouldn’t be putting that in your components, and the code the models have read is all crammed in the business logic, since it does not default to creating services.”
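A rough sketch of the separation Webster describes might look like the following, assuming React 18’s useSyncExternalStore as the bridge; the names authService and useAuth are hypothetical, and the service is deliberately simplified to show the shape, not a real authorization implementation.

```tsx
// Hypothetical sketch: auth logic lives in a plain TypeScript service,
// and a small hook is the only integration point with React.
import { useSyncExternalStore } from "react";

type AuthState = { user: string | null; expiresAt: number | null };

// Framework-agnostic service: providers, tokens and expiry would live here.
const authService = (() => {
  let state: AuthState = { user: null, expiresAt: null };
  const listeners = new Set<() => void>();
  const notify = () => listeners.forEach((l) => l());
  return {
    getState: () => state,
    subscribe: (listener: () => void) => {
      listeners.add(listener);
      return () => {
        listeners.delete(listener);
      };
    },
    signIn(user: string, ttlMs: number) {
      state = { user, expiresAt: Date.now() + ttlMs };
      notify();
    },
    signOut() {
      state = { user: null, expiresAt: null };
      notify();
    },
  };
})();

// The hook is the only React-specific code; components just consume it.
function useAuth(): AuthState {
  return useSyncExternalStore(authService.subscribe, authService.getState);
}

function Profile() {
  const { user } = useAuth();
  return <p>{user ? `Signed in as ${user}` : "Signed out"}</p>;
}
```

The point of the split is that the service can be tested, swapped or reused without touching components, which only ever see the hook’s snapshot.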
A Goal to Improve LLMs’ React Output
One of the goals he hopes to accomplish as the head of the React Foundation is to improve the React code that popular large language models generate.
That will mean a combination of Model Context Protocol (MCP) servers and evaluations, or evals, he said. Evals are used to systematically assess an LLM’s accuracy and reliability against predefined metrics and business objectives, according to the global consultancy Thoughtworks. Evals, he said, help AI deliver on its “intended purpose.”
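As a loose illustration only, and not the Foundation’s actual tooling, an eval for generated React code could be as simple as scoring model outputs against a predefined check, such as a crude heuristic for the ref-for-state pattern described above:

```ts
// Hypothetical eval sketch: score generated samples against one metric.
type EvalCase = { prompt: string; generatedCode: string };

// Crude heuristic: flag samples that reassign a ref's .current value.
// Real evals would use far more robust checks than a regex.
function avoidsRefForState(code: string): boolean {
  return !/\.current\s*[+\-]?=(?!=)/.test(code);
}

// Return the pass rate across a batch of model outputs, e.g. 0.8 = 80%.
function runEval(cases: EvalCase[]): number {
  const passed = cases.filter((c) => avoidsRefForState(c.generatedCode)).length;
  return passed / cases.length;
}
```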
Until then, Webster said, AI needs help from developers to get the code right: “It requires a lot of guidance, and it will for a while to come.”