Polestar and Volvo are ending Polestar 3 production in Chengdu, China, and consolidating all output of the electric SUV at Volvo’s plant in South Carolina. “The move to consolidate global Polestar 3 production in Charleston help[s] generate efficiencies for both companies, whilst also underscoring our confidence in the plant and the role it plays in our manufacturing footprint,” said Hakan Samuelsson, chief executive of Volvo Cars. “The U.S. is a very important market for Volvo Cars, both to support our growth ambitions as well as a strategic production site to meet regional and export demands.” Ars Technica reports:
Volvo had a challenging 2025, with sales falling by 7 percent. Meanwhile, Polestar, which was spun out from the Swedish OEM’s performance arm into a standalone startup in 2017, had a rather good 2025, seeing a 34 percent increase in sales. So increasing the proportion of Polestar 3s to come out of South Carolina seems sensible. And as we learned last September, the midsize electric Volvo EX60 will also go into production at the South Carolina site later this year, and then we’ll see a still-unnamed hybrid Volvo in 2030.
The two companies also announced today that Volvo agreed to extend part of a shareholder loan it made to Polestar and will convert the rest into Polestar shares. Polestar will still owe Volvo $661 million, due at the end of 2031, and another $274 million will become Polestar stock now, with a further $65 million in the second quarter of the year. Since December, Polestar has also raised $1 billion through three equity financing investments.
Oracle Cuts Thousands of Jobs Across Sales, Engineering, Security
Oracle laid off thousands of employees on Tuesday as it ramps up spending on AI infrastructure projects internally and with major technology partners. The layoffs were carried out via email, according to copies of the message viewed by Business Insider. The email told affected workers they would be terminated immediately and to provide a personal email for follow-up.
The cuts echo a TD Cowen forecast earlier this year, when the investment bank questioned how Oracle would finance its expanding AI datacenter buildout and suggested headcount reductions could reach 20,000 to 30,000. It is not clear how many employees were notified on Tuesday, but one screenshot that purports to show the number of internal Slack users showed a drop of 10,000 overnight.
[…] Oracle employs about 162,000 people, with 58,000 of those in the US and approximately 104,000 internationally. If the rumored cuts of 30,000 are correct, it would amount to 18 percent of the company’s workforce. According to posts from Oracle workers on LinkedIn, the cuts were spread through multiple departments around the country, with employees in Kansas, Tennessee, and Texas taking to social media to say they were among those chopped.
“This news didn’t seem to affect stock price,” adds bobthesungeek76036. “ORCL is up 6% for the day.”
Oracle has poured money into AI. Into the datacenter side, even: you know, the side where hyperscalers don’t make a profit selling to the AI model companies, who in turn lose money selling the use of their models for less than it costs to run them, so that those customers can also lose money.
It has everything to do with AI. The big silver lining of the whole AI bubble is that it just might destroy Oracle, for the good of all mankind.
This is different from people using, say, Claude Code - spending $200 to get $2,000 worth of compute is probably pretty good for as long as the subsidy chain lasts.
A top EU official is urging Europeans to work from home, drive less, and cut air travel as the bloc braces for a prolonged energy crisis triggered by the Gulf conflict. The European Commission is also pushing member states to accelerate renewables and other energy-security measures as oil and gas disruptions continue. Politico reports:
In a speech with echoes of the early days of the coronavirus pandemic, EU energy chief Dan Jorgensen said Europe was facing a “very serious situation” with no clear end in sight. “Even if … peace is here tomorrow, still we will not go back to normal in the foreseeable future,” he said, following an extraordinary meeting of the EU’s 27 energy ministers on Tuesday to discuss the crisis. “The more you can do to save oil, especially diesel, especially jet fuel, the better we are off,” Jorgensen said, confirming an earlier report by POLITICO that Brussels wanted Europeans to travel less.
He urged member countries to follow the advice of the International Energy Agency, which he said included “work from home where possible, reduce highway speed limits by ten kilometers [an hour], encourage public transport, alternate private car access … increase car sharing and adopt efficient driving practices.” Longer term, he urged EU countries to double down on building more renewables, saying “this must be the time we finally turn the tide and truly become energy independent.”
If they really want this to happen, the government should offer tax incentives to businesses that agree to allow employees to work remotely. And if a business is found to violate that policy, the incentives get revoked immediately.
Some jobs requiring physical presence aside, lots of people stuck in the past keep trying to get everyone into the office… because #InsertBSReasonHere.
I hear it from people working in IT all the time. “They want us in at least 3 days a week” or worse, because otherwise they get confused. How will support staff that work on remote systems work on remote systems remotely from home if they are not working on remote systems remotely from the office?!
How will vain shiny shoes managers look over the shoulder of plebs and see they’re sweating??
Office work is essential to uhm productivity? Nope. Uhm happiness? Nope. Uhm engagement? Nope. Customer satisfaction? Nope. Onboarding?! Nope.
If remote work were a problem, we’d all be working from inside a data centre, manually transferring information to customers by delivering it in person.
The good news is that soon AI will be able to replace these people completely because their dogma is easy to procedurally replicate.
The bad news is that AI will soon be able to monitor so many things constantly that it’ll know how many beads of sweat you might have in your nether regions to assess if you have been squeezed enough to be a “keeper” this time around…
Users who have access to this feature can go to their Google Account settings and navigate to Personal info > Email > Google Account email. Tap the “Change Google Account email” button to start the process of changing your username.
Users will be able to change their username only once every 12 months. Plus, they won’t be able to delete their new email address for that period of time.
The company said users’ old emails will be preserved, and the old email address will serve as an alternate address for the account. Users will be able to sign in to Google services using both the old and the new addresses.
I still have my 5 letter no number or symbol username created when creating a dial-up account through a major network more than two decades ago. Some things you outgrow but some things you are glad you never discarded.
Even if I could in theory change my email address, there is so much depending on it, it would be huge trouble to change it. For example, if you are in the UK with settled status, you need to make sure you keep your email address and phone number. Result: I’m paying BT £7.50 every month for three email addresses (up to 10 actually, but I only need three).
Instead of forgetting your password because the browser auto-completes it, then losing it, now you could forget your e-mail address, inadvertently flush the browser cache, and… well, technical progress.
—JoshK.
Global Ban On Digital Duties Expires After Stalled Talks At WTO Meeting
An anonymous reader quotes a report from the New York Times:
A global ban on taxing digital streaming and downloads across national borders expired on Monday, after members of the World Trade Organization concluded an annual meeting without agreeing to extend it. U.S. representatives had pushed to extend the ban, which prevents the more than 160 members of the W.T.O. from issuing duties related to e-commerce. But Brazil and Turkey blocked a motion for a longer extension.
U.S. representatives excoriated the outcome as further proof of the organization’s irrelevance. The W.T.O. provides a forum for trade negotiations and setting rules for global trade. But U.S. officials have long criticized the group for its failure to police unfair trade practices by countries like China. Over the past year, the Trump administration has further abandoned the W.T.O. by issuing its own global framework of tariffs instead. […] Brazil had pushed for a two-year extension of the moratorium on e-commerce duties, while the United States wanted a permanent one. The countries couldn’t come to a compromise, but negotiations are set to continue in Geneva this spring. W.T.O. members also failed to reach an agreement on future reforms for the organization.
Bernd Lange, the chair of the international trade committee for the European Parliament, wrote in a post on X that “supporters of the multilateral trading system are waking up with a hangover.”
“We knew that a breakthrough might not materialize, but that doesn’t make it any less painful,” he wrote, adding that “without an agreement to extend moratorium on digital tariffs, a period of great uncertainty could soon begin for businesses and consumers.”
Jonathan McHale, the vice president of digital trade at the Computer & Communications Industry Association, called the outcome “deeply disappointing.” He said: “For more than two decades, W.T.O. members have recognized that imposing tariffs on electronic transmissions would be counterproductive, but allowed the issue to become a negotiating football.”
U.S. representatives excoriated the outcome as further proof of the organization’s [WTO’s] irrelevance.
I hate this administration’s general anti-American attitude, extreme thirst for growing national debt, and overall lawless criminality, but the above quote nevertheless excites me. I wish to subscribe to the aforementioned representatives’ newsletter.
If we don’t need WTO, then I bet we don’t need WIPO. And if we don’t need to be a signatory of the WIPO treaty anymore, then we don’t need DMCA.
Hey Pedoph— er I mean— let me start over.
Hey glorious leader Trump, people are saying you’re too chicken to tell Johnson and Thune to repeal DMCA. Surely that’s not true. Are you going to let them all get away with calling you chicken?
I imagine the WTO is a lousy tool for a country that can’t burn bridges fast enough while demanding everyone do as they’re told without reciprocity.
I think the USA is going to have to learn the hard way that the international economy has been very tilted in its direction for decades, and all the things the people in control don’t understand and hate were the very things maintaining the tilt.
Once that’s gone, I don’t think the circumstances for restoring that tilt will exist. It’s gone, and the average American is going to be poorer when the dust settles.
Australia Readies Social Media Court Action Citing Teen Ban Breaches
Australia is preparing possible court action against major social media platforms that are failing to enforce the country’s social media ban on under-16s. “Three months after the ban came into effect, the eSafety Commissioner said it was probing Meta’s Instagram and Facebook, Google’s YouTube, Snapchat and TikTok for possible breaches of the law,” reports Reuters. From the report:
Communications Minister Anika Wells said the government was gathering evidence “so that the eSafety Commissioner can go to the Federal Court and win.” “We have spent the summer building that evidence base of all the stories that no doubt you have all heard … about how kids are getting around that,” Wells told reporters in Canberra. The legal threat is a striking change of tone from a government which had hailed tech giants’ shows of cooperation when the ban went live in December.
Under the Australian law, platforms must show they are taking reasonable steps to keep out underage users or face fines of up to $34 million per breach, something eSafety would need to pursue in a civil court. The regulator previously said it would only take enforcement action in cases of systemic noncompliance. But in its first comprehensive compliance report since the ban took effect, eSafety said measures taken by the platforms were substandard and it would make a decision about next steps by mid-year. “We are now moving into an enforcement stance,” said commissioner Julie Inman Grant in a statement.
The regulator reported major compliance gaps, including platforms prompting children who had previously declared ages under 16 to do fresh age checks, allowing repeated attempts at age-assurance tests until a child got a result over 16, and poor pathways for people to report underage accounts. Some platforms did not use age inference, which estimates age based on someone’s online activity, and some only used age-assurance measures like photo-based checks after a user tried to change their age, rather than at sign-up. That made it “likely many Australian children aged under 16 have been able to create accounts on age-restricted social media platforms by simply declaring they are 16 or older”, the regulator said. Nearly one-third of parents reported their under-16 child had at least one social media account after the ban took effect, of which two-thirds said the platform had not asked the child’s age, it added.
Claude Code’s Source Code Leaks Via npm Source Maps
A security researcher has leaked a complete repository of source code for Anthropic’s flagship command-line tool. The file listing was exposed via Node Package Manager (npm) source maps, with every target publicly accessible on a Cloudflare R2 storage bucket.
There have been a number of discoveries as people continue to pore over the code. The DEV Community outlines some of the leak’s most notable architectural elements and the key technical choices:
Architecture Highlights
The Tool System (~40 tools): Claude Code uses a plugin-like tool architecture. Each capability (file read, bash execution, web fetch, LSP integration) is a discrete, permission-gated tool. The base tool definition alone is 29,000 lines of TypeScript.
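A minimal sketch of what such a permission-gated tool might look like in TypeScript (names and shapes are invented for illustration; this is not the leaked code):

```typescript
import { readFile } from "node:fs/promises";

// Hypothetical tool shape: every capability is a discrete unit with an
// explicit permission list that is checked before each invocation.
interface Tool<I, O> {
  name: string;                  // e.g. "bash", "file_read"
  requiredPermissions: string[]; // gate checked before every call
  run(input: I): Promise<O>;
}

async function invokeTool<I, O>(
  tool: Tool<I, O>,
  input: I,
  grants: Set<string>,
): Promise<O> {
  // Deny by default: missing grants abort the call before the tool runs.
  for (const p of tool.requiredPermissions) {
    if (!grants.has(p)) throw new Error(`permission denied: ${tool.name} needs "${p}"`);
  }
  return tool.run(input);
}

// Example tool: a minimal file reader.
const fileRead: Tool<{ path: string }, string> = {
  name: "file_read",
  requiredPermissions: ["fs:read"],
  run: ({ path }) => readFile(path, "utf8"),
};
```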
The Query Engine (46K lines): This is the brain of the operation. It handles all LLM API calls, streaming, caching, and orchestration. It’s by far the largest single module in the codebase.
Multi-Agent Orchestration: Claude Code can spawn sub-agents (they call them “swarms”) to handle complex, parallelizable tasks. Each agent runs in its own context with specific tool permissions.
IDE Bridge System: A bidirectional communication layer connects IDE extensions (VS Code, JetBrains) to the CLI via JWT-authenticated channels. This is how the “Claude in your editor” experience works.
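As a hedged illustration of the pattern (not the leaked implementation), a JWT-checked WebSocket channel built on the ws and jsonwebtoken npm packages might look like this; the port, secret handling, and message shapes are all assumptions:

```typescript
import { WebSocketServer } from "ws";
import jwt from "jsonwebtoken";

const SECRET = process.env.BRIDGE_SECRET ?? "dev-only-secret";
const server = new WebSocketServer({ port: 4711 });

server.on("connection", (socket, request) => {
  // Assume the IDE extension presents its token as a query parameter.
  const url = new URL(request.url ?? "/", "http://localhost");
  const token = url.searchParams.get("token") ?? "";
  try {
    jwt.verify(token, SECRET); // throws on a missing, invalid, or expired token
  } catch {
    socket.close(4401, "unauthorized");
    return;
  }
  // Authenticated: both sides can now push messages (selections, diffs, etc.).
  socket.on("message", (data) => socket.send(`ack: ${data}`));
});
```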
Persistent Memory System: A file-based memory directory where Claude stores context about you, your project, and your preferences across sessions.
Key Technical Decisions Worth Noting
Bun over Node: They chose Bun as the JavaScript runtime, leveraging its dead code elimination for feature flags and its faster startup times.
React for CLI: Using Ink (React for terminals) is bold. It means their terminal UI is component-based with state management, just like a web app.
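For readers unfamiliar with Ink, a self-contained toy example of the approach (purely illustrative, not from the leak): a spinner is just a React component whose render target is the terminal.

```tsx
import React, { useEffect, useState } from "react";
import { render, Text } from "ink";

const frames = ["-", "\\", "|", "/"];

function Spinner() {
  const [i, setI] = useState(0);
  useEffect(() => {
    // Advance the animation frame on a timer, exactly as in a web app.
    const t = setInterval(() => setI((n) => (n + 1) % frames.length), 80);
    return () => clearInterval(t);
  }, []);
  // State management and re-rendering work like React on the web;
  // the "DOM" is the terminal.
  return <Text color="cyan">{frames[i]} thinking…</Text>;
}

render(<Spinner />);
```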
Zod v4 for validation: Schema validation is everywhere. Every tool input, every API response, every config file.
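What “Zod everywhere” can look like in practice, as a hypothetical example of validating one tool’s input at the boundary (the schema is invented, not the actual one):

```typescript
import { z } from "zod";

// Illustrative input schema for a file-read tool.
const FileReadInput = z.object({
  path: z.string().min(1),
  offset: z.number().int().nonnegative().optional(),
  limit: z.number().int().positive().optional(),
});
type FileReadInput = z.infer<typeof FileReadInput>;

// parse() throws a descriptive error on malformed input, so every tool call,
// API response, and config file is checked before any code runs on it.
const input: FileReadInput = FileReadInput.parse({ path: "src/index.ts" });
console.log(input.path);
```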
~50 slash commands: From /commit to /review-pr to memory management — there’s a command system as rich as any IDE.
Lazy-loaded modules: Heavy dependencies like OpenTelemetry and gRPC are lazy-loaded to keep startup fast.
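The pattern here is plain dynamic import(). A sketch, assuming OpenTelemetry’s real @opentelemetry/api package but an invented call site:

```typescript
// Heavy dependency is parsed and executed only on first use,
// keeping CLI startup fast.
let otel: typeof import("@opentelemetry/api") | undefined;

async function getTracer() {
  if (!otel) {
    otel = await import("@opentelemetry/api"); // loaded once, on demand
  }
  return otel.trace.getTracer("cli");
}
```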
That surprised me, too. TypeScript is a very poorly-congealed (“designed” seems a bit strong) language.
Of the two popular scripting languages - Python and Ruby - Python probably makes more sense, as you can compile into actual binaries if you want.
For speed and parallel processing, which I’d assume they’d want, they’d be better off with Tcl or Erlang, both of which are much much better suited to this sort of work.
It’s very odd that Slashdot would report on this while remaining totally radio silent about the Anthropic/Claude Code subreddits blowing up this past week over Anthropic completely screwing over users with ridiculous usage limits. Suspicious, even.
I will never give Anthropic the time of day after the rug they’ve pulled on all of us. Screw them.
Euro-Office is a new open-source project supported by several European companies that aims to offer a “truly open, transparent and sovereign solution for collaborative document editing,” using OnlyOffice as a starting point. The project is positioned around European digital independence and familiar Office-style editing, though it has already drawn pushback from OnlyOffice over alleged licensing violations. “The company behind OnlyOffice is also based in Russia, and Russia is still heavily sanctioned by most European nations due to the country’s ongoing invasion of Ukraine,” adds How-To Geek. From the report:
Euro-Office is a new open-source project supported by Nextcloud, EuroStack, Wiki, Proton, Soverin, Abilian, and other companies based in Europe. The goal is to build an online office suite that can open and edit standard Microsoft Office documents (DOCX, PPTX, XLSX) and the OpenDocument format (ODS, ODT, ODP) used by LibreOffice and OpenOffice. The current design is remarkably close to Microsoft Office and its tabbed toolbars, so there shouldn’t be much of a learning curve for anyone used to Word, Excel, or PowerPoint.
Importantly, Euro-Office is only the document editing component. It’s designed to be added to cloud storage services, online wikis, project management tools, and other software. For example, you could have some Word documents in your Nextcloud file storage, and clicking them in a browser could open the Euro-Office editor. That way, Nextcloud (or Proton, or anyone else) doesn’t have to build its own document editor from scratch.
Euro-Office is based on OnlyOffice, which is open-source under the AGPL license. The project explained that “Contributing is impossible or greatly discouraged” with OnlyOffice’s developers, with outside code changes rarely accepted, so a hard fork was required. The company behind OnlyOffice is also based in Russia, and Russia is still heavily sanctioned by most European nations due to the country’s ongoing invasion of Ukraine. The project’s home page explains, “A lot of users and customers require software that is not potentially influenced or controlled by the Russian government.”
As for why OnlyOffice was chosen over LibreOffice, the project simply said: “We believe open source is about collaboration, and we look for opportunities to integrate and collaborate with the LibreOffice community and companies like Collabora.”
UPDATE: Slashdot reader Elektroschock shares a statement from OnlyOffice CEO Lev Bannov, expressing his concerns about Euro-Office’s inclusion of OnlyOffice software with the trademarks removed: “We liked the AGPL v3 license because its 7th clause allows us to ensure that our code retains its original attributes, so that users are able to clearly identify the developers and the brand behind the program…”
Bannov continued: “The core issue here isn’t just about what the AGPL license states, but about the additional provisions we, as the authors, have included. This is a critical distinction, even if some may argue otherwise. We firmly assert that the Euro-Office project is currently infringing on our copyright in a deliberate and unacceptable manner.”
“As the creators of ONLYOFFICE, we want to make our position unequivocally clear: we do not grant anyone the right to remove our branding or alter our open-source code without proper attribution. This principle is non-negotiable and will never change. We demand that the Euro-Office project either restore our branding and attributions or roll back all forks of our project, refraining from using our code without proper acknowledgment of ONLYOFFICE.”
“As for why OnlyOffice was chosen over LibreOffice, the project simply said: ‘We believe open source is about collaboration, and we look for opportunities to integrate and collaborate with the LibreOffice community and companies like Collabora.’”
Ok, since they just refuse to answer the question, does anyone else know why OnlyOffice was chosen over LibreOffice?
Just a guess but it seems like the Euro-Office team is keen on violating a license or two, and perhaps they found it easier/simpler to violate the OnlyOffice license.
IANAL, but it is OnlyOffice’s assumption that there is a violation. Euro-Office, in a commit message on GitHub:
Remove unenforceable and non-obligatory Section 7 additions from core
Under AGPLv3 Section 7, downstream recipients may remove terms that constitute “further restrictions” beyond what Section 7(a)-(f) permits, as affirmed by the FSF.
Logo retention requirement (Section 7(b)): Section 7(b) permits requiring preservation of “legal notices or author attributions”. A product logo is a trademark/brand element, not a legal notice or author attribution. It therefore exceeds the scope of 7(b), qualifies as a “further restriction” under Section 10, and may be removed.
Trademark disclaimer (Section 7(e)): Purely declaratory — the AGPLv3 does not grant trademark rights in any case. The disclaimer creates no affirmative obligation on the licensee and removing it changes no rights or obligations. There is no legal basis requiring its preservation.
Apparently AGPLv3 allows some additions in Section 7; what is allowed is defined in 7(a)-(f). OnlyOffice feels that 7(b)’s allowance for attribution means they can demand that the logo and brand elements stay. Euro-Office apparently disagrees.
Euro-Office also claims that 7(e) gives no legal basis for it.
I can’t assess who is right.
As to why OnlyOffice over Collabora: in my experience, since OnlyOffice uses MS’s OOXML format, there are a few fewer issues with MS Office files, including fewer layout issues. Another thing I once noticed was embedded media files in a PowerPoint file that worked in OnlyOffice and not in LibreOffice.
Although OnlyOffice is now officially based in the EU, some doubts remain because it originated in Russia.
I like Microsoft Office better than LibreOffice, which I used for a few years before I got MS Office at home through work. But since Snowden, it has been pretty clear to me that there probably were intentional back doors.
Best case, this was only used for (inter)national security. Worst case? This was abused to do industrial espionage, extort people, … With Trump? You bet they will abuse it without a second thought. We are switching back to Libreoffice now. Too bad though. I miss PowerPoint and OneDrive. Oh well… we will adapt. It is for the greater good.
The Trump administration on Monday issued a long-awaited proposed rule to open up retirement plans to alternative assets, paving the way for private equity and cryptocurrencies to be added to 401(k) accounts. The measure, announced by the U.S. Department of Labor, is intended to ease longstanding barriers to incorporating these less liquid and less transparent assets into American retirement plans. It follows an executive order from President Donald Trump last summer and could clear the way for alternative asset management firms to tap a large new source of capital.
Industry groups have argued private market investments can enhance long-term returns and diversification for retirement savers, while skeptics warn higher fees, complexity and limited liquidity could limit those gains and pose risks for retail investors. Some private market funds that are already available to wealthier individual investors have shown signs of strain in recent months. Private credit funds known as business development companies have seen a wave of withdrawals. Treasury Secretary Scott Bessent said the proposed rule was “an initial step” and aimed to be “mindful of the importance of protecting retirement assets.”
The guidance lays out how plan trustees, who have a legal fiduciary duty to act in the best interest of members, can incorporate these assets. They would have to “objectively, thoroughly, and analytically consider, and make determinations on factors including performance, fees, liquidity, valuation, performance benchmarks, and complexity,” the DOL said. Trustees who abide by them will be granted safe harbor that protects them from lawsuits, it added. The Supreme Court agreed earlier this year to hear one such case filed in 2019 by a former Intel employee claiming trustees made “imprudent” decisions by investing in hedge funds and private equity funds.
Of bad stock into your 401k. The YouTuber Patrick Boyle has a detailed video on the subject.
Basically, SpaceX is going to be valued at $1.5 trillion. However, it is impossible for it to reach that valuation in the real world.
SpaceX already has all the launch customers it can possibly get, even under the best-case scenario. And an unfavorable administration would almost certainly start looking for alternatives, because Elon meddled in a war.
So the only possible growth sector for SpaceX is launching its own satellites, specifically the ones for internet.
But that’s a dead end too, because there aren’t enough customers who can afford high-speed internet and also do not have access to some form of landline-based internet like cable or DSL.
The only other growth sector would be AI bullshit, but Elon has lost most of his engineers to other companies. SpaceX got this huge boost because Elon had a mystique and was talking about going to Mars, so a shitload of rocket engineers took lower pay than they could get in any other job and worked longer hours to work for SpaceX. That isn’t happening with Elon’s AI companies. So he can’t compete, and the stuff he’s building is barely better than what you could build yourself and run off your own GPU.
Everybody knows this, at least everybody who is investing that kind of money, so in order to get the kind of money he wants he’s doing a weird stock scheme that limits access to the stock in order to drive up the price. Basically a few insiders will get all the profit and it’s going to leave a huge amount of worthless stock that needs to be sent somewhere.
Normally it would be dumped into public pensions, but those have been maxed out with bad stock already. So your 401K is going to get hammered.
This is just the largest of many scams that are going to loot your retirement, and there’s basically nothing you can do about it except vote for pro-consumer politicians who want to regulate Wall Street. But those are going to be annoying people like Elizabeth Warren and AOC and Bernie Sanders, and frankly people don’t like them… And in politics, likeability is basically everything now.
What I’m saying is that if you’re about to retire, or even if you’re already retired, you’re fucked. You have money, somebody wants it, and they’re going to get it.
>>I don’t see why I care about government trying to protect people from themselves with this one?
Because when they lose everything and have nothing left to retire on, guess who will end up paying to bail them out? It’s not the scammers who got rich selling them snake oil; it’s the rest of us. And don’t think they won’t get a bailout: retired people are an important voting bloc and will support whoever promises them the most.
Bitcoin seems to follow a four-year up-and-down price cycle, and you don’t want to get caught holding it the next time it crashes in value.
Well this seems like a key asset to have in a long-term growth strategy then.
Remind me why I would want to put my retirement savings into an investment vehicle that can disappear overnight?
You’re looking at it wrong. It’s not about you or your wealth potential. It’s about making sure what wealth you have available to you is openly available to some already far richer asshole to pilfer. That’s the end-goal with this shit, to make sure the middle and lower classes have *ALL* their retirement funds stripped and handed to the wealthy. And this crypto-scam 401k link will be a very, VERY efficient way to make that happen.
Quadratic Gravity Theory Reshapes Quantum View of Big Bang
Researchers at the University of Waterloo say a new “quadratic quantum gravity” framework could explain the universe’s rapid early expansion without adding extra ingredients to Einstein’s theory by hand. The idea is especially notable because it makes testable predictions, including a minimum level of primordial gravitational waves that future experiments may be able to detect. “Even though this model deals with incredibly high energies, it leads to clear predictions that today’s experiments can actually look for,” said Dr. Niayesh Afshordi, professor of physics and astronomy at the University of Waterloo and Perimeter Institute (PI). “That direct link between quantum gravity and real data is rare and exciting.” Phys.org reports:
The research team found that the Big Bang’s rapid early expansion can emerge naturally from this simple, consistent theory of quantum gravity, without adding any extra ingredients. This early burst of expansion, often called inflation, is a central idea in modern cosmology because it explains why the universe looks the way it does today.
Their model also predicts a minimum amount of primordial gravitational waves, which are tiny ripples in spacetime geometry created in the first moments after the Big Bang. These signals may be detectable in upcoming experiments, offering a rare chance to test ideas about the universe’s quantum origins.
[…] The team plans to refine their predictions for upcoming experiments to explore how their framework connects to particle physics and other puzzles about the early universe. Their long-term goal is to strengthen the bridge between quantum gravity and observational cosmology.
“Quadratic gravity (QG) is an extension of general relativity obtained by adding all local quadratic-in-curvature terms to the Einstein–Hilbert Lagrangian.[1] Doing this makes the theory renormalizable.[1] This is one of numerous alternatives to general relativity.”
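Concretely, “adding all local quadratic-in-curvature terms” gives an action of schematically this form (a textbook expression, not taken from the new paper; conventions vary, and in four dimensions the Riemann-squared term can be traded away via the Gauss-Bonnet identity):

```latex
% Einstein-Hilbert term plus the quadratic-in-curvature corrections
S = \int d^{4}x \,\sqrt{-g}\,\left[ \frac{1}{2\kappa}\,R
      \;+\; \alpha\,R^{2} \;+\; \beta\,R_{\mu\nu}R^{\mu\nu} \right]
```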
Wouldn’t you have to be on the very edge of the universe to feel ancient gravitational waves? It’s not like they bounce like sound waves.
There is no edge; every point is at the precise center, including each of your two eyes. Because light, gravitational waves, and causality travel at a single fixed speed, the further away something is, the farther back in time you see it, until you reach a point where you cannot see beyond because it is too far back in time and approaches the Big Bang. Gravitational waves from the Big Bang will be rippling through all points, always, just as you can look in any direction and see the microwave background, which is the Big Bang stretched out to the point that it is far cooler and of longer wavelengths.
And don’t they dissipate the further they get from the source, making them undetectable?
Gravitational waves aren’t perceived directly, even in principle. For a nearly exact analogy you’re probably familiar with, think of two floating bits on a still lake. Perception occurs only along the surface of the water; the bits cannot see, measure, or perceive up and down. When a ripple passes, the two bits move toward and away from each other as the surface stretches and shrinks to accommodate the wave, and that distortion is what gets measured, not the wave itself. It boils down to the second derivative of the mass quadrupole moment tensor, and the strain falls off only linearly (as 1/r) with distance, unlike the intensity of other directly measured waves, which falls off with the square of the distance.
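For reference, that is the standard leading-order (quadrupole) formula, with the strain amplitude decaying as 1/r:

```latex
% Far-field gravitational-wave strain, leading (quadrupole) order:
% amplitude ~ 1/r, sourced by the second time derivative of the
% mass quadrupole moment, evaluated at retarded time.
h_{ij} \sim \frac{2G}{c^{4}\, r}\, \ddot{Q}_{ij}\!\left(t - \frac{r}{c}\right)
```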
And how does this explain the ridiculous notion that matter traveled faster than the speed of light shortly after the big bang?
The universe is the same everywhere at the largest scales, including being at the exact same temperature despite not being causally connected if you look at how causality works on our scales, times, and energy levels. The most reasonable explanation is that the universe was once all touching in close contact, even points now 90 billion light years away from each other. The universe is also expanding the same everywhere on the largest scales, so if you rewind time, everything goes back to one point even if there isn’t a “center”. So the crazy thing is to look at all the evidence for it (many other kinds of measurement also confirm this is how it looks) and say it’s all wrong because it does not meet personal expectations. That’s not how science works.
Relativity = gravity is represented by the curvature of spacetime. Curvature enters linearly, as R; the formula treats curvature linearly, so as things get closer and curvature spikes, the math just scales at a 1:1 rate.
Quadratic gravity = Squares the curvature. Doesn’t really change things much when everything is far apart, but heavily changes things when everything is close together.
Pros: prevents infinities and other problems when trying to reconcile quantum theory with relativity (“makes the theory renormalizable”). E.g. you don’t want to calculate “if I add up the probabilities of all of these possible routes to some specific event, what are the odds that it happens?” -> “Infinity percent odds”. That’s… a problem. Renormalization is a trick for electromagnetism that prevents this by letting the infinities cancel out. But it doesn’t work with linear curvature - gravitons carry energy, which creates gravity, which carries more energy… it explodes, and renormalization attempts just create new infinities. But it does work with quadratic curvature - it weakens high-energy interactions and allows for convergence.
Cons: Creates “ghosts” (particles with negative energies or negative probabilities, which create their own problems). There are various proposed solutions, but none that’s really a “eureka!” moment. Generally along the lines of “they exist but are purely virtual and don’t interact”, “they exist but they’re so massive that they decay before they can interact with the universe”, “they don’t exist, we’re just using the math out of bounds and need a different representation of the same”, “if we don’t stop at R^2 but also add in R^3, R^4, … on to infinity, then they go away”. Etc.
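A standard way to see the pro and the con together (a schematic identity from the quadratic-gravity literature, not specific to this paper): the quadratic terms give the propagator a 1/k^4 falloff at high energy, and that falloff decomposes into a healthy massless piece minus a massive one; the relative minus sign is the ghost.

```latex
% Schematic propagator decomposition in quadratic gravity:
% improved UV behavior (1/k^4 overall) = massless graviton - massive ghost.
\frac{1}{k^{2}\,(k^{2} + m^{2})}
  \;=\; \frac{1}{m^{2}}\left( \frac{1}{k^{2}} \;-\; \frac{1}{k^{2} + m^{2}} \right)
```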
The theory isn’t new, BTW. The idea is from 1918 (just a few years after Einstein’s theory of General Relativity was published), and the work that led to the “Pros” above is from 1977.
Scientists Shocked To Find Lab Gloves May Be Skewing Microplastics Data
“We may be overestimating microplastics, but there should be none,” said Anne McNeil, senior author of the study and U-M professor of chemistry, macromolecular science and engineering. “There’s still a lot out there, and that’s the problem.” From the report:
Researchers found that these gloves can unintentionally transfer particles onto lab tools used to analyze air, water, and other environmental samples. The contamination comes from stearates, which are not plastics but can closely resemble them during testing. Because of this, scientists may be detecting particles that are not true microplastics. To reduce this issue, U-M researchers Madeline Clough and Anne McNeil recommend using cleanroom gloves, which release far fewer particles.
Stearates are salt-based, soap-like substances added to disposable gloves to help them separate easily from molds during manufacturing. However, their chemical similarity to certain plastics makes them difficult to distinguish in lab analyses, increasing the risk of false positives when studying microplastic pollution.
“For microplastics researchers who have these impacted datasets, there’s still hope to recover them and find a true quantity of microplastics,” said researcher and recent doctoral graduate Madeline Clough. “This field is very challenging to work in because there’s plastic everywhere,” McNeil said. “But that’s why we need chemists and people who understand chemical structure to be working in this field.”
The findings have been published in the journal Analytical Methods.
Look, they don’t say that the microplastics are all from their gloves; they just say that their estimate might have been skewed by the gloves, i.e., “plz be aware your result may need more scrutiny.” No need to throw all their results out at once.
Actually - there is a very valid reason to throw previous results out of the window. In science, a contaminated sample completely invalidates the result. You can’t rely on any findings built on false premises.
We still know microplastics are bad, because deliberately introducing them to lab animals produces all sorts of bad outcomes. Our samples might be skewed toward showing more of them in more places than there actually are, so perhaps things are not quite as bad yet. On the other hand, the same bias could mean that microplastics become a problem at smaller quantities than previously thought.
So, when we say microplastics, we really mainly mean nanoplastics - the stuff made from, say, drinking hot liquids from low-melting-point plastic containers. And yeah, they very much look like a problem. The strongest evidence is for cardiovascular disease. The 2024 NEJM study, for example, found that patients with above-threshold levels of nanoplastics in carotid artery plaque were 4.5x more likely to suffer a heart attack. Neurologically, they cross the blood-brain barrier (and quite quickly). A 2023 study found that they cause alpha-synuclein to misfold and clump together, a hallmark of Parkinson’s and various kinds of dementia. Broadly, they’re associated with oxidative stress, neuroinflammation, protein aggregation, and neurotransmitter alterations. Oxidative stress arises as cells struggle to break down the nanoplastics inside them. They’re also associated with immunotoxicity, inflammatory bowel disease, and reproductive dysfunction, including elevating inflammatory markers, impairing sperm quality, and modulating the tumor microenvironment. With respect to reproduction, they’re also associated with epigenetic dysregulation, which can lead to heritable changes.
And here’s one of the things that gets me - and let me briefly switch to a different topic before looping back. All over, there’s a rush to ban polycarbonate due to concerns over a degradation product (bisphenol-A), because it’s (very weakly) estrogenic. But typical effective estrogenic activity from typical levels of bisphenol-A is orders of magnitude lower than that of phytoestrogens in food and supplements; bisphenol-A is just too rare to exert much impact. Phytoestrogens have way better PR than bisphenol-A, and people spend money buying products specifically to consume more of them. Some arguments against bisphenol-A focus on what type of estrogenic activity it can promote (more proliferative activity), but that falls apart given that different phytoestrogens span the whole gamut of types of activation. Earlier research arguing for an association with estrogen-linked cancer seems to have fallen apart in more recent studies. It does seem associated with PCOS, but it’s hard to describe it as a causal association, because PCOS is associated with all sorts of things, including diet (which could change the exposure rate vs. non-PCOS populations) and significant hormonal changes (which could change the clearance rate of bisphenol-A vs. non-PCOS populations). In short, bisphenol-A from polycarbonate is not without concern, but the concern level seems like it should be much lower than with nanoplastics.
Why bring this up? Because polycarbonate is a low-nanoplastic-emitting material. It is a quite resilient, heat-tolerant plastic, and thus - being much further from its glass transition temperature - is not particularly prone to shedding nanoplastics. By contrast, its replacements - polyethylene, polypropylene, polyethylene terephthalate, etc. - are highly associated with nanoplastic release, particularly with hot liquids. So by banning polycarbonate, we increase our exposure to nanoplastics, which are much better associated with actual harms. And unlike bisphenol-A, which is rapidly eliminated from the body, nanoplastics persist. You can’t get rid of them. If some big harm is discovered with bisphenol-A that suddenly makes the risk picture seem much bigger than with nanoplastics, we can then just stop using it, and any further harm is gone. But we can’t do that with nanoplastics.
People seriously need to think more about substitution risks when banning products. The EU in p
A bit more about the latter. Beyond organophosphates, the main other alternative is pyrethroids. These are highly toxic to aquatic life, and they’re contact poisons to pollinators just landing on a treated surface (some anti-insect clothing is soaked in pyrethroids for this effect). Also, neonicotinoids are often applied as seed coatings (which are taken up and spread through the plant), so exposure is primarily limited to the plant itself. The alternatives are commonly foliar sprays, which means drift and non-target impacts as well, such as in your shelterbelts, private gardens, neighbors’ homes, etc. You also have to use far higher total pesticide quantities with foliar sprays than with systemics, and sprays not only drift but also wash off, etc. Neonicotinoids can impact floral visitors, with adverse sublethal effects, but large pyrethroid sprayings, for example, can cause massive, immediately fatal knockdown events across whole populations of pollinators.
Regrettable substitution is a real thing. We need to factor it in better. And that applies to nanoplastics as well.
AI Data Centers Can Warm Surrounding Areas By Up To 9.1C
An anonymous reader quotes a report from New Scientist:
Andrea Marinoni at the University of Cambridge, UK, and his colleagues saw that the amount of energy needed to run a data centre had been steadily increasing of late and was likely to “explode” in the coming years, so wanted to quantify the impact. The researchers took satellite measurements of land surface temperatures over the past 20 years and cross-referenced them against the geographical coordinates of more than 8400 AI data centers. Recognizing that surface temperature could be affected by other factors, the researchers chose to focus their investigation on data centers located away from densely populated areas.
They discovered that land surface temperatures increased by an average of 2C (3.6F) in the months after an AI data center started operations. In the most extreme cases, the increase in temperature was 9.1C (16.4F). The effect wasn’t limited to the immediate surroundings of the data centers: the team found increased temperatures up to 10 kilometers away. Seven kilometers away, there was only a 30 percent reduction in the intensity. “The results we had were quite surprising,” says Marinoni. “This could become a huge problem.”
Using population data, the researchers estimate that more than 340 million people live within 10 kilometers of data centers, so live in a place that is warmer than it would be if the data centre hadn’t been built there. Marinoni says that areas including the Bajio region in Mexico and the Aragon province in Spain saw a 2C (3.6F) temperature increase in the 20 years between 2004 and 2024 that couldn’t otherwise be explained.
University of Bristol researcher Chris Preist said the findings may be more complicated than they look. “It would be worth doing follow-up research to understand to what extent it’s the heat generated from computation versus the heat generated from the building itself,” he says. For example, the building being heated by sunlight may be part of the effect.
The findings of the study, which has not yet been peer-reviewed, can be found on arXiv.
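A rough sketch of the before/after comparison described above, for concreteness (illustrative only; the actual methodology, controls, and data are in the preprint):

```typescript
// Hypothetical representation of one data center's satellite record.
interface LstSample {
  monthsFromStartup: number; // negative = before operations began
  tempC: number;             // satellite land-surface temperature
}

const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;

// Average land-surface temperature after startup minus the average before:
// the study reports ~ +2 C on average, up to +9.1 C in extreme cases.
function warmingDelta(samples: LstSample[]): number {
  const before = samples.filter((s) => s.monthsFromStartup < 0).map((s) => s.tempC);
  const after = samples.filter((s) => s.monthsFromStartup >= 0).map((s) => s.tempC);
  return mean(after) - mean(before);
}
```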
“These results are dramatically impressive, especially considering that the typical LST increase caused by the quintessential example of compound of anthropogenic activities - the urban heat island effect - has been estimated in the 4 and 6 [degrees] C interval. This apparent step function emphasize the clear effect of AI hyperscalers on their surrounding areas, so much that it can match the impact of “islands” of higher temperatures: therefore, we call this the data heat island effect.”
The comparison is always a cost/benefit analysis. Steel mills actually deliver a useful product, so the costs of their existence are justified. An “AI” datacenter produces hallucinations; its existence is completely unjustifiable.
Since concrete buildings and paved parking lots are part of the urban heat island effect, yes it does.
The question could be phrased more generally: how much of the *data* heat island effect is because it’s a data center and not another type of building.
“Recognizing that surface temperature could be affected by other factors, the researchers chose to focus their investigation on data centers located away from densely populated areas.”
You’ll have to look at the study data to see if that completely addresses your concern, but unsurprisingly the professional researchers have put some thought into what controls a study like this might need.
If global warming is indeed a hoax, why can’t any real ‘murican researchers disprove it? You’d be hailed as a hero to the MAGA camp and might be allowed to kiss dear leader’s ring. Everyone keeps coming to the same conclusion except for a few fringe nutjobs who start going off about angels and the bible.
Microsoft Plans To Build 100% Native Apps For Windows 11
Microsoft is reportedly shifting Windows 11 app development back toward fully native apps. Rudy Huyn, a Partner Architect at Microsoft working on the Store and File Explorer, said in a post on X that he is building a new team to work on Windows apps. “You don’t need prior experience with the platform… what matters most is strong product thinking and a deep focus on the customer,” he wrote. “If you’ve built great apps on any platform and care about crafting meaningful user experiences, I’d love to hear from you.” Huyn later said in a reply on X that the new Windows 11 apps will be “100% native.” TechSpot reports:
The description stands out at a time when many of Microsoft’s built-in tools, including Clipchamp and Copilot, rely on web technologies and Progressive Web App architectures. The company’s commitment to native performance suggests that some long-standing frustrations around responsiveness, memory use, and interface consistency could finally be addressed.
For Windows developers, Huyn’s comments hint at a change in direction. Microsoft’s recent development priorities have leaned heavily on web-based approaches, with Progressive Web Apps (PWAs) replacing or supplementing many native programs. […] Exactly which applications will be rebuilt, or how strictly “100% native” will be enforced, remains unclear. Some current Microsoft apps classified as native still depend on WebView for specific features. But the renewed emphasis already has developers paying attention.
Finally? I’m tired of seeing apps in Windows 11 that are an integral part of the operating system, and should therefore be native, but were built with that total, complete, and absolute shit that is “web apps.” Web apps only make sense when you really need independence from the OS, to the point of accepting a loss of performance and very bad resource usage. They have absolutely no place in software that would never be used on another operating system.
Re:They don’t want to make other OSes more attract
Just a few years ago, an app with almost the same functionality as WhatsApp (though it wouldn’t have video or audio, since that wasn’t feasible back then on dial-up or DSL connections) wouldn’t have used more than 50MB even under heavy use. Nowadays, however, an app with the same goals easily exceeds 1.5GB of RAM.
1.5 GB of RAM for an instant messaging app. It was possible to run the entire Windows XP system plus user applications on 128MB of RAM… 256MB was a luxury.
And for those complete idiots who keep going on and on about how “memory that isn’t used is wasted memory,” I have two things to say to those clowns:
1) There is absolutely no reason to use 1GB of RAM for a task that you can easily handle with just 10MB of RAM. Just because your computer has 32GB of RAM doesn’t mean you have to use all of it just for your application;
2) Your application isn’t the only thing running on the user’s computer. What happens if the dozens of processes running on the user’s computer all have the same idiotic idea of trying to reserve all the computer’s memory for themselves?
You’re on iOS, right? Disabling “smart quotes” in keyboard settings will fix it. One day Slashdot might vibe code some fixes and join the modern web… or they might not.
I used the hell out of MS Works. It did what I needed quickly and without the HUGE bloat of M$Office and Word. Though I came from a GML background so it was easy…
I’ve programmed Win32 for decades, and while it was fine for the time, much of the user-facing API is obsolete for modern GUI development, and some of the non-user-facing stuff too. But Microsoft really hasn’t produced a credible replacement for it and has shat out a succession of technologies, one after the other, that devs are supposed to use before Microsoft abandons them for the next - Win32 (and layers on top like MFC), WinForms, WPF (and Silverlight), UWP, Windows App SDK / WinUI.
Some of these technologies overlap, but each was intended to corral devs into making Metro apps or Windows Store apps and burn their bridges in the process. It went down like a lead balloon. Now they’re dialing back, trying to make WinUI somewhat agnostic to the version of Windows it’s running on, but who knows if it will stick. It’s not the only pain point, because Microsoft even extended the C++ language to deal with these APIs, with new types like “ref” and “partial” and hat notation to deal with garbage-collected objects, auto-generated classes, and other things that also impede portability.
So it’s no wonder that app developers have gone for web apps (and Qt), because it makes it easier to write portable apps and acts as insulation from Microsoft’s mercurial view of the world.
After 16 Years and $8 Billion, the Military’s New GPS Software Still Doesn’t Work
An anonymous reader quotes a report from Ars Technica:
Last year, just before the Fourth of July holiday, the US Space Force officially took ownership of a new operating system for the GPS navigation network, raising hopes that one of the military’s most troubled space programs might finally bear fruit. The GPS Next-Generation Operational Control System, or OCX, is designed for command and control of the military’s constellation of more than 30 GPS satellites. It consists of software to handle new signals and jam-resistant capabilities of the latest generation of GPS satellites, GPS III, which started launching in 2018. The ground segment also includes two master control stations and upgrades to ground monitoring stations around the world, among other hardware elements.
RTX Corporation, formerly known as Raytheon, won a Pentagon contract in 2010 to develop and deliver the control system. The program was supposed to be complete in 2016 at a cost of $3.7 billion. Today, the official cost for the ground system for the GPS III satellites stands at $7.6 billion. RTX is developing an OCX augmentation projected to cost more than $400 million to support a new series of GPS IIIF satellites set to begin launching next year, bringing the total effort to $8 billion.
Although RTX delivered OCX to the Space Force last July, the ground segment remains nonoperational. Nine months later, the Pentagon may soon call it quits on the program. Thomas Ainsworth, assistant secretary of the Air Force for space acquisition and integration, told Congress last week that OCX is still struggling.
The GAO found the OCX program was undermined by “poor acquisition decisions and a slow recognition of development problems.” By 2016, it had blown past cost and schedule targets badly enough to trigger a Pentagon review for possible cancellation.
Officials also pointed to cybersecurity software issues, a “persistently high software development defect rate,” the government’s lack of software expertise, and Raytheon’s “poor systems engineering” practices. Even after the military restructured the program, it kept running into delays and overruns, with Ainsworth telling lawmakers, “It’s a very stressing program” and adding, “We are still considering how to ensure we move forward.”
Subject is the question. Where is DOGE on the big stuff? The Pentagon wastes more every month in fraud, waste, and abuse than USAID spends annually. But somehow charity gets the axe and Raytheon keeps getting multi-billion-dollar contracts, no strings attached. Can anyone put aside the woke distraction and look at the serious problems?!?
Failing audits is frankly independent of failing programs. The audits usually have problems tracking money flows and then property within the government. The contractors’ expenditures are closely monitored. That doesn’t mean they’re in line, but they’re auditable. And when an audit discovers problems, there are ways for the government to respond. I’ve seen those applied rather frequently.
One common pattern is a program starts down the wrong path, and blows initial cost/schedule/performance. But that capability is needed badly (often because its predecessor program didn’t deliver). So the Service piles on more requirements and ‘readjusts the baseline’ for additional funding, because “if we don’t get it in this Program of Record, it’ll be at least a decade before we can start a new Program of Record to get what we need.” That just adds requirements to something that is already behind. If I had to guess what happened here, I bet there’s some of that flavor over the execution. In my experience, most programs started with the combination of unachievable or under-specified requirements AND unachievable/unrealistic schedule.
(A ‘Program of Record’, by the way, consists of an approved requirements document, an approved POM budget for the next 5 years showing the RDTE money, the OPA purchasing money, and the OMA maintenance money FOR EACH FISCAL YEAR. If you run out of RDTE money but haven’t finished the design, you’re in trouble. The third element is the approved procurement strategy, that says how you’ll buy it. That includes the kind of contract, firm fixed price or cost plus, the kinds of oversight, when and how prototypes will be delivered and tested, etc.)
RTX Corporation, formerly known as Raytheon, hasn’t produced a single thing of halfway decent quality in many decades and only hires the shittiest engineers.
Reagan was on a privatization kick. It resulted in wonderful growth for the Beltway Bandits. Another reason privatization does not work for government is that government is not private enterprise that can decide which markets to enter and which to exit. Programs are mandated by law. Sure, laws can change, but it is a long, arduous path. And you wouldn’t want it any other way. Changing things on a whim has brought the current U.S. to its knees, and the damage appears to be long-lasting.
Want to see privatization at work? Look at the U.S. health system. Those insurance companies use actuaries to determine who gets covered. A good team of actuaries can put a price on your grandmother and her cat and will if you ask them to. As a consequence, we have a health system which can send you to the poor house in under a year because of a medical condition.
“The Pentagon Wars” by Col. James G. Burton. Burton was part of the 1980s “Fighter Mafia” who got the F-16 built, against Pentagon tendencies for every new plane to be twice the weight and twice the cost of the last one. (The F-35 continues the tradition.)
John Boyd was part of those battles, recounted in Robert Coram’s excellent biography of him, titled “Boyd: The Fighter Pilot Who Changed the Art of War.”
They were the ones who publicized the $400 hammer and $600 toilet seat.
While I like those stories, and many pointed out serious problems in government procurement, some also fail to tell the whole story, because an outrage-inducing headline is what someone wanted.
We used bolts that cost a lot of money and looked the same as a 10-cent one from a hardware store, but ours were designed to perform to an exact spec, traceable to the ore, and tested to ensure they met spec. You don’t want that bolt to fail to perform at a critical moment when you are above, on, or below the ocean. We had bronze tools that cost a lot but looked like ordinary tools, because sparks around oxygen tanks can cause issues. My point: there is a lot of wasteful spending and overpriced stuff in government contracting, but sometimes there is more to the story than simply “Military spends $x00 for something that is $10 at Walmart…”
One big problem is how the government budgets. You generally have to spend all your money on an annual basis, and if you give back money that you saved, you risk getting less next year because “you didn’t need as much as you said last year…” So, come Q4, you go on a spending spree to spend whatever is left; trying to spend more is better than trying to spend less.
Samsung Is Bringing AirDrop-Style Sharing to Older Galaxy Devices
As spotted by Reddit users (via Tarun Vats on X), a Quick Share app update is rolling out via the Galaxy Store on older Samsung devices that appears to add support for AirDrop file sharing with Apple devices. Users report seeing the same new “Share with Apple devices” section we first saw on Galaxy S26 devices in the Settings app after updating Quick Share.
The update is reportedly showing up on Galaxy models ranging from the Galaxy S22 to last year’s Galaxy S25 series. The catch, however, is that the feature doesn’t seem to be working yet. It’s appearing on devices running One UI 8 as well as the One UI 8.5 beta, but enabling the toggle doesn’t activate the functionality for now.
Users say that turning on the feature doesn’t make their device visible to Apple devices, and no Apple devices show up in Quick Share either. It’s possible Samsung or Google still needs to enable it server-side, but it does confirm that broader rollout to older Galaxy devices is coming. The feature could arrive fully with the One UI 8.5 update.
I don’t know why I should care about limited compatibility for a subset of devices with another subset of devices. There’s some of everything in my home. I found a tool called LocalSend years ago that allows me to do mildly obnoxious data transfers between arbitrary devices regardless of platform.