The United Arab Emirates is rolling out what may be the world's most ambitious government AI overhaul. Within two years, the country wants 50 percent of all government sectors, services, and processes to run on "agentic AI," systems that analyze, decide, and increasingly act on their own. Sheikh Mohammed bin Rashid Al Maktoum announced the plan on X.
The UAE says this would make it the first government in the world to lean on autonomous AI systems at this scale. The idea is to turn AI into an "executive partner" that improves services, speeds up decisions, and boosts efficiency. Every federal employee will be trained to work with AI. The goal, according to Sheikh Mohammed, is a government that is "faster, more responsive, and more impactful."
AI systems that make decisions on their own are still prone to errors, can amplify biases baked into their training data, and face little oversight in a country without democratic checks and with limited press freedom. The risks of government AI use are showing up elsewhere too, including in the US, where Claude maker Anthropic has raised concerns about potential mass surveillance.
China plans to block tech companies from accepting US capital without government approval, according to Bloomberg. China's National Development and Reform Commission (NDRC) has reportedly told several private companies in recent weeks to reject US funding in their financing rounds. Among those affected are AI startups Moonshot AI and Stepfun, as well as TikTok parent company ByteDance, according to people familiar with the matter.
The move was triggered by Meta's $2 billion acquisition of AI startup Manus, announced in late 2025. The deal prompted an investigation in Beijing into potential illegal foreign investment and technology exports. While Manus was registered in Singapore, its founders were Chinese. Critics in China said the deal handed valuable AI technology to a geopolitical rival. The new rules could further cut off China's tech sector from Western venture capital.
Trump science advisor says Chinese actors are copying American AI at massive scale
The US government says it has evidence of large-scale, industrial distillation campaigns targeting American frontier models, with China as the primary culprit. Now the Trump administration is moving to fight back.
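For readers unfamiliar with the term: "distillation" generally means training a smaller student model to imitate a larger teacher model's output distributions rather than learning from labeled data alone. A minimal sketch of the core loss, in generic terms; everything here is illustrative and not tied to any specific reported campaign or model:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of raw scores."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between teacher and student output distributions.

    A student trained to minimize this loss learns to reproduce the
    teacher's behavior -- which is why large-scale querying of a
    frontier model's outputs can serve as training data for a copy.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that matches the teacher exactly incurs zero loss.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # → 0.0
```

In practice this is done over millions of prompt/response pairs, which is why the campaigns are described as "industrial" in scale.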
A small group of unauthorized users gained access to Anthropic's new AI model Claude Mythos, Bloomberg reports. Anthropic considers Mythos powerful enough to enable dangerous cyberattacks, which is why the company only makes it available to select partners like Apple, Amazon, and Cisco through its "Project Glasswing" program.
The users, members of a private Discord channel, got in on the day of the announcement. They pulled it off using the access credentials of a member who works as a contractor for Anthropic, along with publicly available information from a data leak at AI startup Mercor. According to Bloomberg, the group didn't use Mythos for cyberattacks but for harmless tasks like building simple websites for testing.
The source says the group also has access to a number of other unreleased Anthropic AI models. The company says it's investigating the incident. So far, there's no indication that the access extended beyond the external contractor's environment or that Anthropic's own systems were compromised.
Meta is installing new surveillance software on its US employees' computers that captures mouse movements, clicks, and keystrokes to train AI models. That's according to Reuters, citing internal memos.
The tool, called Model Capability Initiative (MCI), runs on work-related apps and websites and occasionally takes screenshots. The goal is to build AI agents that can handle work tasks on their own, like navigating dropdown menus or using keyboard shortcuts. Meta spokesperson Andy Stone said the data won't be used for performance reviews and that sensitive content is protected.
As part of the Agent Transformation Accelerator initiative, CTO Andrew Bosworth announced that agents will eventually take over the bulk of the work. Meta plans to cut ten percent of its global workforce starting May 20. Legal experts like Valerio De Stefano of York University warn that these practices would likely run afoul of GDPR in Europe.
Turning someone else's photo into a comic with AI doesn't automatically violate copyright. That's the ruling from a German Higher Regional Court on April 2, 2026 (case no. I-20 W 2/26). An animal photographer who sells underwater dog shots sued a former business partner who had fed her photo of a diving dog into AI software and posted the comic-style result on its website.
The court denied the appeal, finding that the AI image doesn't copy the protectable elements of the original, such as framing, perspective, lighting, and sharpness. The motif and subject themselves aren't protected. The judges cited a recent European Court of Justice ruling that focuses on the recognizable adoption of specific creative elements rather than the overall impression.
The court also clarified that AI-generated works only qualify for copyright protection when a human makes recognizably creative decisions; picking an AI suggestion or typing generic prompts isn't enough. The decision aligns with earlier rulings from other German courts and echoes the position of the US Copyright Office.