Claude's Excel and PowerPoint add-ins now share context across apps

Anthropic is updating its Claude add-ins for Excel and PowerPoint with shared context, reusable workflows, and broader cloud support.

Anthropic is adding three new features to its Claude for Excel and Claude for PowerPoint add-ins. The two add-ins now share conversation context, so Claude can read cell values, write formulas, and edit slides in a single session without users having to repeat information.

The company is also introducing what it calls Skills, reusable workflows that teams can share as one-click actions for tasks like financial model reviews or deck analysis. A preinstalled starter set covers common use cases.

Both add-ins are now available through Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry, letting companies pick the cloud provider that works best for them. All features are available to paying users on Mac and Windows.

Many of these capabilities are already built into the Claude app itself, particularly in Cowork mode, which is now also part of Microsoft's Copilot.

Grammarly's AI writing tips claim inspiration from experts who never agreed to participate

Grammarly is apparently using the names of journalists and authors without permission for an AI feature called "Expert Review." The feature offers writing tips supposedly "inspired" by experts like Stephen King or Neil deGrasse Tyson. Even deceased figures such as Carl Sagan are reportedly included. As The Verge, Platformer, and Wired report, the feature also lists numerous tech journalists, including Verge editor-in-chief Nilay Patel and other editors. Reportedly, none of them were asked beforehand.

Screenshot: Grammarly's Expert Review panel with AI writing suggestions from technology and style experts.
The Expert Review panel in Grammarly provides context-based writing recommendations.

After the backlash, Grammarly reportedly offered only an opt-out option via email, with no apology. Alex Gay, vice president of product marketing at parent company Superhuman, said the feature never claimed direct involvement from the experts. According to The Verge, some of the feature's source links pointed to spam sites or completely unrelated content, and expert descriptions contained outdated job titles. The AI suggestions show up in Google Docs looking like real user comments, which can easily mislead people.

Anthropic launches internal think tank to study AI's impact on society and security

Anthropic has launched the "Anthropic Institute," an internal think tank dedicated to studying how powerful AI affects society, the economy, and security. The institute will be led by co-founder Jack Clark, who is taking on a new role as "Head of Public Benefit."

The institute plans to research how AI is transforming jobs, what new risks emerge from misuse, what "values" AI systems express, and how humans can maintain control over self-improving AI systems.

The team consists of around 30 people drawn from three existing research groups: the Frontier Red Team, the Societal Impacts team, and the economics research team. Early hires include Matt Botvinick (formerly Google DeepMind), Anton Korinek (University of Virginia), and Zoe Hitzig (previously at OpenAI).

The launch comes at a turbulent time for the company. Anthropic has sued 17 federal agencies and the Executive Office of the President after being classified as a supply chain risk. According to The Verge, Clark said he has "no concerns" about research funding. Anthropic is also opening an office in Washington, D.C.


An AI agent hacked McKinsey's internal AI platform in two hours using a decades-old technique

Security firm Codewall turned an offensive AI agent loose on McKinsey’s internal AI platform Lilli, a system used by over 43,000 employees for strategy work, client research, and document analysis. No credentials, no insider knowledge, no human assistance. Within two hours, the agent had full read and write access to the production database.

Amazon gets court order blocking Perplexity's AI shopping agent

A federal court in San Francisco has granted Amazon an injunction against AI startup Perplexity, barring it from using its AI browser agent Comet to make purchases on Amazon.

Amazon sued Perplexity in November, accusing the startup of fraud because Comet didn't disclose when it was shopping on behalf of a real person and ignored Amazon's demands to stop. The case raises a growing legal question: how should courts handle AI agents taking on complex tasks like online shopping?

Judge Maxine Chesney ruled that Amazon presented strong evidence that Perplexity was accessing users' password-protected accounts with their permission but without Amazon's authorization. Perplexity must also delete any collected Amazon data and has one week to appeal.

There's an interesting wrinkle here: Amazon recently became a major investor in OpenAI, which also sees product research and online shopping as key AI chat features. So far, though, OpenAI reportedly hasn't cracked direct checkout in its chat interface. Amazon may be positioning itself to step in and own that piece of the puzzle.

ChatGPT now explains math and physics with interactive visualizations

OpenAI is rolling out dynamic visual explanations for more than 70 math and science concepts in ChatGPT. Users can tweak variables in real time and see the effects on graphs and formulas instantly. For now, the topics are geared mainly toward high school and college students, covering things like binomial squares, exponential decay, Ohm's law, compound interest, and trigonometric identities.

According to OpenAI, the interactive explanations are available now to all logged-in users worldwide, regardless of their subscription plan. Over time, OpenAI plans to expand the learning modules to cover additional subjects.