For Demis Hassabis, the path to AGI started in 1988 with an Amiga 500 and a game of Othello. 🕹️ His epiphany that software could act on our behalf remains at the heart of our work today as we apply the same logic to solving scientific grand challenges. Read more on @Fast Company → https://goo.gle/4sVmWkh
Google DeepMind
Research Services
London, London · 1,535,459 followers
We're committed to solving intelligence, to advance science and benefit humanity.
About us
We’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

Our long-term aim is to solve intelligence by developing more general and capable problem-solving systems, known as artificial general intelligence (AGI). Guided by safety and ethics, this invention could help society find answers to some of the world’s most pressing and fundamental scientific challenges.

We have a track record of breakthroughs in fundamental AI research, published in journals such as Nature and Science. Our programs have learned to diagnose eye diseases as effectively as the world’s top doctors, to save 30% of the energy used to keep data centres cool, and to predict the complex 3D shapes of proteins - which could one day transform how drugs are invented.
- Website: https://www.deepmind.google
- Industry: Research Services
- Company size: 501-1,000 employees
- Headquarters: London, London
- Type: Privately Held
- Founded: 2010
- Specialties: Artificial Intelligence and Machine Learning
Updates
Google DeepMind reposted this
Are LLMs stubborn or oversensitive to pushback? Both — at once. Our new paper in Nature Machine Intelligence identifies two competing biases in how LLMs handle their own confidence. First: LLMs become more confident in their initial answers simply because they gave them before — a choice-supportive bias established in human cognition, but striking in a stateless model with no memory of having provided a confidence rating before. Second: when challenged, LLMs markedly overweight opposing advice, updating 2–3× more strongly than a Bayesian ideal observer — and changing their minds far more often than warranted. Notably, this is asymmetric — they don't comparably overweight advice that agrees with them, distinguishing this from simple sycophancy. These biases coexist, pull in opposite directions, and generalise across multiple models — from factual queries to math problems. Joint work with Google DeepMind and the UCL Institute of Cognitive Neuroscience. 📄 Open access: https://rdcu.be/feOjz
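For readers curious what "updating 2–3× more strongly than a Bayesian ideal observer" means concretely, here is a minimal sketch in Python. The prior confidence, advisor accuracy, and log-odds formulation are illustrative assumptions for this example, not the paper's exact task setup:

```python
import math

def posterior_after_disagreement(prior: float, advisor_acc: float, weight: float = 1.0) -> float:
    """Confidence in the initial answer after an advisor disagrees.

    Bayesian update in log-odds space; `weight` scales the evidence.
    weight=1.0 is the ideal observer; the paper reports LLMs behaving
    as if the weight were roughly 2-3 when advice opposes their answer.
    """
    log_odds = math.log(prior / (1 - prior))
    # A disagreeing advisor with accuracy q contributes log((1-q)/q) of evidence.
    evidence = math.log((1 - advisor_acc) / advisor_acc)
    new_log_odds = log_odds + weight * evidence
    return 1 / (1 + math.exp(-new_log_odds))

prior, q = 0.80, 0.70
print(f"ideal observer:  {posterior_after_disagreement(prior, q):.2f}")       # ~0.63
print(f"3x overweighted: {posterior_after_disagreement(prior, q, 3.0):.2f}")  # ~0.24
```

In this toy setting an ideal observer moves from 80% to about 63% confidence after one disagreement; tripling the weight on the same evidence drops confidence to roughly 24%, which is the flavor of over-updating the paper describes.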
Decoupled DiLoCo is our latest approach to training AI models across multiple, geographically distant data centers. This kind of training normally relies on identical chips staying in near-perfect synchronization; if a single chip fails, the entire training run can stall. With Decoupled DiLoCo, we explored a way to train across a global network. Here are some of our results:
🔘 We trained a 12B parameter model simultaneously across four US regions, so we are no longer constrained by the size of a single data center.
🔘 The system seamlessly mixes older and newer chip generations without slowing down, unlocking more value from existing hardware.
🔘 If hardware breaks mid-run, it isolates the failure and keeps training.
We look forward to continuing to evolve our systems into more resilient, useful tools, helping us develop the next generation of AI. Find out more → https://goo.gle/4mNE36q
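The post does not include implementation details, but the DiLoCo family of methods it builds on is published: workers run many local optimizer steps independently and synchronize only rarely, applying an outer momentum update to the averaged parameter delta. Below is a toy sketch of that inner/outer loop; the quadratic loss, hyperparameters, and simulated workers are illustrative assumptions, not the production system:

```python
import numpy as np

# Toy sketch of DiLoCo-style training: long local phases with no
# communication, punctuated by rare outer-optimizer synchronizations.
rng = np.random.default_rng(0)
dim, workers, outer_steps, inner_steps = 8, 4, 20, 50
target = rng.normal(size=dim)          # toy "data": fit theta to target
theta = np.zeros(dim)                  # globally shared parameters
momentum = np.zeros(dim)               # outer Nesterov-style momentum
inner_lr, outer_lr, beta = 0.05, 0.7, 0.9

for _ in range(outer_steps):
    deltas = []
    for _ in range(workers):           # in reality: separate data centers
        local = theta.copy()
        for _ in range(inner_steps):   # long local phase, no communication
            grad = local - target + 0.1 * rng.normal(size=dim)  # noisy gradient
            local -= inner_lr * grad
        deltas.append(theta - local)   # "pseudo-gradient" for the outer step
    avg_delta = np.mean(deltas, axis=0)        # the only cross-site exchange
    momentum = beta * momentum + avg_delta
    theta -= outer_lr * (beta * momentum + avg_delta)  # Nesterov-style update

print(f"distance to target: {np.linalg.norm(theta - target):.4f}")
```

The point of the structure is that communication happens once per outer step rather than once per gradient step, which is what makes training across distant sites, and tolerating a failed worker by simply dropping its delta, practical.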
Only 25% of organizations have moved AI into production at scale. We’re working to change that. 🛠️ Accenture, Bain & Company, Boston Consulting Group (BCG), Deloitte, and McKinsey & Company are combining our advanced research with their expertise to bring AI innovation to more industries responsibly. 🤝 Find out more → https://goo.gle/42oa37A
Deep Research and Deep Research Max are our latest autonomous research agents, powered by Gemini 3.1 Pro. They can safely navigate both the web and your custom data, like internal docs and specialized financial information, to create professional-grade, fully cited reports.
🔵 Deep Research: Optimized for speed and efficiency. Perfect for interactive apps needing quicker responses.
🔵 Deep Research Max: Uses extra time to search and reason. Ideal for exhaustive context gathering and tasks happening in the background.
Now featuring arbitrary MCP support to securely connect and analyze your own or third-party data. Plus, it’s our first research agent to natively generate presentation-ready visuals that bring data to life. Start building via the Gemini API → https://goo.gle/4tAJBDC
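As a minimal sketch, the call below uses the google-genai Python SDK's standard generate_content entry point. The model id and the assumption that the agent is reachable this way are ours, not confirmed by the post; check the linked docs for the real interface and MCP configuration:

```python
# Hedged sketch: model id and interface are assumptions, not documented API.
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="deep-research",  # hypothetical id; see the Gemini API docs
    contents=(
        "Produce a fully cited report on recent approaches to training "
        "large models across multiple data centers."
    ),
)
print(response.text)  # the agent's report, with citations inline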
Google DeepMind reposted this
Announcing the AI for Science European Summer School https://lnkd.in/e3xeWXvY Join us in Athens, Greece, on July 16 and 17, 2026, for an exciting two-day program at the intersection of AI and scientific discovery. The workshop will be hosted at the beautiful Acropolis Museum. The summer school is organized by the Hellenic Institute of Advanced Studies, sponsored by Google DeepMind, and will feature a stellar lineup of leading researchers and innovators. We warmly invite applications from early-career scientists and industry members eager to engage with cutting-edge developments in AI for Science. 👉 Apply by May 10th https://lnkd.in/ecZYcjd7 🎉 A limited number of fellowships are available to cover travel and attendance costs for young scientists.
Gemini 3.1 Flash TTS is our most controllable text-to-speech model yet. With new Audio Tags, you can easily direct vocal style, delivery, and pace through text commands. Updates and features:
🔵 More natural-sounding speech
🔵 Support for 70+ languages, including Hindi, Japanese, and German
🔵 SynthID watermarking on all outputs
Developers can start building in preview via the Gemini API and Google AI Studio; the model is also rolling out in preview to enterprises in Vertex AI, and to everyone in Google Vids. Find out more → https://goo.gle/4mrqnxy
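As a sketch, the request below follows the pattern documented for earlier Gemini TTS previews in the google-genai Python SDK. The model id and the "[whispering]" audio tag are illustrative assumptions:

```python
# Hedged sketch of a TTS request; model id and tag syntax are assumptions.
from google import genai
from google.genai import types

client = genai.Client()
response = client.models.generate_content(
    model="gemini-3.1-flash-tts",  # hypothetical id; check Google AI Studio
    contents="[whispering] The results are in... and they're remarkable.",
    config=types.GenerateContentConfig(
        response_modalities=["AUDIO"],
        speech_config=types.SpeechConfig(
            voice_config=types.VoiceConfig(
                prebuilt_voice_config=types.PrebuiltVoiceConfig(voice_name="Kore")
            )
        ),
    ),
)
# Raw PCM audio bytes come back inline on the first candidate part.
audio = response.candidates[0].content.parts[0].inline_data.data
with open("out.pcm", "wb") as f:
    f.write(audio)
```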
We’re giving robots a better understanding of the physical world with Gemini Robotics-ER 1.6. 🤖 It’s significantly better at spatial and physical reasoning, meaning:
👉 It can use pointing to precisely identify objects
✅ Agents can intelligently choose where to retry a task or move forward
👓 Robots are able to read and interpret a variety of instruments
Gemini Robotics-ER 1.6 is also our safest robotics model yet: it can respect physical constraints when following instructions, like avoiding liquids or items over 20 kg, and it is 10% better at detecting human injury risks in videos. The model is available now in Google AI Studio and the Gemini API. Find out more. ↓ https://goo.gle/4cuBh1f
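Pointing queries against earlier Robotics-ER releases are documented to return JSON points with coordinates normalized to a 0-1000 range; a minimal sketch along those lines is below. The model id, prompt wording, and image file are illustrative assumptions:

```python
# Hedged sketch of the pointing workflow; model id and prompt are assumptions.
import json
from google import genai
from google.genai import types

client = genai.Client()
with open("workbench.jpg", "rb") as f:
    image = types.Part.from_bytes(data=f.read(), mime_type="image/jpeg")

response = client.models.generate_content(
    model="gemini-robotics-er-1.6",  # hypothetical id; check the docs
    contents=[
        image,
        'Point to the pressure gauge. Answer as JSON: '
        '[{"point": [y, x], "label": "<name>"}], coordinates normalized 0-1000.',
    ],
)
for item in json.loads(response.text):
    y, x = item["point"]
    print(f'{item["label"]}: ({x}, {y}) in 0-1000 image coordinates')
```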
What’s new in Gemma 4? Gemma 4 is our newest family of open models, bringing the same world-class research and technology from Gemini 3 directly to your local hardware. This means you can now run advanced reasoning, native vision and audio, and agentic tool-use on anything from high-end workstations to mobile phones. Learn more → https://goo.gle/4cb8LBE
Gemma 4 is here. 💻 We’ve built a new family of open models based on the same world-class research and tech as Gemini 3. “Open” means the model weights are yours to download, customize, and run on your own hardware.
⚖️ Four sizes: High-performance versions for workstations (31B Dense & 26B MoE) and highly optimized “Edge” versions (E4B & E2B) built specifically for mobile.
🧠 Advanced reasoning: Capable of multi-step planning and deep logic, with native vision and audio support.
🤖 Built for agents: Native tool use lets you build autonomous systems that can actually do things, like search databases or trigger APIs.
🔒 Apache 2.0 License: Complete flexibility to build, fine-tune, and deploy however you want.
Start building with Gemma 4 now in Google AI Studio, or download the model weights from Hugging Face, Kaggle, or Ollama. Find out more → https://goo.gle/4cb8LBE
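For anyone going the Hugging Face route, a minimal local-inference sketch with the transformers library is below. The checkpoint id is an assumption modeled on previous Gemma releases; substitute the actual Gemma 4 repo you download:

```python
# Hedged sketch of local inference; the repo id is a hypothetical placeholder.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-4-e4b-it",  # hypothetical id; use the real checkpoint
    device_map="auto",              # place weights on available GPU/CPU
)
messages = [{"role": "user", "content": "Plan a three-step database search."}]
result = generator(messages, max_new_tokens=128)
# The pipeline appends the assistant's reply to the chat; print just that turn.
print(result[0]["generated_text"][-1]["content"])
```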