News

  • (2026.1) 🔥 The Falcon-H1-Tiny Series is released! A new family of extremely compact yet remarkably powerful language models: 90M English, 90M Coder, 90M Tool/Function Calling, 100M Multilingual, and 90M / 0.6B Reasoning. More insights can be found in our blogpost.
  • (2026.1) The Learnable Multiplier paper is out. It extends the μ-parameterization used in Falcon-H1 with learnable scaling for improved LLM performance and was successfully applied in Falcon-H1-Tiny.
  • (2025.7) The technical report of Falcon-H1 is released.
  • (2025.6) The E2LM competition was accepted at NeurIPS’25, focusing on early-stage training evaluation of LLMs. Registration is now open 🚀 The first prize is 6,000 USD 🔥
  • (2025.5) 🔥 The Falcon-H1 Series is released! Featuring a novel hybrid Transformer–SSM architecture, Falcon-H1 sets a new state of the art (SoTA) at every scale (0.5B, 1.5B, 3B, 7B, and 34B) and outperforms leading Transformer models of double its size. Blogpost 🔍
  • (2025.5) 🚀 We just released Falcon Edge! A series of powerful, universal, fine-tunable 1.58-bit language models. Check the blogpost here.
  • (2024.12) Falcon 3 is out! Five base models + their Instruct versions: 1B, 3B, 7B, 10B, and a more advanced 7B Mamba model. Check the blogpost here.
  • (2024.10) The technical report for Falcon Mamba 7B is released; check it here.
  • (2024.08) Falcon Mamba 7B is released - the first strong attention-free 7B language model. Try it out on Hugging Face and check the blogpost here.

Recent Publications

Preprint:

Contact