Production incidents have more than tripled as teams move from low to high AI adoption. At the same time, PR size is up 51%, bugs per PR are up 28%, and the code arriving for review touches more files, carries more complexity, and is harder to reason about than before. This is what the quality side of the Acceleration Whiplash looks like. More code, moving faster, and that code is doing more. Which means that when something breaks, it breaks harder. AI is now in virtually every engineering organization, so this isn't a story about early adopters taking on risk. It's a story about the software running payroll, processing transactions, managing medical records, and keeping infrastructure online. The stakes are already there. The data is just making them visible. For engineering leaders already seeing this pattern: where is it showing up first in your environment? In review, in production, or somewhere earlier? #EngineeringLeadership #AIRisk #SoftwareReliability
About us
Faros is the system for running engineering with AI. We give engineering leaders visibility into how work operates across code, people, and systems, and control over how that work progresses through enforceable workflows and policy. This enables organizations to deploy AI effectively and improve engineering throughput with stronger cost efficiency.
- Website: https://www.faros.ai
- Industry: Software Development
- Company size: 51-200 employees
- Headquarters: San Francisco Bay Area
- Type: Privately Held
- Founded: 2019
- Specialties: developer productivity, developer experience, engineering transformation, AI transformation, AI technology evaluation and impact, engineering metrics, AI/ML, devops, GitHub Copilot impact, engineering modernization, engineering excellence, and cloud
Locations
- Primary: San Francisco Bay Area, US
Updates
AI is now the primary author of code in most engineering organizations. Acceptance rates for AI-generated code have climbed from 20% to 60%. In most teams, it isn't assisting developers anymore; it’s leading them. Two years of telemetry across 22,000 developers shows what that shift actually produced: throughput up, roadmaps moving, epics finally closing. And at the same time, bugs up 54%, incidents per PR more than tripled, and 31% more code shipping with no review at all. That's the Acceleration Whiplash. The AI Engineering Report 2026 is out today. Read the research: https://lnkd.in/gj7vijEJ #AIEngineering #EngineeringLeadership #SoftwareEngineering #DeveloperProductivity
Faros was cited in TechCrunch in a piece on tokenmaxxing, the practice of maximizing AI token budgets as a proxy for developer productivity. Our data is part of a pattern the industry is starting to see clearly: more code is being written, but a disproportionate amount of it isn't sticking. The question worth asking isn't how many tokens your engineers are consuming. It's what happens to the code those tokens produce — in review, in production, and in the incident queue. https://lnkd.in/gCAf-_yj #AIEngineering #EngineeringLeadership
Proud to see our co-founder Shubha Nabar featured by Microsoft for Startups. We turn coding AI into production-grade software engineering outcomes, and it's great to see that work recognized.
Excited to be featured in this Women’s History Month spotlight from Microsoft for Startups alongside such an inspiring group of founders and technology leaders. At Faros, we’re thinking deeply about what it means to build software in a world of humans and AI agents — and it’s energizing to be part of this broader movement shaping what comes next. https://lnkd.in/g3pqNzPf #WomenInTech #WomensHistoryMonth #MicrosoftForStartups #Startups
Autodesk just posted another stellar quarter: $1.96B in revenue and 38% non-GAAP operating margin 🔥 We’re proud to partner with their engineering organization as the visibility plane for continuous improvement — helping teams see how software gets built, find what’s slowing them down, and ship better, faster. Engineering visibility and control at enterprise scale. That’s what we do. See how Autodesk uses Faros 👇 https://lnkd.in/gUJZ6gik #EngineeringExcellence #DeveloperProductivity #Autodesk #FarosAI
Which AI models are developers choosing in 2026? We scoured Reddit, interviewed developers, and compiled a list of the top 5 front-runners:
- GPT-5.2 (and GPT-5.2-Codex)
- Claude Opus 4.5
- Gemini 3 Pro
- Claude Sonnet 4.5
- Cursor’s Composer-1
Hear what developers say about each model’s strengths, limitations, and what they’re best for 👉 https://lnkd.in/d9wYDfXC
"What percentage of our code is AI-generated?" It's the obvious question. It's also the wrong one. Tracking AI-generated code volume can make sense for governance, like assessing repository risk or long-term maintainability. But using it to evaluate productivity brings back a metric we already learned to distrust. Lines of code was a poor proxy for developer productivity, and it’s just as misleading for AI. Plus, technical limitations make accurate measurement nearly impossible, and changes in code quality—along with their downstream impact—don't show up when you're just counting lines. If you want to understand AI’s real impact, the most useful signals are outcome-based: cycle time, quality, delivery velocity, and how reliably value reaches production. We put together a practical list of outcome-based metrics, ranked by how directly they inform business decisions: https://lnkd.in/gHiwu_7M
Which AI coding agents are developers actually choosing in 2026? We scoured Reddit, interviewed developers, and compiled a ranked list.
🚀 Front-runners: Cursor, Claude Code, Codex, GitHub Copilot, Cline
⚡ Runners-up (strong, but increasingly niche): RooCode, Windsurf, Aider, Augment, JetBrains Junie, Gemini CLI
🌱 Emerging contenders to watch: AWS Kiro, Kilo Code, Zencoder
What mattered most wasn’t features. Developers are asking:
- “Will this burn my tokens?” → Token efficiency and price
- “Does it actually make me faster?” → Productivity impact
- “Can I trust the output?” → Code quality & hallucination control
- “Does it understand my repo?” → Context window & repo understanding
- “Where does my code go?” → Privacy, security & data control
See the full landscape through developers’ eyes 👉 https://lnkd.in/gv6muQfd
Which AI coding tools are you using daily, and why? Did we miss one you think belongs in the top tier?
Be honest — is your engineering org really tracking all five DORA metrics reliably? (And yes, there are 5.)
Most teams believe they are. Until someone asks why a metric changed… and no one can answer.
Reliable DORA measurement is genuinely hard, especially in enterprise environments. Many tools were designed for simpler setups and struggle when reality gets messy:
- Custom deployment processes break standard assumptions
- Monorepos blur team-level attribution
- Proxy metrics miss important context
- AI adoption increases volatility
- Newer metrics like rework rate remain inconsistent
The result is dashboards that look precise, but don’t hold up when leaders try to use them for decisions.
If your current DORA measurement software isn’t keeping up, here’s a practical selection guide to help evaluate other options for 2026: https://lnkd.in/gmqTZnJe
BTW, what does good look like nowadays? With DORA transitioning to more complex classifications, we put together a simplified reference table. Use this to identify your biggest gaps and track improvement over time.
Curious — which DORA metric has been hardest for your organization to measure accurately?
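To make the measurement challenge concrete, here is a minimal sketch of four of the classic DORA metrics computed from raw event records (the fifth, reliability, needs SLO data and is omitted). Every field name, timestamp, and the 30-day window below are hypothetical, not a real Faros or DORA schema:

```python
from datetime import datetime, timedelta

# Hypothetical deployment events over a 30-day window.
deployments = [
    {"ts": datetime(2026, 1, 5), "caused_failure": False},
    {"ts": datetime(2026, 1, 7), "caused_failure": True},
    {"ts": datetime(2026, 1, 12), "caused_failure": False},
]
# Hypothetical incident records (open/resolve timestamps).
incidents = [
    {"opened": datetime(2026, 1, 7, 9, 0), "resolved": datetime(2026, 1, 7, 13, 0)},
]
# Commit-to-deploy durations for changes shipped in the window.
lead_times = [timedelta(hours=20), timedelta(hours=30), timedelta(hours=26)]

days_observed = 30

# 1. Deployment frequency: deploys per day over the window.
deploy_frequency = len(deployments) / days_observed

# 2. Change failure rate: share of deploys that triggered a failure.
change_failure_rate = sum(d["caused_failure"] for d in deployments) / len(deployments)

# 3. Lead time for changes: mean commit-to-deploy duration.
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# 4. Time to restore service: mean incident open-to-resolve duration.
mttr = sum((i["resolved"] - i["opened"] for i in incidents), timedelta()) / len(incidents)
```

The arithmetic is trivial; the hard part the post describes is upstream of it — deciding which events count as a "deployment" or a "failure" at all once custom pipelines and monorepos enter the picture.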