Semgrep

Software Development

San Francisco, California 18,885 followers

Semgrep is the leader in code security for builders, helping teams catch and fix real security issues before they ship.

About us

Semgrep is the leader in code security for builders. Teams catch, flag, and fix real issues before they ship, powered by security that learns as you build. Built for builders and trusted by security, the platform unifies SAST, SCA, and secrets scanning, embedding protection directly into the development workflow so security begins where code is written and lives where developers work.

Semgrep combines deterministic static analysis with AI reasoning to power detection, triage, and remediation. This approach helps teams uncover real vulnerabilities, prioritize reachable risks, and fix issues faster. Customers report up to 80% fewer false positives across Code and Supply Chain, with 95% of findings validated by security reviewers across more than 6 million results.

Founded in San Francisco, Semgrep is backed by Menlo Ventures, Felicis Ventures, Lightspeed Venture Partners, Redpoint Ventures, and Sequoia Capital. It is recognized by Gartner in Application Security Testing and trusted by leading organizations, including Snowflake, Dropbox, and Figma. Learn more at semgrep.dev.

Website
https://semgrep.dev
Industry
Software Development
Company size
201-500 employees
Headquarters
San Francisco, California
Type
Privately Held
Founded
2017


Updates

  • Semgrep reposted this

    Great turnout at our latest OWASP LA event. Thank you to everyone who joined us! We had an awesome session with Erik Buchanan, Head of AI Engineering at Semgrep, diving into how AI is changing the way we approach application security. A few key takeaways from the session:
    → Traditional SAST is strong for things like SQLi and XSS, but struggles with deeper logic vulnerabilities
    → AI models help uncover issues like IDOR, broken authorization, and business logic flaws
    → The real power comes from combining both, not relying on AI alone
    As highlighted during the talk, multimodal analysis (AI + program analysis) can significantly improve detection quality, with higher true positives and fewer false positives. Beyond the technical side, it was great to see the community come together, from the conversations to the networking and everything in between. A special thank you to William L. for his continued support; this is the second time he’s generously contributed a $100 donation to the OWASP LA community. We truly appreciate your support! Huge thank you to Semgrep for sponsoring the event and supporting the OWASP LA community. 📸 Sharing a few moments from the event below. If you’re working in AppSec, AI, or security engineering, we’d love to see you at our next meetup. #OWASP #AppSec #CyberSecurity #AI #LosAngeles #Semgrep

  • What if upgrading and securing your codebase could happen automatically? In this hands-on workshop, we’ll walk through how Semgrep uses deep code understanding and AI to:
    🟢 Identify upgrade opportunities
    🟢 Recommend targeted fixes
    🟢 Automatically remediate issues at scale
    Join Jamie Reid live on April 29. 📅 8AM PT / 4PM UTC. Register here 👉 https://lnkd.in/gged5mUM

  • A few NPM packages used in agentic AI workflows were compromised to run malicious payloads via a postinstall hook.
    * pgserve embeds a PostgreSQL server for integration testing and local development; with pgvector built in, it is also used for AI applications that need agent memory or RAG
    * @automagik/genie dispatches parallel agents with a shared context and composes workflows
    A new rule from Semgrep Advisories checks whether you use these packages in your codebase: https://lnkd.in/gA9BcJmB

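    The published rule lives behind the short link above, but a usage check for compromised packages typically looks something like this sketch of a Semgrep rule (the rule id, message text, and exact pattern clauses here are illustrative, not the advisory rule itself):

    ```yaml
    rules:
      - id: compromised-agentic-npm-packages   # hypothetical id, not the published rule
        languages: [javascript, typescript]
        severity: ERROR
        message: >
          This file uses an NPM package (pgserve or @automagik/genie) that shipped
          compromised versions running a malicious postinstall payload. Audit your
          lockfile and remove the package or pin to a known-good version.
        pattern-either:
          - pattern: require("pgserve")
          - pattern: require("@automagik/genie")
          - pattern: import $X from "pgserve"
          - pattern: import $X from "@automagik/genie"
    ```

    A rule like this flags only source files that actually reference the packages; auditing the lockfile still matters, because a postinstall payload runs at install time even if the package is never imported.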
  • Excited to see that the Replit Security Agent, powered by Semgrep, is now available. Replit's Security Agent is a great example of what's possible when you pair the contextual reasoning of LLMs with the determinism and program analysis capabilities of Semgrep. We're excited to see this combination in the hands of the builder community.

    From Replit:

    Meet the Replit Security Agent, providing comprehensive app security reviews in minutes. You also get $5 in credits to get started, for a limited time. Security Agent’s hybrid static analysis and AI-scanning approach is the first of its kind:
    - Acts on a custom threat model to review the full codebase
    - Resolves vulnerabilities in parallel using background tasks
    - Reduces false positives by 90%
    Powered by Semgrep + HoundDog.ai. Keep vibe coding safely 🔒

  • One of the biggest takeaways from our latest workshop, 'Responding to Emergent Supply Chain Threats with Semgrep,' presented by Mehdi Mhamedi: Your CI isn't just a pipeline. It’s part of your attack surface. We need to start treating it with the same level of security as a production environment because it’s becoming a massive threat vector. Check out his tips here 👇

  • Semgrep reposted this

    Building secure systems today isn’t just about finding bugs. It’s about understanding how everything connects: logic, APIs, and user flows. That’s exactly what we’re diving into at our next OWASP Los Angeles event. Join us for an in-person session with Erik Buchanan, Head of AI Engineering at Semgrep, as we explore how AI-powered multimodal analysis is helping uncover complex vulnerabilities that traditional tools often miss.
    💡 We’ll discuss:
    • The real strengths (and limits) of LLM-powered security analysis
    • How AI agents + program analysis can work together effectively
    • Why combining approaches leads to better vulnerability detection
    Whether you're in AppSec, AI, or development, this is a great chance to learn, connect, and be part of the local security community.
    📍 Villas at Playa Vista, Malibu
    📅 Wednesday, April 22
    🕠 5:30 PM – 8:30 PM PDT
    🎟️ Register here: https://luma.com/ixu7q6pe
    Huge thanks to Semgrep for sponsoring and supporting the OWASP LA community 🙌 #OWASP #AppSec #CyberSecurity #AI #LosAngeles #TechEvents

  • The most important part of the Mythos news isn’t what most people seemed to focus on. There are some myths, some facts, and a lot of fear as the industry tries to wrap its head around uncertainty. More thoughts on that tension and anxiety from Semgrep's CEO in his latest post.

    Anthropic's Mythos has created a divide between security leaders, exploit developers, and the general AI enthusiast community over the facts of what has been achieved and the correct interpretation of those facts. Hype aside, Mythos is a significant advancement, and it's not easily replicable with existing free/flagship models today. The trend of advanced models has been giving offensive teams a big advantage, so I'm glad that Anthropic and OpenAI are gating model access for now. This has implications for our product and for practitioners, who will likely need to increase AppSec spend to prevent attackers from finding 0-days first.

    First, rebutting some of the dismissive takes:

    1. "The vulnerabilities Anthropic showed aren't a big deal, because humans can find them too." But humans didn't. Hindsight is 20/20, and with vulnerabilities – which are exponentially distributed in time – it is statistically harder to find a bug in old code vs. new code. Finding a 20-year-old bug is much harder than finding a bug from yesterday. Any fully automated capability that can find new bugs of this nature and produce exploits for them at this scale is novel; the only comparable capability in recent memory is fuzzing.

    2. "Nothing is new aside from the labs just spending money to find bugs; we know there are a ton of bugs in these pieces of software, people just don't care about spending the money to find them." The Mythos post was the first time Anthropic released the overall cost to discover a vulnerability and build an exploit in one project: $20K. I'm not in the market for exploits, but my sense is that's within striking distance of market price. And models will only get cheaper, whereas (human) labor costs will rise. Even more persuasive to me than the Anthropic numbers (which are only inference costs, not other human analysis time) is that small teams are putting up impressive numbers with existing models, like the "100+ kernel bugs in 30 days" research from two researchers: $4/bug (https://lnkd.in/g9f7kb6E)

    3. "Mythos isn't a big deal; I gave the vulnerable examples with a description of the vulnerability to this cheaper model and it found them too." Finding the same vulnerability after the fact doesn't mean much without an understanding of false positive rates, or of how generalizable your technique was. (Also, Mythos' biggest advance is in exploit generation rather than vulnerability detection – see below.) My colleague Kurt Boberg just published an analysis (https://lnkd.in/grK6EMtF) that more clearly illustrates how few models can find the vulnerabilities discussed in the Mythos blog post without being given very strong hints.

    --> Read the full post below, including how we accidentally had a role in Cal.com going closed source(!)

  • Tired of sorting through SCA alerts for dependencies your code doesn't even use? Rust is great for memory safety, but SCA noise is still a massive headache. We finally rolled out reachability coverage for Rust. That means you only get an alert if your code is actually hitting the vulnerable part of a dependency. If you're triaging CVEs that aren't even exploitable, this is for you. 👇 https://lnkd.in/gb83yMfv

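    To illustrate what "reachable" means here, consider a minimal, self-contained Rust sketch. The `parser_dep` module below is a hypothetical stand-in for a third-party crate, not a real dependency: a CVE filed against a function your code never calls would be reported as unreachable (and the alert suppressed), while a call path into the affected function would trigger it.

    ```rust
    // Hypothetical stand-in for a third-party crate with a known CVE.
    mod parser_dep {
        // Safe, commonly used entry point.
        pub fn parse(input: &str) -> usize {
            input.len()
        }

        // Imagine the CVE is filed against this function only.
        #[allow(dead_code)]
        pub fn parse_unchecked(input: &str) -> usize {
            // ...vulnerable logic would live here...
            input.len() * 2
        }
    }

    fn main() {
        // The codebase only ever calls the safe entry point. A
        // reachability-aware SCA tool sees no call path into
        // `parse_unchecked`, so the CVE alert is suppressed, even though
        // the vulnerable crate version appears in the lockfile.
        let n = parser_dep::parse("hello");
        println!("{}", n); // prints 5
    }
    ```

    A manifest-only scanner would flag this project regardless, because the vulnerable version is declared; reachability analysis adds the call-graph check that separates "declared" from "actually exploitable from my code."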


Funding

Semgrep: 4 total rounds

Last Round

Series D

US$ 100.0M
