## Installation

Install the Firecrawl CLI globally using npm:
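
A sketch of the install, assuming the package is published as `firecrawl-cli` and installs a `firecrawl` binary (all examples below assume that binary name; confirm both against the official docs):

```bash
# Package and binary names are assumptions; confirm against the npm registry
npm install -g firecrawl-cli
```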

## Authentication

Before using the CLI, you need to authenticate with your Firecrawl API key.

### Login
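
For example, assuming the subcommand matches this section's title:

```bash
# Authenticate and store your Firecrawl API key (subcommand name assumed)
firecrawl login
```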

### View Configuration
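
For example (`config` is an assumed subcommand name, taken from this section's title):

```bash
# Print the stored configuration (subcommand name assumed)
firecrawl config
```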

### Logout
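
For example (subcommand name assumed from the section title):

```bash
# Remove the stored credentials (subcommand name assumed)
firecrawl logout
```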

## Commands

### Scrape

Scrape a single URL and extract its content in various formats.
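
A minimal invocation (the URL can also be passed via `--url`, per the options table below):

```bash
# Scrape a single page; content is printed to stdout
firecrawl scrape https://example.com
```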

#### Output Formats
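
Formats are selected with the `--format` flag from the options table below, for example:

```bash
# Request several formats at once (comma-separated)
firecrawl scrape https://example.com --format markdown,links,screenshot
```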

#### Scrape Options
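
A fuller sketch combining several options from the table that follows:

```bash
# Main content only, wait 2s for JS rendering, save pretty-printed JSON
firecrawl scrape https://example.com \
  --format json \
  --only-main-content \
  --wait-for 2000 \
  --pretty \
  --output page.json
```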

| Option | Short | Description |
|---|---|---|
| `--url <url>` | `-u` | URL to scrape (alternative to the positional argument) |
| `--format <formats>` | `-f` | Output formats (comma-separated): markdown, html, rawHtml, links, images, screenshot, json |
| `--html` | `-H` | Shortcut for `--format html` |
| `--only-main-content` | | Extract only the main content |
| `--wait-for <ms>` | | Wait time in milliseconds for JS rendering |
| `--screenshot` | | Take a screenshot |
| `--include-tags <tags>` | | HTML tags to include (comma-separated) |
| `--exclude-tags <tags>` | | HTML tags to exclude (comma-separated) |
| `--output <path>` | `-o` | Save output to a file |
| `--pretty` | | Pretty-print JSON output |

### Crawl

Crawl an entire website starting from a URL.
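
For example:

```bash
# Start a crawl and block until it completes, showing progress
firecrawl crawl https://example.com --wait --progress
```

Without `--wait`, the crawl runs asynchronously and can be polled later (see below).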

#### Check Crawl Status
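
The `--status` flag comes from the options table below; passing the job ID as the positional argument is an assumption:

```bash
# Check on a previously started crawl (job ID placement is an assumption)
firecrawl crawl <job-id> --status
```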

#### Crawl Options
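
A sketch combining the scope controls below (the comma-separated path syntax is illustrative):

```bash
# Cap the crawl at 100 pages and depth 2, restricted to /docs and /blog
firecrawl crawl https://example.com \
  --limit 100 \
  --max-depth 2 \
  --include-paths /docs,/blog \
  --wait \
  --output crawl.json
```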

| Option | Description |
|---|---|
| `--url <url>` | URL to crawl (alternative to the positional argument) |
| `--wait` | Wait for the crawl to complete |
| `--progress` | Show a progress indicator while waiting |
| `--poll-interval <seconds>` | Polling interval (default: 5) |
| `--timeout <seconds>` | Timeout when waiting |
| `--status` | Check the status of an existing crawl job |
| `--limit <number>` | Maximum pages to crawl |
| `--max-depth <number>` | Maximum crawl depth |
| `--include-paths <paths>` | Paths to include (comma-separated) |
| `--exclude-paths <paths>` | Paths to exclude (comma-separated) |
| `--allow-subdomains` | Include subdomains |
| `--allow-external-links` | Follow external links |
| `--output <path>` | Save output to a file |
| `--pretty` | Pretty-print JSON output |

### Map

Discover all URLs on a website quickly.
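
For example:

```bash
# Print every URL discovered on the site
firecrawl map https://example.com
```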

#### Map Options
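
A sketch using the filters from the table below:

```bash
# Sitemap URLs matching "docs", capped at 200, emitted as pretty JSON
firecrawl map https://example.com --sitemap only --search docs --limit 200 --json --pretty
```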

| Option | Description |
|---|---|
| `--url <url>` | URL to map (alternative to the positional argument) |
| `--limit <number>` | Maximum URLs to discover |
| `--search <query>` | Filter URLs by search query |
| `--sitemap <mode>` | Sitemap handling: include, skip, only |
| `--include-subdomains` | Include subdomains |
| `--ignore-query-parameters` | Treat URLs that differ only in query parameters as the same URL |
| `--json` | Output as JSON |
| `--output <path>` | Save output to a file |
| `--pretty` | Pretty-print JSON output |

### Search

Search the web and optionally scrape the results.
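
For example:

```bash
# Search the web; the default is five results
firecrawl search "firecrawl cli"
```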

#### Search Options
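
A sketch combining the filters from the table below:

```bash
# News from the last week about web scraping, scraped to markdown
firecrawl search "web scraping" \
  --sources news \
  --tbs qdr:w \
  --limit 10 \
  --scrape \
  --scrape-formats markdown
```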

| Option | Description |
|---|---|
| `--limit <number>` | Maximum results (default: 5, max: 100) |
| `--sources <sources>` | Sources to search: web, images, news (comma-separated) |
| `--categories <categories>` | Filter by category: github, research, pdf (comma-separated) |
| `--tbs <value>` | Time filter: qdr:h (hour), qdr:d (day), qdr:w (week), qdr:m (month), qdr:y (year) |
| `--location <location>` | Geo-targeting (e.g., `"Berlin,Germany"`) |
| `--country <code>` | ISO country code (default: US) |
| `--scrape` | Scrape the search results |
| `--scrape-formats <formats>` | Formats for scraped content (default: markdown) |
| `--only-main-content` | Include only main content when scraping |
| `--json` | Output as JSON |
| `--output <path>` | Save output to a file |
| `--pretty` | Pretty-print JSON output |

### Credit Usage

Check your team’s credit balance and usage.
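
The subcommand name below is an assumption based on this section's title:

```bash
# Show your team's credit balance and usage (subcommand name assumed)
firecrawl credit-usage
```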

### Version

Display the CLI version.
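
The `--version` flag is listed under Global Options below:

```bash
# Print the installed CLI version
firecrawl --version
```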

## Global Options

These options are available for all commands:

| Option | Short | Description |
|---|---|---|
| `--api-key <key>` | `-k` | Override the stored API key for this command |
| `--help` | `-h` | Show help for a command |
| `--version` | `-V` | Show the CLI version |

## Output Handling

The CLI outputs to stdout by default, making it easy to pipe or redirect:
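
For example (the second command assumes `map` prints one URL per line):

```bash
# Redirect scraped markdown into a file
firecrawl scrape https://example.com > page.md

# Pipe discovered URLs into standard shell tools (assumes one URL per line)
firecrawl map https://example.com | grep "/blog/" | wc -l
```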

## Examples

### Quick Scrape
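
Scrape one page straight to a markdown file:

```bash
# -o is the short form of --output
firecrawl scrape https://example.com/pricing -o pricing.md
```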

### Full Site Crawl
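
Crawl a whole site with explicit limits and save the results:

```bash
# Up to 500 pages; wait with a progress indicator; pretty JSON to disk
firecrawl crawl https://example.com \
  --limit 500 \
  --wait \
  --progress \
  --output site.json \
  --pretty
```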

### Site Discovery
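
Take a fast URL inventory, including subdomains:

```bash
# Map up to 1000 URLs across the site and its subdomains
firecrawl map https://example.com --include-subdomains --limit 1000
```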

### Research Workflow
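
Combine search filters with scraping to collect sources in one step:

```bash
# Research results from the last month, scraped to markdown, saved as JSON
firecrawl search "llm web agents" \
  --categories research \
  --tbs qdr:m \
  --scrape \
  --scrape-formats markdown \
  --output research.json \
  --pretty
```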

