Installation

Install the Firecrawl CLI globally using npm:
CLI
# Install globally with npm
npm install -g firecrawl

# Or use npx without installing
npx firecrawl --help
If you are using the CLI from an AI agent such as Claude Code, you can install the skill with:
npx skills add firecrawl/cli
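
To confirm the install worked, print the CLI version:
CLI
# Verify the CLI is on your PATH
firecrawl --version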

Authentication

Before using the CLI, you need to authenticate with your Firecrawl API key.

Login

CLI
# Interactive login (opens browser or prompts for API key)
firecrawl login

# Login with browser authentication
firecrawl login --method browser

# Login with API key directly
firecrawl login --api-key fc-YOUR-API-KEY

# Or set via environment variable
export FIRECRAWL_API_KEY=fc-YOUR-API-KEY
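
For scripts and CI, the environment variable is usually the simplest route since no interactive login is needed. A minimal sketch:
CLI
# Non-interactive usage: export the key, then run commands directly
export FIRECRAWL_API_KEY=fc-YOUR-API-KEY
firecrawl https://example.com -o page.md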

View Configuration

CLI
# View current configuration
firecrawl config

# Or use the alias
firecrawl view-config

Logout

CLI
# Clear stored credentials
firecrawl logout

Commands

Scrape

Scrape a single URL and extract its content in various formats.
CLI
# Scrape a URL (default: markdown output)
firecrawl https://example.com

# Or use the explicit scrape command
firecrawl scrape https://example.com

Output Formats

CLI
# Get HTML output
firecrawl https://example.com --html

# Multiple formats (returns JSON)
firecrawl https://example.com --format markdown,html,links

# Available formats: markdown, html, rawHtml, links, images, screenshot, json

Scrape Options

CLI
# Extract only main content (removes navs, footers)
firecrawl https://example.com --only-main-content

# Wait for JavaScript rendering
firecrawl https://example.com --wait-for 3000

# Take a screenshot
firecrawl https://example.com --screenshot

# Include/exclude specific HTML tags
firecrawl https://example.com --include-tags article,main
firecrawl https://example.com --exclude-tags nav,footer

# Save output to file
firecrawl https://example.com -o output.md

# Pretty print JSON output
firecrawl https://example.com --format markdown,links --pretty
Available Options:

Option                      Short   Description
--url <url>                 -u      URL to scrape (alternative to positional argument)
--format <formats>          -f      Output formats (comma-separated): markdown, html, rawHtml, links, images, screenshot, json
--html                      -H      Shortcut for --format html
--only-main-content                 Extract only main content
--wait-for <ms>                     Wait time in milliseconds for JS rendering
--screenshot                        Take a screenshot
--include-tags <tags>               HTML tags to include (comma-separated)
--exclude-tags <tags>               HTML tags to exclude (comma-separated)
--output <path>             -o      Save output to file
--pretty                            Pretty print JSON output
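
These options compose freely. For example, a sketch that extracts a cleaned-up article as markdown:
CLI
# Main content only, wait 5s for JS, drop nav/footer, save to file
firecrawl https://example.com --only-main-content --wait-for 5000 --exclude-tags nav,footer -o article.md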

Crawl

Crawl an entire website starting from a URL.
CLI
# Start a crawl (returns job ID immediately)
firecrawl crawl https://example.com

# Wait for crawl to complete
firecrawl crawl https://example.com --wait

# Wait with progress indicator
firecrawl crawl https://example.com --wait --progress

Check Crawl Status

CLI
# Check crawl status using job ID
firecrawl crawl <job-id>

# Example with a real job ID
firecrawl crawl 550e8400-e29b-41d4-a716-446655440000
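
If you prefer polling manually instead of passing --wait, a shell loop works. The "completed" status text below is an assumption about the CLI's output, so adjust the match to whatever your version prints:
CLI
# Sketch: poll every 10 seconds until the job reports as completed
# ("completed" is an assumed status string, not confirmed output)
until firecrawl crawl 550e8400-e29b-41d4-a716-446655440000 | grep -q "completed"; do
  sleep 10
done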

Crawl Options

CLI
# Limit crawl depth and pages
firecrawl crawl https://example.com --limit 100 --max-depth 3 --wait

# Include only specific paths
firecrawl crawl https://example.com --include-paths /blog,/docs --wait

# Exclude specific paths
firecrawl crawl https://example.com --exclude-paths /admin,/login --wait

# Include subdomains
firecrawl crawl https://example.com --allow-subdomains --wait

# Custom polling interval and timeout
firecrawl crawl https://example.com --wait --poll-interval 10 --timeout 300

# Save results to file
firecrawl crawl https://example.com --wait --pretty -o results.json
Available Options:

Option                       Description
--url <url>                  URL to crawl (alternative to positional argument)
--wait                       Wait for crawl to complete
--progress                   Show progress indicator while waiting
--poll-interval <seconds>    Polling interval (default: 5)
--timeout <seconds>          Timeout when waiting
--status                     Check status of existing crawl job
--limit <number>             Maximum pages to crawl
--max-depth <number>         Maximum crawl depth
--include-paths <paths>      Paths to include (comma-separated)
--exclude-paths <paths>      Paths to exclude (comma-separated)
--allow-subdomains           Include subdomains
--allow-external-links       Follow external links
--output <path>              Save output to file
--pretty                     Pretty print JSON output

Map

Discover all URLs on a website quickly.
CLI
# Discover all URLs on a website
firecrawl map https://example.com

# Output as JSON
firecrawl map https://example.com --json

# Limit number of URLs
firecrawl map https://example.com --limit 500

Map Options

CLI
# Filter URLs by search query
firecrawl map https://example.com --search "blog"

# Include subdomains
firecrawl map https://example.com --include-subdomains

# Control sitemap usage
firecrawl map https://example.com --sitemap include   # Use sitemap
firecrawl map https://example.com --sitemap skip      # Skip sitemap
firecrawl map https://example.com --sitemap only      # Only use sitemap

# Ignore query parameters (dedupe URLs)
firecrawl map https://example.com --ignore-query-parameters

# Save to file
firecrawl map https://example.com -o urls.txt
firecrawl map https://example.com --json --pretty -o urls.json
Available Options:

Option                       Description
--url <url>                  URL to map (alternative to positional argument)
--limit <number>             Maximum URLs to discover
--search <query>             Filter URLs by search query
--sitemap <mode>             Sitemap handling: include, skip, only
--include-subdomains         Include subdomains
--ignore-query-parameters    Treat URLs that differ only in query parameters as the same URL
--json                       Output as JSON
--output <path>              Save output to file
--pretty                     Pretty print JSON output
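
Because map writes URLs to stdout, you can feed the results straight into scrape. A sketch, assuming one URL per line of output:
CLI
# Scrape every discovered URL into its own markdown file
# (filenames are derived naively from the URL)
firecrawl map https://example.com --limit 20 | while read -r url; do
  firecrawl "$url" -o "$(echo "$url" | tr '/:' '__').md"
done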

Search

Search the web and optionally scrape the results.
CLI
# Search the web
firecrawl search "web scraping tutorials"

# Limit results
firecrawl search "AI news" --limit 10

# Pretty print results
firecrawl search "machine learning" --pretty

Search Options

CLI
# Search specific sources
firecrawl search "AI" --sources web,news,images

# Search with category filters
firecrawl search "react hooks" --categories github
firecrawl search "machine learning" --categories research,pdf

# Time-based filtering
firecrawl search "tech news" --tbs qdr:h   # Last hour
firecrawl search "tech news" --tbs qdr:d   # Last day
firecrawl search "tech news" --tbs qdr:w   # Last week
firecrawl search "tech news" --tbs qdr:m   # Last month
firecrawl search "tech news" --tbs qdr:y   # Last year

# Location-based search
firecrawl search "restaurants" --location "Berlin,Germany" --country DE

# Search and scrape results
firecrawl search "documentation" --scrape --scrape-formats markdown

# Save to file
firecrawl search "firecrawl" --pretty -o results.json
Available Options:

Option                        Description
--limit <number>              Maximum results (default: 5, max: 100)
--sources <sources>           Sources to search: web, images, news (comma-separated)
--categories <categories>     Filter by category: github, research, pdf (comma-separated)
--tbs <value>                 Time filter: qdr:h (hour), qdr:d (day), qdr:w (week), qdr:m (month), qdr:y (year)
--location <location>         Geo-targeting (e.g., "Berlin,Germany")
--country <code>              ISO country code (default: US)
--scrape                      Scrape search results
--scrape-formats <formats>    Formats for scraped content (default: markdown)
--only-main-content           Include only main content when scraping
--json                        Output as JSON
--output <path>               Save output to file
--pretty                      Pretty print JSON output
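
As with scrape, these flags compose. For instance, a sketch of a daily news check scoped to the last 24 hours:
CLI
# Recent news results, saved as pretty-printed JSON
firecrawl search "firecrawl" --sources news --tbs qdr:d --limit 10 --pretty -o news.json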

Credit Usage

Check your team’s credit balance and usage.
CLI
# View credit usage
firecrawl credit-usage

# Output as JSON
firecrawl credit-usage --json --pretty

Version

Display the CLI version.
CLI
firecrawl version
# or
firecrawl --version

Global Options

These options are available for all commands:
Option             Short   Description
--api-key <key>    -k      Override stored API key for this command
--help             -h      Show help for a command
--version          -V      Show CLI version
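
The --api-key override is useful for one-off commands under a different team without touching your stored credentials:
CLI
# Run a single command with a different key (placeholder shown)
firecrawl https://example.com -k fc-OTHER-TEAM-KEY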

Output Handling

The CLI outputs to stdout by default, making it easy to pipe or redirect:
CLI
# Pipe markdown to another command
firecrawl https://example.com | head -50

# Redirect to a file
firecrawl https://example.com > output.md

# Save JSON with pretty formatting
firecrawl https://example.com --format markdown,links --pretty -o data.json
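
Because every command writes to stdout, standard shell tools compose naturally:
CLI
# Count how many URLs map discovers
firecrawl map https://example.com | wc -l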

Examples

Quick Scrape

CLI
# Get markdown content from a URL
firecrawl https://docs.firecrawl.dev

# Get HTML content
firecrawl https://example.com --html -o page.html

Full Site Crawl

CLI
# Crawl a docs site with limits
firecrawl crawl https://docs.example.com --limit 50 --max-depth 2 --wait --progress -o docs.json

Site Discovery

CLI
# Find all blog posts
firecrawl map https://example.com --search "blog" -o blog-urls.txt

Research Workflow

CLI
# Search and scrape results for research
firecrawl search "machine learning best practices 2024" --scrape --scrape-formats markdown --pretty