Overview
The ScrapeGraphAI MCP Server is a production-ready Model Context Protocol (MCP) server that connects Large Language Models (LLMs) to the ScrapeGraph AI API. It enables AI assistants such as Claude and Cursor to perform AI-powered web scraping, research, and crawling directly through natural-language interactions.
⭐ Star us on GitHub
If you find this server helpful, a star goes a long way. Thanks!
What is MCP?
The Model Context Protocol (MCP) is a standardized way for AI assistants to access external tools and data sources. By using the ScrapeGraphAI MCP Server, your AI assistant gains access to powerful web scraping capabilities without needing to write code.
Key Features
18 Powerful Tools
Scrape, extract, search, crawl, generate schemas, monitor scheduled jobs (with activity polling), and manage your account
Remote & Local
Use the hosted HTTP endpoint or run locally via Python
Universal Compatibility
Works with Cursor, Claude Desktop, and any MCP-compatible client
Production Ready
Robust error handling, timeouts, and reliability tested in production
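Under the hood, an MCP client invokes these tools with JSON-RPC 2.0 `tools/call` messages. As a rough illustration, a request for the markdownify tool might look like the sketch below; the argument key `website_url` is an assumption for illustration, so check the server's tool schemas for the real parameter names.

```python
import json

# Minimal JSON-RPC 2.0 request an MCP client sends to invoke a tool.
# The "website_url" argument key is illustrative, not confirmed by the
# server's schemas.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "markdownify",
        "arguments": {"website_url": "https://example.com"},
    },
}

print(json.dumps(request, indent=2))
```

Your MCP client builds and sends these messages for you; the point is only that each tool in the table below is addressed by its name plus a small arguments object.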
Available Tools
The MCP server exposes the following tools via API v2:

| Tool | Description |
|---|---|
| markdownify | Convert webpages to clean markdown (POST /scrape) |
| scrape | Fetch page content in any single format: markdown, html, screenshot, branding, links, images, summary (POST /scrape) |
| smartscraper | AI-powered structured extraction from a URL (POST /extract) |
| searchscraper | Search the web and extract structured results (POST /search) |
| smartcrawler_initiate | Start async multi-page crawl — markdown, html, links, images, summary, branding, or screenshot (POST /crawl) |
| smartcrawler_fetch_results | Poll crawl results (GET /crawl/:id) |
| crawl_stop | Stop a running crawl job (POST /crawl/:id/stop) |
| crawl_resume | Resume a stopped crawl job (POST /crawl/:id/resume) |
| generate_schema | Generate or augment a JSON Schema from a prompt (POST /schema) |
| credits | Check your credit balance (GET /credits) |
| sgai_history | Browse request history with pagination (GET /history) |
| monitor_create | Create a scheduled extraction job (POST /monitor) |
| monitor_list | List all monitors (GET /monitor) |
| monitor_get | Get monitor details (GET /monitor/:id) |
| monitor_pause | Pause a running monitor (POST /monitor/:id/pause) |
| monitor_resume | Resume a paused monitor (POST /monitor/:id/resume) |
| monitor_delete | Delete a monitor (DELETE /monitor/:id) |
| monitor_activity | Poll tick history for a monitor with pagination (GET /monitor/:id/activity) |
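The crawl tools above follow an initiate-then-poll pattern: smartcrawler_initiate starts the job and smartcrawler_fetch_results is called until it finishes. A minimal sketch of that polling loop, assuming the job reports its state under a `"status"` key (the actual response shape may differ):

```python
import time
from typing import Callable

def poll_crawl(fetch_status: Callable[[], dict],
               interval: float = 2.0,
               max_attempts: int = 30) -> dict:
    """Poll an async crawl job until it reaches a terminal state.

    `fetch_status` stands in for a GET /crawl/:id call made via the
    smartcrawler_fetch_results tool; the "status" key and its values
    are assumptions for illustration.
    """
    for _ in range(max_attempts):
        result = fetch_status()
        if result.get("status") in ("completed", "failed", "stopped"):
            return result
        time.sleep(interval)
    raise TimeoutError("crawl did not finish in time")

# Stubbed demo: the job reports "running" twice, then "completed".
responses = iter([{"status": "running"},
                  {"status": "running"},
                  {"status": "completed", "pages": 12}])
final = poll_crawl(lambda: next(responses), interval=0.0)
print(final["status"])  # prints "completed"
```

The same loop applies to monitor_activity, which likewise pages through tick history rather than blocking.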
Removed from v1:
sitemap, agentic_scrapper, markdownify_status, smartscraper_status (no v2 API equivalents).
Quick Start
Get Your API Key
Create an account and copy your API key from the ScrapeGraph Dashboard
Choose Your Client
Select your preferred AI assistant: Cursor or Claude Desktop
Setup Guides
Cursor Setup
Configure ScrapeGraph MCP in Cursor (remote-first)
Claude Desktop Setup
Configure ScrapeGraph MCP in Claude Desktop (remote-first)
Recommended: Remote HTTP Endpoint
The easiest way to get started is our hosted MCP endpoint.
Local Installation
Prefer running locally? You can install the Python package and run it via stdio. This gives you more control and doesn't require internet connectivity for the MCP connection itself. That said, the remote endpoint is recommended for most users, as it's simpler to set up and maintain.
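Whichever mode you choose, your client's configuration will reference the server in one of two shapes: a URL for the remote endpoint, or a command it spawns for a local stdio server. The sketch below shows that distinction only; every key, the endpoint URL, the command, and the env variable name are placeholders, so copy the real values from the Cursor or Claude Desktop setup guides.

```python
# Illustrative shape of an MCP client configuration covering both
# modes. All values in angle brackets are placeholders, and the key
# names themselves vary by client -- consult the setup guides.
mcp_servers = {
    "scrapegraph-remote": {
        # Hosted HTTP endpoint (recommended): just a URL plus your key.
        "url": "<hosted-endpoint-url>",
        "headers": {"Authorization": "Bearer <your-api-key>"},
    },
    "scrapegraph-local": {
        # Local install: the client spawns the server over stdio.
        "command": "<python-or-entrypoint>",
        "args": ["<server-module-or-script>"],
        "env": {"SGAI_API_KEY": "<your-api-key>"},
    },
}
```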
Use Cases
- Research & Analysis - Extract data from multiple sources for research
- Content Aggregation - Collect and structure content from websites
- Market Intelligence - Monitor competitors and market trends
- Lead Generation - Extract contact information and company data
- Data Collection - Build datasets from web sources
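As a concrete example, the lead-generation use case maps onto a single smartscraper call (POST /extract): you hand the API a URL and a natural-language prompt, and get structured data back. The payload keys, header name, and base URL below are assumptions for illustration, not the confirmed API shape:

```python
import json

# Hypothetical body of the POST /extract request behind the
# smartscraper tool. Field names are assumed; check the ScrapeGraph
# API reference for the real ones.
payload = {
    "website_url": "https://example.com/company/about",
    "user_prompt": "Extract the company name, address, and contact email",
}
headers = {
    "Content-Type": "application/json",
    "SGAI-APIKEY": "<your-api-key>",  # assumed header name
}

body = json.dumps(payload)
```

Through MCP you never build this request yourself: you phrase the prompt in chat, and the assistant calls the smartscraper tool on your behalf.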
Next Steps
- Read the detailed setup guide for Cursor
- Read the detailed setup guide for Claude Desktop
- Explore the full MCP Server documentation for advanced features
Ready to Start?
Choose your client and start scraping with AI!