Skyvern's MCP server connects your AI applications to the browser, letting them fill out forms, download files, research information on the web, and more.
You can connect your MCP-enabled applications to Skyvern in two ways:
- Local Skyvern server
  - Use your favourite LLM to power Skyvern
- Skyvern Cloud
  - Create an account at app.skyvern.com
  - Get an API key from the settings page; it will be used during setup
⚠️ REQUIREMENT: Skyvern currently only runs in a Python 3.11 environment ⚠️
- Install Skyvern

  ```bash
  pip install skyvern
  ```

- Configure Skyvern. The setup wizard guides you through the configuration process; you can connect to either Skyvern Cloud or a local Skyvern instance.

  ```bash
  skyvern init
  ```

- (Optional) Launch the Skyvern server. This step is only required in local mode.

  ```bash
  skyvern run server
  ```
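Because of the Python 3.11 requirement above, it can save time to check the interpreter before installing. A minimal sketch, assuming `python3` is whichever interpreter your environment uses:

```shell
# Sketch: verify the active interpreter meets Skyvern's Python 3.11 requirement
# before installing. `python3` stands in for whichever interpreter you use.
ver=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
if [ "$ver" = "3.11" ]; then
    echo "Python $ver detected - OK to run: pip install skyvern"
else
    echo "Python $ver detected - create a Python 3.11 environment first"
fi
```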
Demo videos: `claude_mcp_demo.mp4` (Claude Desktop), `cursor_search_jobs.mp4` (Cursor), `windsurf_demo_fast.mp4` (Windsurf)
`skyvern init` helps configure the following applications for you:
- Cursor
- Windsurf
- Claude Desktop
- Your custom MCP App?
Use the following config to set up Skyvern for any other MCP-enabled application:
```json
{
  "mcpServers": {
    "Skyvern": {
      "env": {
        "SKYVERN_BASE_URL": "https://api.skyvern.com",
        "SKYVERN_API_KEY": "YOUR_SKYVERN_API_KEY"
      },
      "command": "PATH_TO_PYTHON",
      "args": ["-m", "skyvern", "run", "mcp"]
    }
  }
}
```

Set `SKYVERN_BASE_URL` to `http://localhost:8000` if running locally. For the local `SKYVERN_API_KEY`, look in the `.env` file generated by `skyvern init`; for Skyvern Cloud, find it in your cloud console. (Comments are not valid JSON, so these notes live outside the config block.)

Glama's "release" flow is different from publishing the package to PyPI or to the official MCP Registry. For Glama, you need a runnable server container so that Glama can boot the MCP server, inspect the tool schema, and publish an installable release in their directory.
Use the dedicated Dockerfile in this directory for that flow. The root `Dockerfile` builds the full Skyvern app stack and starts `python -m skyvern.forge`, which is the wrong runtime for an MCP-only Glama release.
Recommended Glama setup:
- Claim the server in Glama. This repository already includes `glama.json`, so authorized maintainers can claim the `Skyvern-AI/skyvern` entry.
- In Glama's Dockerfile admin page, point the build to `Dockerfile.glama`.
- Keep the default command unless Glama explicitly asks for HTTP transport. The image defaults to `python -m skyvern run mcp` over stdio.
- If you want the hosted Glama release to use Skyvern Cloud browser sessions, add a real `SKYVERN_API_KEY` secret in Glama. Otherwise the container boots in local embedded mode, which is enough for inspection but not ideal for cloud-backed browser sessions.
- Deploy, wait for inspection to pass, then use Glama's "Make Release" action in the server admin UI.
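For orientation, an MCP-only image for this flow might look like the sketch below. The base image and layout here are illustrative assumptions; the actual `Dockerfile.glama` in this directory is authoritative.

```dockerfile
# Illustrative sketch only - defer to the real Dockerfile.glama in this directory.
FROM python:3.11-slim

# Install only what the MCP server needs, not the full app stack
RUN pip install --no-cache-dir skyvern

# Glama boots the server over stdio by default
CMD ["python", "-m", "skyvern", "run", "mcp"]
```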
If you are also publishing to the official MCP Registry, treat that as a separate step: the official registry uses package metadata and `server.json`, while Glama releases are container-based.