# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)
##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Batch (Multi-URL Scraping)](https://scrapfly.io/docs/scrape-api/batch)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Schedule](https://scrapfly.io/docs/scrape-api/schedule)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Schedule](https://scrapfly.io/docs/crawler-api/schedule)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Schedule](https://scrapfly.io/docs/screenshot-api/schedule)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [Captcha Solver](https://scrapfly.io/docs/cloud-browser-api/captcha-solver)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)
- [Native Browser MCP](https://scrapfly.io/docs/cloud-browser-api/mcp)
- [DevTools Protocol](https://scrapfly.io/docs/cloud-browser-api/cdp-reference)
##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [Rust](https://scrapfly.io/docs/sdk/rust)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Native Browser MCP

  Scrapium natively integrates the [Model Context Protocol](https://modelcontextprotocol.io/) (MCP). **The browser itself is the MCP server.** Websites register tools via the `navigator.modelContext` API, and the browser exposes them through the [WebMCP CDP domain](https://scrapfly.io/docs/cloud-browser-api/cdp-reference/WebMCP). When enabled on a Cloud Browser session, Scrapfly exposes a streamable-HTTP MCP endpoint that AI agents can connect to directly, no external server required.

## What is WebMCP?

 [WebMCP](https://developer.chrome.com/blog/webmcp-epp) is a proposed web standard developed jointly by Google and Microsoft under the [W3C Web Machine Learning Community Group](https://www.w3.org/groups/cg/webmachinelearning/). It lets websites expose structured, callable **tools** directly to AI agents through the browser's `navigator.modelContext` API.

Instead of clicking buttons and parsing screenshots, AI agents get a structured contract:

- **Tool discovery** - agents learn what actions a page supports (add to cart, search, fill form, etc.)
- **Typed inputs/outputs** - each tool declares a JSON schema for its parameters and return values
- **Reliable execution** - tool calls go through the browser's native API, not fragile DOM selectors
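Concretely, a tool under this contract is just a name, a human-readable description, and a JSON Schema for its input. The `addToCart` declaration below is a hypothetical sketch modeled on the examples later on this page, expressed as a plain Python dict:

```python
# Hypothetical WebMCP-style tool declaration: name, description, and a
# JSON Schema describing the typed input an agent must supply.
add_to_cart_tool = {
    "name": "addToCart",
    "description": "Add a product to the shopping cart",
    "inputSchema": {
        "type": "object",
        "properties": {
            "productId": {"type": "string"},
            "quantity": {"type": "integer", "minimum": 1},
        },
        "required": ["productId"],
    },
}

def missing_required(tool, args):
    """Minimal pre-flight check: which required fields are absent?"""
    return [k for k in tool["inputSchema"].get("required", []) if k not in args]

print(missing_required(add_to_cart_tool, {"quantity": 2}))  # ['productId']
```

Because the schema travels with the tool, an agent can validate its arguments before invoking anything, which is what makes these calls more reliable than DOM-selector automation.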
 
  **WebMCP vs MCP:** WebMCP operates **client-side within the browser** (via `navigator.modelContext`), while the broader [Model Context Protocol](https://modelcontextprotocol.io/) is a back-end protocol connecting AI platforms to service providers through hosted servers. They share the same wire format but serve different layers of the stack. 

## How It Works with Cloud Browser

When you enable MCP on a Cloud Browser session, the Scrapfly MCP Server connects to the browser via CDP (Chrome DevTools Protocol) and registers three types of tools:

1. **Browser interaction tools**: human-like actions (clickOn, fill, typeText, scroll, pressKey, hover, selectOption, etc.) that call [Antibot CDP commands](https://scrapfly.io/docs/cloud-browser-api/cdp-reference/Antibot) directly. These are always available and use realistic mouse/keyboard timing.
2. **Page-registered WebMCP tools**: if the web page supports [WebMCP](https://developer.chrome.com/blog/webmcp-epp), it can register custom tools (e.g. `addToCart`, `searchProducts`) via `navigator.modelContext.registerTool()`. These are discovered automatically and invoked via the `WebMCP.invokeTool` CDP command.
3. **Native CDP tools**: screenshot, JavaScript evaluation, accessibility snapshot, page URL, and performance metrics, all via CDP.
 
**Via Scrapfly MCP Server:** the Scrapfly MCP Server manages the browser session and proxies all tool calls through a single MCP connection. Browser actions use human-like [Antibot CDP commands](https://scrapfly.io/docs/cloud-browser-api/cdp-reference/Antibot) with realistic mouse/keyboard timing. Any MCP-compatible AI agent works out of the box.

**Direct Browser (chrome-devtools-mcp):** Scrapium exposes a **streamable-HTTP MCP endpoint** at `/mcp` that wraps the CDP WebSocket into the MCP protocol. This is the same approach as [chrome-devtools-mcp](https://github.com/GoogleChromeLabs/chrome-devtools-mcp) (Node.js), but built in Go and running alongside the browser. Any MCP client can connect directly to this endpoint. Browser interactions use [Antibot CDP commands](https://scrapfly.io/docs/cloud-browser-api/cdp-reference/Antibot) for human-like timing.



 

 

## WebMCP CDP Domain

 The browser exposes registered MCP tools through the [**WebMCP**](https://scrapfly.io/docs/cloud-browser-api/cdp-reference/WebMCP) CDP domain. This is the low-level interface that powers the streamable-HTTP MCP endpoint.

| Symbol | Type | Description |
|---|---|---|
| [`WebMCP.enable`](https://scrapfly.io/docs/cloud-browser-api/cdp-reference/WebMCP#method-enable) | Command | Start receiving tool events. Fires an initial `toolsAdded` event with all available Antibot tools and their JSON schemas. |
| [`WebMCP.callTool`](https://scrapfly.io/docs/cloud-browser-api/cdp-reference/WebMCP#method-callTool) | Command | Invoke a tool by name with arguments. Routes to the corresponding [Antibot](https://scrapfly.io/docs/cloud-browser-api/cdp-reference/Antibot) handler. Returns `success`, `result`, and `errorMessage`. |
| [`WebMCP.toolsAdded`](https://scrapfly.io/docs/cloud-browser-api/cdp-reference/WebMCP#event-toolsAdded) | Event | Fired when tools become available. Each tool carries a `name`, `description`, and `inputSchema`. |
| [`WebMCP.toolsRemoved`](https://scrapfly.io/docs/cloud-browser-api/cdp-reference/WebMCP#event-toolsRemoved) | Event | Fired when tools are removed. |

### Raw CDP Example

 You can interact with the browser directly over CDP. Browser interaction tools use the `Antibot.*` CDP commands, while page-registered tools use `WebMCP.invokeTool`:

 ```
// 1. Call Antibot.clickOn directly via CDP:
{"id": 1, "method": "Antibot.clickOn", "params": {
  "selector": {"type": "axNodeId", "query": "42"}
}}

// Result:
{"id": 1, "result": {"success": true}}

// 2. Enable WebMCP to discover page-registered tools:
{"id": 2, "method": "WebMCP.enable"}

// If the page registered tools, you receive toolsAdded:
{
  "method": "WebMCP.toolsAdded",
  "params": {
    "tools": [
      {"name": "addToCart", "description": "Add product to cart", "inputSchema": {...}, "frameId": "ABC123"},
      {"name": "searchProducts", "description": "Search by keyword", "inputSchema": {...}, "frameId": "ABC123"}
    ]
  }
}

// 3. Invoke a page-registered tool:
{"id": 3, "method": "WebMCP.invokeTool", "params": {
  "frameId": "ABC123",
  "toolName": "addToCart",
  "input": {"title": "Widget Alpha"}
}}

// Response (invocationId):
{"id": 3, "result": {"invocationId": "inv-001"}}

// Then toolResponded event:
{"method": "WebMCP.toolResponded", "params": {
  "invocationId": "inv-001",
  "status": "Success",
  "output": {"added": true, "product": "Widget Alpha"}
}}
```
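Each frame above is a plain JSON object with an incrementing `id`, a `method`, and optional `params`, sent over the CDP WebSocket. A minimal frame builder (illustrative helper, not part of any SDK):

```python
import itertools
import json

_ids = itertools.count(1)

def cdp_frame(method, params=None):
    """Serialize one CDP request frame with an auto-incremented id."""
    frame = {"id": next(_ids), "method": method}
    if params is not None:
        frame["params"] = params
    return json.dumps(frame)

# The click request from step 1 above:
click = cdp_frame("Antibot.clickOn", {"selector": {"type": "axNodeId", "query": "42"}})
# The discovery request from step 2:
enable = cdp_frame("WebMCP.enable")
print(click)
print(enable)
```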


### Available Tools

 When a browser session is opened, the MCP server registers these tool categories:

#### Browser Interaction Tools (`action_*`)

 Human-like browser interactions using the [Antibot CDP domain](https://scrapfly.io/docs/cloud-browser-api/cdp-reference/Antibot). All actions use realistic mouse movement (Bezier curves) and keyboard timing.

 | Tool | Description |
|---|---|
| `clickOn` | Click on an element with human-like mouse movement |
| `fill` | Click a form field, optionally clear, then type text |
| `typeText` | Type text at the current cursor position |
| `pressKey` | Press a keyboard key (Enter, Tab, Ctrl+a, etc.) |
| `scroll` | Scroll element into view, to bottom, or by delta |
| `hover` | Hover over an element |
| `selectOption` | Select from a dropdown (native or custom) |
| `waitForElement` | Wait for an element to appear |
| See all actions in the [Antibot CDP Reference](https://scrapfly.io/docs/cloud-browser-api/cdp-reference/Antibot) |
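From the agent side, each of these actions is a standard MCP `tools/call` request. A sketch of the JSON-RPC payload for a click; the bare `clickOn` name and the selector argument shape are illustrative assumptions (tool names may carry an `action_` or session prefix, as described elsewhere on this page):

```python
import json

# Standard MCP tools/call request (JSON-RPC 2.0). The tool name and
# argument shape here are illustrative assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "clickOn",
        "arguments": {"selector": {"type": "axNodeId", "query": "42"}},
    },
}
print(json.dumps(request, indent=2))
```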

#### Page-Registered WebMCP Tools (`webmcp_*`)

 If the web page registers tools via `navigator.modelContext.registerTool()`, they are discovered automatically and exposed as `webmcp_*` tools. These are semantic page-level actions, for example, an e-commerce page might expose `addToCart`, `searchProducts`, or `getProductDetails`.

#### Native CDP Tools (`browser_*`)

 | Tool | Description |
|---|---|
| `take_screenshot` | Capture the page as PNG (full page or element) |
| `evaluate_script` | Execute JavaScript and return the result |
| `take_snapshot` | Get page text content (DOM snapshot) |
| `get_performance_metrics` | Page load timing, resource counts, JS heap |

### Try It: Register and Call a Custom Tool

 This end-to-end example connects to a Cloud Browser with MCP enabled, injects a custom tool into the page via `navigator.modelContext`, then discovers and calls it through the WebMCP CDP domain.

**Python (Playwright)**

  ```
import asyncio
import json
from playwright.async_api import async_playwright

WS_URL = 'wss://browser.scrapfly.io?api_key=&enable_mcp=true'

async def main():
    async with async_playwright() as pw:
        browser = await pw.chromium.connect_over_cdp(WS_URL)
        page = browser.contexts[0].pages[0]

        # Navigate to any page
        await page.goto('https://web-scraping.dev/mcp-tools')

        # Register a custom MCP tool on the page
        await page.evaluate('''() => {
            navigator.modelContext.registerTool({
                name: 'getProductCount',
                description: 'Returns the number of products on the page',
                inputSchema: { type: 'object', properties: {} },
                handler: async () => {
                    const items = document.querySelectorAll('.product-item');
                    return { count: items.length };
                }
            });
        }''')

        # Get a CDP session
        cdp = await page.context.new_cdp_session(page)

        # Enable the WebMCP domain and collect tools
        tools = []
        cdp.on('WebMCP.toolsAdded', lambda params: tools.extend(params['tools']))
        await cdp.send('WebMCP.enable')

        # Print discovered tools
        print(f'Discovered {len(tools)} tools:')
        for tool in tools:
            print(f'  - {tool["name"]}: {tool["description"]}')

        # Find and call our custom tool
        result = await cdp.send('WebMCP.callTool', {
            'name': 'getProductCount',
            'arguments': '{}'
        })
        print(f'Result: {json.dumps(result)}')

        await browser.close()

asyncio.run(main())
```

**JavaScript (Puppeteer)**

 ```
const puppeteer = require('puppeteer-core');

const WS_URL = 'wss://browser.scrapfly.io?api_key=&enable_mcp=true';

(async () => {
    const browser = await puppeteer.connect({ browserWSEndpoint: WS_URL });
    const [page] = await browser.pages();

    // Navigate to any page
    await page.goto('https://web-scraping.dev/mcp-tools');

    // Register a custom MCP tool on the page
    await page.evaluate(() => {
        navigator.modelContext.registerTool({
            name: 'getProductCount',
            description: 'Returns the number of products on the page',
            inputSchema: { type: 'object', properties: {} },
            handler: async () => {
                const items = document.querySelectorAll('.product-item');
                return { count: items.length };
            }
        });
    });

    // Get a CDP session
    const cdp = await page.createCDPSession();

    // Enable the WebMCP domain and collect tools
    const tools = [];
    cdp.on('WebMCP.toolsAdded', (params) => tools.push(...params.tools));
    await cdp.send('WebMCP.enable');

    // Print discovered tools
    console.log(`Discovered ${tools.length} tools:`);
    tools.forEach(t => console.log(`  - ${t.name}: ${t.description}`));

    // Find and call our custom tool
    const result = await cdp.send('WebMCP.callTool', {
        name: 'getProductCount',
        arguments: '{}'
    });
    console.log('Result:', JSON.stringify(result));

    await browser.close();
})();
```


 Expected output:

 ```
Discovered 18 tools:
  - clickOn: Click on an element
  - fill: Fill a form field
  - type: Type text into an element
  ...
  - getProductCount: Returns the number of products on the page
Result: {"success": true, "result": "{\"count\":5}", "errorMessage": null}
```


 The output shows both the built-in Antibot tools and your custom `getProductCount` tool. The custom tool runs in the page context and returns structured data.

### Test with an AI Agent

 The best way to validate your MCP setup is to let an AI agent discover and call tools autonomously. Using the [Scrapfly MCP Server](https://scrapfly.io/docs/mcp/getting-started), an agent like Claude can open a Cloud Browser session, discover the available WebMCP tools, and use them to interact with the page.

 Try this prompt in the [MCP Playground](https://scrapfly.io/dashboard/playground/mcp) or with any MCP-compatible AI agent:

 ```
Open a cloud browser on https://web-scraping.dev/mcp-tools,
list the available page MCP tools,
and use them to search for products
```


The agent will:

1. Call `cloud_browser_open` with `enable_mcp=true`
2. Receive a `notifications/tools/list_changed` with the discovered WebMCP tools
3. See tools like `webmcp_{session}_clickOn`, `webmcp_{session}_fill`, etc. in `tools/list`
4. Call the tools to interact with the page (search, click, scroll)
5. Return the results as structured data
 
 This validates the full chain: browser allocation, MCP endpoint, tool discovery, and tool invocation. If the agent can call a `webmcp_*` tool and get a result, your setup works.

  **Tip:** Most users should connect to the [streamable-HTTP MCP endpoint](#mcp-endpoint) or use the [Scrapfly MCP Server](#mcp-server-integration) instead of using the CDP domain directly. The raw CDP domain is available for advanced use cases or when you need fine-grained control. 

 For the complete type definitions, parameters, and event payloads, see the [WebMCP CDP Reference](https://scrapfly.io/docs/cloud-browser-api/cdp-reference/WebMCP).

## Enabling MCP

 Add `enable_mcp=true` to your Cloud Browser WebSocket URL:

 ```
wss://browser.scrapfly.io?api_key=&enable_mcp=true
```
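If you assemble the URL in code, `enable_mcp` is an ordinary query parameter. A small sketch using only the standard library (the `YOUR_API_KEY` placeholder is yours to fill in):

```python
from urllib.parse import urlencode

def browser_ws_url(api_key: str, enable_mcp: bool = True) -> str:
    """Compose the Cloud Browser WebSocket URL with query parameters."""
    params = {"api_key": api_key, "enable_mcp": str(enable_mcp).lower()}
    return "wss://browser.scrapfly.io?" + urlencode(params)

print(browser_ws_url("YOUR_API_KEY"))
# wss://browser.scrapfly.io?api_key=YOUR_API_KEY&enable_mcp=true
```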


Or use the SDK:

**Python**

  ```
from scrapfly import ScrapflyClient, BrowserConfig

client = ScrapflyClient(key='')

config = BrowserConfig(
    enable_mcp=True,
)

ws_url = client.cloud_browser(config)
# ws_url contains the CDP WebSocket URL
# The MCP endpoint is returned in the allocation response
```

**TypeScript**

 ```
import { ScrapflyClient, BrowserConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: '' });

const wsUrl = client.cloudBrowser({
    enable_mcp: true,
});
// wsUrl contains the CDP WebSocket URL
// The MCP endpoint is returned in the allocation response
```

**Go**

 ```
client, _ := scrapfly.New("")

wsURL := client.CloudBrowser(&scrapfly.CloudBrowserConfig{
    EnableMCP: true,
})
// wsURL contains the CDP WebSocket URL
// The MCP endpoint is returned in the allocation response
```

**Rust**

 ```
use scrapfly::ScrapflyClient;
use scrapfly::cloud_browser::BrowserConfig;

let client = ScrapflyClient::new("");

let config = BrowserConfig {
    enable_mcp: true,
    ..Default::default()
};

let ws_url = client.cloud_browser_url(&config);
// ws_url contains the CDP WebSocket URL
// The MCP endpoint is returned in the allocation response
```


## MCP Endpoint

 When `enable_mcp=true` is set, the browser allocation response includes an `mcp_endpoint` field containing a streamable-HTTP URL that your AI agent can connect to:

 ```
{
    "browser_version": "Chrome/147.0.7727.55",
    "web_socket_debugger_url": "wss://...",
    "mcp_endpoint": "https://...:9223/mcp",
    "allocation_id": "..."
}
```
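A sketch of reading that response in code: hand the WebSocket URL to your browser driver and the MCP URL to your agent. The concrete values below are placeholders, not real endpoints:

```python
import json

# Placeholder allocation response mirroring the fields shown above.
allocation = json.loads("""
{
    "browser_version": "Chrome/147.0.7727.55",
    "web_socket_debugger_url": "wss://browser.example/devtools",
    "mcp_endpoint": "https://browser.example:9223/mcp",
    "allocation_id": "alloc-123"
}
""")

cdp_url = allocation["web_socket_debugger_url"]   # for Playwright/Puppeteer
mcp_url = allocation["mcp_endpoint"]              # for your MCP client
print(cdp_url)
print(mcp_url)
```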


 The MCP endpoint uses the [streamable-HTTP transport](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http), the standard MCP transport that streams responses over HTTP using server-sent events. Any MCP-compatible client can connect to it.

## Use Cases

- **AI agent browser automation** - let Claude, GPT, or custom agents interact with websites through structured tool calls instead of fragile selectors
- **Structured data extraction** - when a website exposes WebMCP tools, agents can call `getProducts()` or `search(query)` directly instead of scraping HTML
- **Form filling and checkout flows** - WebMCP tools provide typed schemas for form inputs, reducing errors in multi-step workflows
- **Testing AI-ready websites** - validate that your site's WebMCP tool registrations work correctly when called by an AI agent
 
## Compatibility

 WebMCP is an early-preview feature available since Chrome 146 Canary. Scrapium ships with these feature flags enabled:

 | Chrome Feature Flag | Purpose |
|---|---|
| `DevToolsWebMCPSupport` | Enables the WebMCP CDP domain in DevTools, allowing CDP clients to discover and invoke page-registered MCP tools |
| `WebMCPTesting` | Activates the `navigator.modelContext` JavaScript API for tool registration on all pages (not just flagged origins) |

 Scrapium's MCP implementation is compatible with [chrome-devtools-mcp](https://github.com/ChromeDevTools/chrome-devtools-mcp). Any MCP client that works with chrome-devtools-mcp works with Scrapium out of the box.

  **Early Preview:** WebMCP is a proposed standard under active development. The API surface and tool discovery behavior may change as the specification evolves through the W3C standardization process. See the [Chrome blog post](https://developer.chrome.com/blog/webmcp-epp) for the latest status. 

## Scrapfly MCP Server Integration

 If you use the [Scrapfly MCP Server](https://scrapfly.io/docs/mcp/getting-started), the browser's WebMCP tools are **automatically proxied** through your existing MCP connection. You don't need to connect to the Scrapium MCP endpoint directly.

### How it works

1. **Open a browser session.** Call [`cloud_browser_open`](https://scrapfly.io/docs/mcp/tools#cloud_browser_open) via the Scrapfly MCP Server. The server allocates a Cloud Browser and connects via CDP WebSocket.
2. **Browser interaction tools registered.** Action tools (`action_*`) are registered statically; they call `Antibot.*` CDP commands directly with realistic mouse/keyboard timing.
3. **Native CDP tools registered.** Native tools (`browser_*`) are registered for screenshot, JavaScript evaluation, accessibility snapshot, and performance metrics.
4. **Page WebMCP tools discovered.** If the page registers tools via `navigator.modelContext.registerTool()`, they are discovered via `WebMCP.toolsAdded` events and registered as `webmcp_*`.
5. **Agent notified.** Your agent receives a `notifications/tools/list_changed` notification with all available tools.
6. **Tool calls proxied.** `action_*` → `Antibot.*` CDP, `webmcp_*` → `WebMCP.invokeTool` CDP, all through a single MCP connection.
 
 ```
AI Agent (Claude, GPT, ...)
    │
    │  Single MCP connection
    ▼
Scrapfly MCP Server
    │
    │ tools/list includes:
    │  web_scrape, screenshot, ...
    │  + webmcp_abc123_search        ← from Scrapium
    │  + webmcp_abc123_addToCart     ← from Scrapium
    │
    │ tools/call proxied to Scrapium
    ▼
Scrapium (WebMCP)
    │
    │ Page executes tool
    ▼
Structured result
```
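The proxying in step 6 can be sketched as a pure routing function. This is a hypothetical simplification (the real server also tracks frame IDs and sessions), not published implementation code:

```python
def route_tool_call(tool_name: str, arguments: dict):
    """Map a prefixed MCP tool name to the CDP command that serves it."""
    if tool_name.startswith("action_"):
        # action_clickOn -> Antibot.clickOn; arguments pass through unchanged
        return "Antibot." + tool_name[len("action_"):], arguments
    if tool_name.startswith("webmcp_"):
        # webmcp_{session}_{tool} -> WebMCP.invokeTool (frameId omitted here)
        _, session_id, name = tool_name.split("_", 2)
        return "WebMCP.invokeTool", {"toolName": name, "input": arguments}
    raise ValueError(f"unknown tool prefix: {tool_name}")

print(route_tool_call("action_clickOn", {"selector": "..."}))
print(route_tool_call("webmcp_abc123_addToCart", {"title": "Widget Alpha"}))
```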

  **Single connection, all tools:** Your agent connects to the Scrapfly MCP Server once and gets scraping tools, screenshot tools, AND page-specific WebMCP tools, all through the same MCP transport. No need to manage a second connection to Scrapium's MCP endpoint. 

 When you call `cloud_browser_navigate`, old tools are removed and new ones are discovered on the target page. When you call `cloud_browser_close`, all dynamic tools are unregistered.
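A client tracking that lifecycle only has to fold `toolsAdded`/`toolsRemoved` events into a local registry. A minimal sketch: the `toolsAdded` payload follows the example earlier on this page, while the `toolsRemoved` payload shape (`toolNames`) is an assumption for illustration:

```python
class ToolRegistry:
    """Track which page-registered tools are currently callable."""

    def __init__(self):
        self.tools = {}

    def on_tools_added(self, params):
        for tool in params["tools"]:
            self.tools[tool["name"]] = tool

    def on_tools_removed(self, params):
        # Assumed payload shape: {"toolNames": [...]}
        for name in params["toolNames"]:
            self.tools.pop(name, None)

registry = ToolRegistry()
registry.on_tools_added({"tools": [{"name": "addToCart", "description": "Add product to cart"}]})
print(sorted(registry.tools))  # ['addToCart']
# After navigating away, the old page's tools are removed:
registry.on_tools_removed({"toolNames": ["addToCart"]})
print(sorted(registry.tools))  # []
```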

 [See the full Cloud Browser MCP tools documentation →](https://scrapfly.io/docs/mcp/tools#cloud_browser_open)

## Default Behavior

 MCP is **disabled by default**. When `enable_mcp` is not set (or set to `false`), the browser launches without the WebMCP feature flags and no MCP endpoint is exposed. This ensures zero overhead for sessions that don't need AI agent integration.

## Connection Parameters

 | Parameter | Required | Default | Description |
|---|---|---|---|
| `enable_mcp` | No | `false` | Enable Scrapium's built-in MCP support. When `true`, the browser is launched with WebMCP feature flags and an MCP streamable-HTTP endpoint is exposed in the allocation response. |

 See the full list of connection parameters in the [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started#connection-parameters) guide.