# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)

##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)
- [Native Browser MCP](https://scrapfly.io/docs/cloud-browser-api/mcp)
- [DevTools Protocol](https://scrapfly.io/docs/cloud-browser-api/cdp-reference)

##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [Rust](https://scrapfly.io/docs/sdk/rust)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# MCP Tools & API Specification


 

 

  The Scrapfly MCP Server provides **10 powerful tools** covering web scraping, screenshots, and live Cloud Browser sessions with AI-native MCP integration.

  **Pro Tip:** Always call `scraping_instruction_enhanced` first to get best practices and understand the `pow` (proof of work) parameter required by scraping tools. 

 

 

## Tools Overview

| Tool | Purpose | Tag |
|---|---|---|
| `scraping_instruction_enhanced` | Best practices | Call first! |
| `web_get_page` | Quick & simple | Simple |
| `web_scrape` | Advanced control | Advanced |
| `screenshot` | Visual capture | Simple |
| `info_account` | Usage stats | Simple |
| `cloud_browser_open` | Browser session | New |
| `cloud_browser_navigate` | Navigate page | New |
| `cloud_browser_close` | End session | New |
| `cloud_browser_sessions` | List sessions | New |
| `check_if_blocked` | Block detection | New |

## `scraping_instruction_enhanced`

 Returns critical instructions and best practices for using Scrapfly scraping tools. Helps AI models make intelligent decisions about which parameters to use.

  **Important:** Call this tool **before** using `web_get_page` or `web_scrape`. It provides the required `pow` parameter and parameter guidance. 

##### Provides:

- **Parameter guidance** - When to use which options
- **Best practices** - Optimize for success rate and cost
- **Error handling** - What to do when things fail
- **POW value** - Required proof of work parameter
 

#### Example Usage

 ```
{ "tool": "scraping_instruction_enhanced" }
```

 

   

 

 

## `web_get_page`

 Quick page fetch with sane defaults. Perfect for when you just need the content fast without complex configuration.

##### What it does automatically:

- Renders JavaScript by default
- Returns clean markdown or text content
- Handles anti-scraping protection
- Uses optimal defaults for most websites
 

 

 

 | Parameter | Type | Description |
|---|---|---|
| **Required Parameters** |
| `url` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_url) | string | Target URL to scrape (must start with http:// or https://) |
| `pow` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_pow) | string | Proof of work value from `scraping_instruction_enhanced` |
| **Optional Parameters** |
| `format` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_format) | string | Output format: `markdown` (default), `text`, `json`, `clean_html`, `raw` |
| `format_options` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_format_options) | array | Format modifiers: `no_links`, `no_images`, `only_content` |
| `country` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_country) | string | ISO country code for proxy (e.g., `us`, `gb`, `de`) |
| `proxy_pool` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_proxy_pool) | string | Proxy type: `public_datacenter_pool` (default), `public_residential_pool` |
| `rendering_wait` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_rendering_wait) | integer | Wait time in milliseconds before capturing content |
| `capture_page` | boolean | Also capture a screenshot of the page |
| `capture_flags` | array | Screenshot options: `load_images`, `dark_mode`, `block_banners`, `print_media_format`, `high_quality` |
| `extraction_model` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_extraction_model) | string | Auto-extract structured data: `article`, `product`, `job_posting`, etc. |

#### Example Usage

 ```
{
  "tool": "web_get_page",
  "parameters": {
    "url": "https://news.ycombinator.com",
    "pow": "obtained_from_instruction_tool",
    "format": "markdown",
    "format_options": ["only_content"]
  }
}
```

 

   

 

 

## `web_scrape`

 Advanced scraping tool with full control over every aspect. JavaScript rendering, custom headers, cookies, POST requests, and sophisticated browser automation scenarios.

##### Enterprise-grade control:

- **Browser automation** - Multi-step interactions
- **Authentication** - Login flows with cookies
- **Custom requests** - POST/PUT/PATCH
- **LLM extraction** - AI-powered data extraction
 

 

 

 | Parameter | Type | Description |
|---|---|---|
| **Required Parameters** |
| `url` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_url) | string | Target URL to scrape |
| `pow` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_pow) | string | Proof of work value from `scraping_instruction_enhanced` |
| **Optional Parameters** |
| `render_js` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_render_js) | boolean | Enable JavaScript rendering with headless browser (default: true)  [ JavaScript rendering guide](https://scrapfly.io/docs/scrape-api/javascript-rendering) |
| `js_scenario` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_js_scenario) | array | Browser automation steps: `click`, `fill`, `scroll`, `wait`, `execute`, `condition`  [ Complete JS scenario reference](https://scrapfly.io/docs/scrape-api/javascript-scenario) |
| `asp` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_asp) | boolean | Enable Anti Scraping Protection (default: true)  [ Learn about ASP](https://scrapfly.io/docs/scrape-api/anti-scraping-protection) |
| `extraction_prompt` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_extraction_prompt) | string | LLM prompt for AI-powered data extraction  [ LLM extraction guide](https://scrapfly.io/docs/scrape-api/extraction#llm_extraction) |
| `extraction_model` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_extraction_model) | string | Pre-trained extraction model (product, article, etc.)  [ Available models](https://scrapfly.io/docs/scrape-api/extraction#ai_automatic_extraction) |
| `format` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_format) | string | Output format: `markdown` (default), `text`, `json`, `clean_html`, `raw` |
| `format_options` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_format_options) | array | Format modifiers: `no_links`, `no_images`, `only_content` |
| `method` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_method) | string | HTTP method: GET (default), POST, PUT, PATCH, OPTIONS |
| `headers` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_headers) | object | Custom HTTP headers |
| `cookies` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_cookies) | array | Cookies to send with request  [ Session management](https://scrapfly.io/docs/scrape-api/session) |
| `body` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_body) | string | Request body for POST/PUT/PATCH |
| `screenshots` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_screenshots) | array | Capture multiple screenshots (fullpage or CSS selector)  [ Screenshot API reference](https://scrapfly.io/docs/scrape-api/screenshot) |
| `screenshot_flags` | array | Screenshot options: `load_images`, `dark_mode`, `block_banners`, `print_media_format`, `high_quality` |
| `cache` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_cache) | boolean | Enable response caching  [ Caching guide](https://scrapfly.io/docs/scrape-api/cache) |
| `cache_ttl` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_cache_ttl) | integer | Cache TTL in seconds when cache is true |
| `cache_clear` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_cache_clear) | boolean | If true, bypass & clear cache for this URL |
| `retry` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_retry) | boolean | Enable automatic retry on transient errors (default: true) |
| `timeout` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_timeout) | integer | Server-side timeout in milliseconds |
| `rendering_wait` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_rendering_wait) | integer | Wait time in milliseconds before returning response  [ Learn about JavaScript rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering) |
| `wait_for_selector` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_wait_for_selector) | string | Wait for this CSS selector to appear in the page when rendering JS |
| `js` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_js) | string | JavaScript to execute on the page |
| `lang` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_lang) | array | Languages to use for the request (Accept-Language header) |
| `country` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_country) | string | Proxy location (ISO 3166-1 alpha-2 code, e.g., "us", "gb", "de")  [ Geo-targeting options](https://scrapfly.io/docs/scrape-api/proxy#geo) |
| `proxy_pool` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_proxy_pool) | string | `public_datacenter_pool` (default) or `public_residential_pool`  [ Compare proxy pools](https://scrapfly.io/docs/scrape-api/proxy#proxy-pools) |

#### Example: Login Flow

 ```
{
  "tool": "web_scrape",
  "parameters": {
    "url": "https://web-scraping.dev/login",
    "pow": "obtained_from_instruction_tool",
    "render_js": true,
    "js_scenario": [
      { "fill": { "selector": "input[name='username']", "value": "myuser" } },
      { "fill": { "selector": "input[name='password']", "value": "mypass" } },
      { "click": { "selector": "button[type='submit']" } },
      { "wait_for_navigation": { "timeout": 5000 } }
    ]
  }
}
```

 

   

 

#### Example: LLM Extraction

 ```
{
  "tool": "web_scrape",
  "parameters": {
    "url": "https://web-scraping.dev/products",
    "pow": "obtained_from_instruction_tool",
    "extraction_prompt": "Extract all product names, prices, and ratings as a JSON array"
  }
}
```

 

   

 

 

## `screenshot`

 Capture high-quality screenshots of any webpage. Full page or specific elements using CSS selectors.

[Complete Screenshot API documentation](https://scrapfly.io/docs/scrape-api/screenshot)

#### Parameters

 | Parameter | Type | Description |
|---|---|---|
| **Required Parameters** |
| `url` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_url) | string | Target URL to capture |
| **Optional Parameters** |
| `capture` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_capture) | string | `fullpage` (default) or CSS selector |
| `format` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_format) | string | Image format: `jpg` (default), `png`, `webp`, `gif` |
| `resolution` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_resolution) | string | Screen resolution (default: "1920x1080") |
| `options` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_options) | array | Options: `load_images`, `dark_mode`, `block_banners`, `print_media_format`  [ Options reference](https://scrapfly.io/docs/scrape-api/screenshot#options) |
| `auto_scroll` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_auto_scroll) | boolean | Automatically scroll to load lazy content |
| `wait_for_selector` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_wait_for_selector) | string | CSS selector to wait for before capturing |
| `rendering_wait` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_rendering_wait) | integer | Wait time in milliseconds before capturing |
| `js` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_js) | string | JavaScript to execute before capturing |
| `country` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_country) | string | Proxy location (ISO 3166-1 alpha-2 code)  [ Geo-targeting](https://scrapfly.io/docs/scrape-api/proxy#geo) |
| `cache` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_cache) | boolean | Enable response caching |
| `cache_ttl` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_cache_ttl) | integer | Cache time-to-live in seconds |
| `cache_clear` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_cache_clear) | boolean | Bypass & clear cache for this request |
| `webhook` [  ](https://scrapfly.io/docs/scrape-api/getting-started#api_param_webhook) | string | Webhook to call after the request completes |

#### Example Usage

 ```
{
  "tool": "screenshot",
  "parameters": {
    "url": "https://web-scraping.dev/pricing",
    "capture": ".pricing-table",
    "format": "png",
    "options": ["load_images", "block_banners"]
  }
}
```

 

   

 

 

## `info_account`

 Get real-time information about your Scrapfly account, including subscription details, usage statistics, rate limits, and billing information.

##### Returns:

- **Account** - ID, currency, timezone, status
- **Project** - Name, quota, budget, networks
- **Subscription** - Plan, billing period, concurrency
- **Usage** - Credits, remaining quota, concurrent requests
 

  **No Parameters Required:** This tool uses your authenticated API key and requires no additional parameters. 

#### Example Usage

 ```
{ "tool": "info_account" }
```

 

   

 

 

## `cloud_browser_open`

 Open a Cloud Browser session with Scrapium's **native MCP** enabled. Navigates to a URL, discovers [WebMCP tools](https://scrapfly.io/docs/cloud-browser-api/mcp) exposed by the page, and dynamically registers them as first-class callable tools in your current MCP session.

#### Dynamic WebMCP Tool Discovery

 Scrapium ships with a built-in MCP server that exposes page-registered tools via the [WebMCP](https://scrapfly.io/docs/cloud-browser-api/mcp) standard. When you call `cloud_browser_open`, the Scrapfly MCP server:

1. Allocates a Cloud Browser with MCP feature flags (`DevToolsWebMCPSupport`, `WebMCPTesting`)
2. Queries Scrapium's internal MCP endpoint (`tools/list`) to discover page tools
3. Dynamically registers each tool on the Scrapfly MCP server as `webmcp_{session}_{tool_name}`
4. Sends a `notifications/tools/list_changed` notification to your AI agent
 
 Your agent then sees these page tools in `tools/list` and can call them directly. When the tool is called, the Scrapfly MCP server **proxies the call** to Scrapium's native MCP endpoint — the browser executes the tool in the page context and returns the result.

 ```
Your AI Agent
    │
    │ tools/list → sees: web_scrape, screenshot, ...
    │              + webmcp_abc123_search        ← page tool
    │              + webmcp_abc123_addToCart      ← page tool
    │
    │ tools/call "webmcp_abc123_search"
    │   args: {"query": "laptop"}
    ▼
Scrapfly MCP Server
    │
    │ proxies to Scrapium's MCP endpoint
    ▼
Scrapium (WebMCP)
    │
    │ navigator.modelContext executes search()
    ▼
Page returns structured result
```

  **Tools are page-specific.** Different pages register different WebMCP tools. When you call `cloud_browser_navigate`, the old tools are removed and new ones are discovered on the target page. Not all websites register WebMCP tools — it's an [early-preview standard](https://developer.chrome.com/blog/webmcp-epp) currently adopted by a growing number of sites. 
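The registration and removal steps above can be sketched as a plain tool registry. This is an illustrative sketch only — the shape of the discovered-tool entries here is an assumption, not the exact wire format:

```python
def register_page_tools(registry: dict, session_id: str, discovered: list) -> dict:
    """Register each page tool discovered via Scrapium's tools/list under
    the webmcp_{session}_{tool_name} naming scheme described above."""
    for tool in discovered:
        registry[f"webmcp_{session_id}_{tool['name']}"] = tool["description"]
    return registry


def unregister_page_tools(registry: dict, session_id: str) -> dict:
    """On cloud_browser_navigate or cloud_browser_close, the tools
    registered for the old page are removed again."""
    prefix = f"webmcp_{session_id}_"
    for name in [n for n in registry if n.startswith(prefix)]:
        del registry[name]
    return registry
```

For example, registering a discovered `search` tool for session `a1b2c3` yields a registry entry named `webmcp_a1b2c3_search`, which is what your agent then sees in `tools/list`.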

#### Parameters

 | Parameter | Type | Description |
|---|---|---|
| `url` required | string | Target URL to open in the cloud browser |
| `country` | string | Proxy country code (ISO 3166-1 alpha-2) |
| `proxy_pool` | string | `datacenter` or `residential` |
| `timeout` | integer | Session timeout in seconds (default 900, max 1800) |
| `block_images` | boolean | Stub image requests to save bandwidth |
| `block_styles` | boolean | Stub stylesheet requests |
| `block_media` | boolean | Stub video/audio requests |
| `cache` | boolean | Enable HTTP cache for static resources |
| `debug` | boolean | Enable session recording for replay |

#### Returns

- **session\_id** — use with `cloud_browser_navigate` and `cloud_browser_close`
- **ws\_url** — CDP WebSocket URL (for direct Playwright/Puppeteer use)
- **mcp\_endpoint** — Scrapium's native MCP endpoint (internal, proxied automatically)
- **webmcp\_tools** — list of dynamically discovered page tools, now registered in your `tools/list`
 
#### Example Usage

 ```
{
  "tool": "cloud_browser_open",
  "parameters": {
    "url": "https://shop.example.com/products",
    "country": "us",
    "proxy_pool": "residential"
  }
}
```

 

   

 

#### Example Response

 ```
{
  "session_id": "unblock-a1b2c3-shop.example.com-01ABCD",
  "ws_url": "wss://browser.scrapfly.io/devtools/browser/...",
  "mcp_endpoint": "http://scrapium-agent:1213/mcp",
  "webmcp_tools": [
    {"tool_name": "webmcp_a1b2c3_search", "description": "Search products"},
    {"tool_name": "webmcp_a1b2c3_addToCart", "description": "Add item to cart"},
    {"tool_name": "webmcp_a1b2c3_getFilters", "description": "Get available filters"}
  ]
}
```

 

   

 

 After this response, your agent can call `webmcp_a1b2c3_search` directly via MCP — the Scrapfly server proxies the call to Scrapium's native MCP. [Learn more about Native Browser MCP →](https://scrapfly.io/docs/cloud-browser-api/mcp)

 

## `cloud_browser_navigate`

 Navigate an active Cloud Browser session to a new URL. Re-discovers WebMCP tools on the new page — old page tools are removed and new ones are registered automatically.

#### Parameters

 | Parameter | Type | Description |
|---|---|---|
| `session_id` required | string | Active session ID from `cloud_browser_open` |
| `url` required | string | URL to navigate to |

#### Example Usage

 ```
{
  "tool": "cloud_browser_navigate",
  "parameters": {
    "session_id": "unblock-abc123-example.com-01ABCD",
    "url": "https://shop.example.com/cart"
  }
}
```

 

   

 

 

## `cloud_browser_close`

 Close a Cloud Browser session and release all resources. Any dynamically registered WebMCP tools for this session are removed from `tools/list`.

  **Always close your sessions.** Open sessions continue to consume credits until the timeout expires. Call `cloud_browser_close` when you're done to stop billing immediately. 
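One way to make that guarantee hard to forget is to wrap the open/close pair in a context manager. This is a sketch, not part of the MCP spec: `call_tool(name, params)` stands in for your MCP client's generic tool-invocation function (a hypothetical signature).

```python
from contextlib import contextmanager


@contextmanager
def cloud_browser(call_tool, url, **params):
    """Open a Cloud Browser session and always close it on exit, even if
    the body raises, so the session stops consuming credits immediately."""
    opened = call_tool("cloud_browser_open", {"url": url, **params})
    try:
        yield opened
    finally:
        call_tool("cloud_browser_close", {"session_id": opened["session_id"]})
```

Usage: `with cloud_browser(call_tool, "https://shop.example.com", country="us") as session:` — navigate and call page tools inside the block; the session is released automatically afterwards.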

#### Parameters

 | Parameter | Type | Description |
|---|---|---|
| `session_id` required | string | Session ID to terminate |

#### Example Usage

 ```
{
  "tool": "cloud_browser_close",
  "parameters": {
    "session_id": "unblock-abc123-example.com-01ABCD"
  }
}
```

 

   

 

 

## `cloud_browser_sessions`

 List all running Cloud Browser sessions for your account.

  **No Parameters Required:** This tool uses your authenticated API key and requires no additional parameters. 

#### Example Usage

 ```
{ "tool": "cloud_browser_sessions" }
```

 

   

 

 

## `check_if_blocked`

 Analyze a scrape result to detect if the page is blocked by an antibot service. Identifies the specific antibot provider and returns actionable recommendations to bypass the protection.

  **Zero Cost:** This tool performs pure local heuristic analysis — no API call is made and no credits are consumed. Use it after any scrape to verify the response is not a block page. 

#### Supported Antibot Services

- **Cloudflare** — UAM challenges, 1020 denied, Turnstile
- **DataDome** — captcha, slider challenges
- **PerimeterX** — px-captcha, human verification
- **Akamai** — Bot Manager challenges
- **Kasada** — KP SDK challenges
- **Imperva / Incapsula** — session challenges
- **AWS WAF** — WAF blocks
- **Vercel** — attack mode rate limiting
- **Anubis** — proof-of-work challenges
- **F5 Shape Security** — bot defense
 

#### Parameters

 | Parameter | Type | Description |
|---|---|---|
| **Required Parameters** |
| `content` | string | Page content (HTML/text) from a scrape result. Use `raw` or `clean_html` format for best detection accuracy. |
| **Optional Parameters** |
| `status_code` | integer | HTTP status code from the scrape result (e.g. `403`, `429`, `503`). Improves detection accuracy. |
| `response_headers` | object | Response headers from the scrape result. Enables header-based antibot detection (e.g. `cf-mitigated`, `x-datadome`). |

#### Example Usage

After scraping a page, pass the result to `check_if_blocked`:

 ```
{
  "tool": "check_if_blocked",
  "parameters": {
    "content": "<title>Attention Required! | Cloudflare</title>...",
    "status_code": 403,
    "response_headers": {
      "cf-mitigated": "challenge",
      "cf-ray": "abc123-IAD"
    }
  }
}
```

 

   

 

#### Example Response

 ```
{
  "is_blocked": true,
  "antibot": "cloudflare",
  "block_type": "challenge",
  "confidence": "high",
  "details": "Cloudflare challenge detected via cf-mitigated header.",
  "recommendation": "Enable asp=true with render_js=true. If still blocked, try residential proxy pool."
}
```

 

   

 

#### Example: Not Blocked

 ```
{
  "is_blocked": false,
  "confidence": "high",
  "details": "No antibot blocking detected. The page content appears to be legitimate.",
  "recommendation": "No action needed — the page was fetched successfully."
}
```

 

   

 

  **Tip:** For best results, scrape with `format: "raw"` or `format: "clean_html"`. The `markdown` format strips HTML tags that contain antibot detection signals. Status code and header detection works regardless of content format. 

 

 

---

## Response Format

All scraping tools (`web_get_page` and `web_scrape`) return responses in this format:

 ```
{
  "content": "The scraped content in requested format",
  "status_code": 200,
  "content_type": "text/html; charset=utf-8",
  "extraction_result": { /* Only if extraction_model or extraction_prompt was used */ },
  "screenshots": { /* Only if screenshots were captured */ },
  "errors": null  // or error object if request failed
}
```
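A client consuming this shape might check `errors` before reading `content`; a minimal sketch following the field names above:

```python
def read_scrape_response(response: dict) -> str:
    """Return the scraped content, raising if the request failed.
    Per the format above, `errors` is null on success."""
    if response.get("errors"):
        err = response["errors"]
        raise RuntimeError(f"{err.get('code')}: {err.get('message')}")
    return response["content"]
```

On a failed response, the raised message carries the error `code`, which maps to an entry in the error reference.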

 

   

 

## Error Handling

 When a request fails, the `errors` field contains detailed information. [ View complete error reference](https://scrapfly.io/docs/scrape-api/errors)

#### Example Error Response

 ```
{
  "errors": {
    "code": "ERR::ASP::SHIELD_PROTECTION_FAILED",
    "message": "Anti-scraping protection failed after retries",
    "http_code": 422,
    "retryable": true,
    "doc_url": "https://scrapfly.io/docs/scrape-api/errors#asp-shield"
  }
}
```
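Because the error object carries a `retryable` flag, a client can retry only transient failures. A sketch, where `scrape` stands in for whatever function issues the tool call (a hypothetical signature):

```python
import time


def scrape_with_retry(scrape, params, max_attempts=3, backoff_s=1.0):
    """Retry a scrape only while the error object reports retryable: true."""
    for attempt in range(1, max_attempts + 1):
        response = scrape(params)
        err = response.get("errors")
        if not err:
            return response
        if not err.get("retryable") or attempt == max_attempts:
            raise RuntimeError(f"{err.get('code')}: {err.get('message')}")
        time.sleep(backoff_s * attempt)  # linear backoff between attempts
```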

 

   

 

 

 

 

 

### Common Error Scenarios

- **Retryable errors** - Transient failures, retried automatically
- **Non-retryable errors** - Invalid parameters or quota exceeded; require configuration changes
- **Rate limits** - Concurrency limits; check `info_account`
- **ASP errors** - [Anti-scraping protection failures](https://scrapfly.io/docs/scrape-api/errors#asp)
- **Proxy errors** - [Proxy connection issues](https://scrapfly.io/docs/scrape-api/errors#proxy)
- **Throttle errors** - [Rate limiting and quota issues](https://scrapfly.io/docs/scrape-api/errors#throttle)

 

## Billing & Cost Optimization

 Each scraping request consumes credits based on features used. [ Complete billing guide](https://scrapfly.io/docs/scrape-api/billing)

| Feature | Credits | Notes |
|---|---|---|
| Base cost | 1-3 | Simple requests |
| JavaScript rendering | +5 | Headless browser |
| [ASP](https://scrapfly.io/docs/scrape-api/anti-scraping-protection#pricing) | +10-30 | Anti-scraping protection |
| [Residential proxies](https://scrapfly.io/docs/scrape-api/proxy#pricing) | +25 | High success rate |
| Screenshots | +5 | Each capture |

**Cost Optimization Tips:**

- Use `web_get_page` for simple requests instead of `web_scrape`
- Start with [datacenter proxies](https://scrapfly.io/docs/scrape-api/proxy#proxy-pools), escalate to residential only if needed
- Disable `render_js` for static pages
- Use [caching](https://scrapfly.io/docs/scrape-api/cache) for frequently accessed pages
- Check `scraping_instruction_enhanced` for optimal configurations
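Since the feature costs above are additive per request, a rough estimate is simple arithmetic. This sketch uses the figures listed on this page; actual billing can differ by plan and target site:

```python
def estimate_credits(render_js=False, asp=False, residential=False, screenshots=0):
    """Return a (low, high) per-request credit range from the costs above."""
    low, high = 1, 3                     # base cost, simple requests
    if render_js:
        low, high = low + 5, high + 5    # headless browser rendering
    if asp:
        low, high = low + 10, high + 30  # anti-scraping protection
    if residential:
        low, high = low + 25, high + 25  # residential proxy pool
    return low + 5 * screenshots, high + 5 * screenshots


# JS rendering + ASP + residential proxies:
print(estimate_credits(render_js=True, asp=True, residential=True))  # (41, 63)
```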
 
 

 



 

 

## Next Steps

- [See real-world examples](https://scrapfly.io/docs/mcp/examples) using these tools
- [Set up authentication](https://scrapfly.io/docs/mcp/authentication) for your MCP client
- [Learn about the underlying Scrape API](https://scrapfly.io/docs/scrape-api/getting-started)
- [Learn about Native Browser MCP](https://scrapfly.io/docs/cloud-browser-api/mcp) and WebMCP tool discovery
- [Read the FAQ](https://scrapfly.io/docs/mcp/faq) for common questions