PRODUCT

# The Best Browser MCP for AI Agents

 Give your AI agent Scrapfly's full scraping infrastructure with our hosted MCP server. Connect Claude Desktop, Cursor, Cline, Windsurf, and 9 other MCP clients with one URL. Zero boilerplate, zero local server install.

## One URL to connect any agent. 5 tools covering every scraping use case.

- **Works with every MCP client.** Claude Desktop, Cursor, Cline, Windsurf, Zed, Roo Code, VS Code, LangChain, CrewAI - no extra setup.
- **Hosted and managed by Scrapfly.** Anti-bot bypass, residential proxies, JavaScript rendering, and AI extraction, all accessible from natural language tool calls.
 
 [ Get Free API Key ](https://scrapfly.io/register) [ Developer Docs ](https://scrapfly.io/docs/mcp/getting-started) 

 1,000 free credits. No credit card required. 

 





[ Try in Browser Playground ](https://scrapfly.io/products/mcp-cloud/playground)

 

 

---

- **10+** MCP-compatible clients supported
- **5** scraping tools exposed via MCP
- **0** local processes to run or maintain
- **5B+** scrapes/month powering the same infrastructure

---

CAPABILITIES

## Everything Your Agent Needs to Scrape the Web

Scrape, screenshot, extract, and crawl - all via natural language tool calls. No parsing, no proxies to manage, no anti-bot guesswork.

 

 ### From Agent Prompt to Scraped Data

Your agent calls a tool. Scrapfly handles the rest. The MCP protocol carries the tool call from any client (Claude, Cursor, LangChain) to the Scrapfly cloud, where scraping, anti-bot bypass, JS rendering, and proxy routing run server-side. The result travels back through MCP to your agent as clean markdown or structured JSON. No local server, no proxy config, no browser to launch.

1. **AI Agent / MCP Client.** Claude, Cursor, Cline, Windsurf, LangChain, CrewAI - any MCP-compatible client.
2. **MCP Protocol.** JSON-RPC 2.0 over streamable HTTP transport; tool discovery via `tools/list`.
3. **Scrapfly Tool Registry.** `web_get_page`, `web_scrape`, `screenshot`, `info_account`, `scraping_instruction`.
4. **Scrapfly Execution Layer.** Anti-bot bypass, JS rendering, proxy rotation, challenge solving - all server-side.
5. **Target Website.** Sees a real browser on a real residential IP, not a bot.
6. **Result to Agent.** Clean markdown or structured JSON, plus a `log_url` for every call.

 

 

- **5 tools** in the registry
- **10+ clients** supported
- **0 local** processes needed
- **1 API key** shared with the direct API

 

[View MCP tool reference →](https://scrapfly.io/docs/mcp/tools)
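Under the hood, every MCP client performs the same JSON-RPC handshake over this transport. Here is a minimal sketch using the official `mcp` Python SDK; the server URL pattern and the `web_get_page` tool name come from this page, while the `url` argument name is an assumption to verify against the tool reference.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

MCP_URL = "https://mcp.scrapfly.io/mcp?key=YOUR_API_KEY"

async def main() -> None:
    # Open the streamable HTTP transport, then an MCP session on top of it.
    async with streamablehttp_client(MCP_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Tool discovery: the same tools/list call every MCP client issues.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Zero-config fetch; "url" is an assumed argument name.
            result = await session.call_tool(
                "web_get_page", {"url": "https://web-scraping.dev/products"}
            )
            print(result.content)

asyncio.run(main())
```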

 



 

 

 ### Scraping Tools - Full Control or Zero Config

`web_get_page` is the fast path: ask your agent to fetch a page and receive clean markdown with sensible defaults applied. `web_scrape` exposes the full Scrape API surface: JS rendering, anti-bot bypass, proxy pool selection, custom headers, POST bodies, and session management. Both return clean markdown. `web_scrape` also returns `browser_data` (XHR calls, localStorage, WebSocket frames) when JS rendering is active.

- **`web_get_page`** zero-config fetch
- **`web_scrape`** full-parameter control
- **`asp=true`** anti-bot bypass

`render_js` · residential proxies · 190+ countries · custom headers · POST bodies · sticky sessions

 

[View `web_scrape` reference →](https://scrapfly.io/docs/mcp/tools)
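As a sketch of what a full-control call looks like from code, reusing a `ClientSession` like the one opened in the earlier example: the argument names below mirror the direct Scrape API options named above (`asp`, `render_js`, proxy pools, country), but treat them as assumptions until checked against the `web_scrape` reference.

```python
from mcp import ClientSession

async def full_control_scrape(session: ClientSession):
    # Argument names mirror the direct Scrape API options listed on this
    # page; they are assumptions, not a confirmed MCP argument schema.
    return await session.call_tool(
        "web_scrape",
        {
            "url": "https://web-scraping.dev/products",
            "asp": True,                              # anti-bot bypass
            "render_js": True,                        # JavaScript rendering
            "proxy_pool": "public_residential_pool",  # residential proxies
            "country": "us",                          # geo targeting
        },
    )
```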

 



 

 ### screenshot - Visual Page Capture

Give your agent eyes. Capture full-page or viewport screenshots of any URL, including JavaScript-rendered pages. Useful for visual verification, layout monitoring, or building design agents that need to inspect a page visually before acting.

  **full-page** or viewport 

  **dark mode** supported 

  **custom** resolution 

 

JS-rendered pages

base64 output

proxy-routed

anti-bot bypassed

 

[View Screenshot API →](https://scrapfly.io/products/screenshot-api)
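A hedged sketch of driving the `screenshot` tool and saving its base64 output (as the tags above advertise). The `url` and `full_page` argument names, and where the payload sits in the tool result, are assumptions to verify against the docs.

```python
import base64

from mcp import ClientSession

async def capture(session: ClientSession, url: str, path: str = "page.png") -> None:
    # "url" and "full_page" are illustrative argument names, not confirmed.
    result = await session.call_tool("screenshot", {"url": url, "full_page": True})
    # Assumption: the image arrives as a base64 payload in the first content
    # item (ImageContent.data or TextContent.text, depending on the server).
    first = result.content[0]
    payload = getattr(first, "data", None) or getattr(first, "text", "")
    with open(path, "wb") as f:
        f.write(base64.b64decode(payload))
```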

 



 

 

 ### Anti-Bot Bypass on Every Tool Call

Every scraping tool call passes through Scrapfly's anti-bot stack. Pass `asp=true` via `web_scrape` to activate full bypass. The execution layer handles TLS fingerprinting, HTTP/2 SETTINGS frames, behavioral signals, challenge solving, and proxy pool selection. Failed challenge retries do not cost credits.

  **TLS + HTTP/2** fingerprint 

  **Free retries** on challenge fail 

  **Solver** server-side 

 

[Cloudflare](https://scrapfly.io/bypass/cloudflare)

[DataDome](https://scrapfly.io/bypass/datadome)

[Akamai](https://scrapfly.io/bypass/akamai)

[Kasada](https://scrapfly.io/bypass/kasada)

 

[View all bypass targets →](https://scrapfly.io/bypass)

 



 

 ### Full Observability on Every Call

Every scraping tool call returns a `log_url` in the tool result. Open it in the Scrapfly dashboard to inspect the full request: response body, response headers, rendered HTML, captured screenshot, and HAR network waterfall. Replay the exact request with one click. Credit cost is reported per call so agents can account for usage programmatically via `info_account`.

- **`log_url`** in every result
- **HAR** network waterfall
- **1-click** request replay
- **`info_account`** live credit balance

 

response body

rendered HTML

response headers

screenshot capture

 

[View MCP tool docs →](https://scrapfly.io/docs/mcp/tools)
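Since the response shape of `info_account` isn't documented on this page, the sketch below simply prints whatever the tool returns rather than assuming fields:

```python
from mcp import ClientSession

async def report_usage(session: ClientSession) -> None:
    # info_account is one of the five documented tools; print its result
    # raw instead of assuming a response schema.
    result = await session.call_tool("info_account", {})
    for item in result.content:
        print(item)
```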

 



 

 

 ### Hosted, Not Local

No npm package to maintain, no local server to keep running. Scrapfly runs the MCP server. Add the URL to your client config and connect. Updates deploy on Scrapfly's side without any action required from you.

**0 setup** steps locally

 

 



 

 ### One API Key

No separate credential for MCP. Pass your existing Scrapfly API key in the MCP server URL. Credits draw from the same balance as direct API usage. No separate billing tier for MCP access.

**shared** credit pool

 

 



 

 ### Persistent Browser Sessions

Agent workflows often need to stay logged in across multiple tool calls. MCP Cloud supports persistent browser sessions: the browser state (cookies, localStorage, auth tokens) survives across calls within the session window. An agent can log in on one tool call and access authenticated pages on the next without re-authenticating. Session scope is controlled via the `session` parameter on `web_scrape`.

  **Cookies** persist across calls 

  **Auth state** survives tool calls 

  **localStorage** shared in session 

 

[View session docs →](https://scrapfly.io/docs/mcp/tools)
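A sketch of the login-then-fetch pattern described above. The `session` parameter is documented; the other argument names (`url`, `method`, `body`) are illustrative assumptions, though `web_scrape` is documented to support POST bodies.

```python
from mcp import ClientSession

async def login_then_fetch(client: ClientSession) -> None:
    # First call logs in; "method" and "body" are assumed argument names.
    await client.call_tool("web_scrape", {
        "url": "https://web-scraping.dev/login",
        "method": "POST",
        "body": "user=demo&pass=demo",
        "session": "agent-login",         # documented session parameter
    })
    # Second call reuses the same session name, so cookies, localStorage,
    # and auth tokens from the first call carry over.
    page = await client.call_tool("web_scrape", {
        "url": "https://web-scraping.dev/account",
        "session": "agent-login",
    })
    print(page.content)
```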

 



 

 

 ### One URL. Every MCP Client.

Add a single hosted URL to your MCP client config and you're connected. No npm package to install locally, no running processes, no version pinning. Setup guides for every supported client are in the integration docs.

  **AI desktops** Claude, ChatGPT 

  **AI-native IDEs** Cursor, Cline, Windsurf 

  **Frameworks** LangChain, CrewAI 

  **Automation** n8n, Make, Zapier 

 

 [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop) 

 [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor) 

 [Cline](https://scrapfly.io/docs/mcp/integrations/cline) 

 [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf) 

 [Zed](https://scrapfly.io/docs/mcp/integrations/zed) 

 [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code) 

 [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode) 

 [+ more](https://scrapfly.io/docs/mcp/integrations) 

 

[View all integration guides →](https://scrapfly.io/docs/mcp/integrations)

 



 

 

 ### Agent Framework Ready

LangChain, LlamaIndex, CrewAI, and any Python or TypeScript framework that calls MCP tools programmatically can connect directly. Write agentic pipelines that scrape, extract, and act on web data without managing any scraping infrastructure. Scrapfly's native Python and TypeScript SDKs are also available for direct API access outside of MCP if you need lower-level control.

  **LangChain** Python + JS 

  **LlamaIndex** Python 

  **CrewAI** multi-agent 

 

[LangChain guide](https://scrapfly.io/docs/mcp/integrations/langchain)

[LlamaIndex guide](https://scrapfly.io/docs/mcp/integrations/llamaindex)

[CrewAI guide](https://scrapfly.io/docs/mcp/integrations/crewai)

 

[View all framework docs →](https://scrapfly.io/docs/mcp/integrations)

 



 

 ### Related Products

MCP Cloud is the agent-native interface. The underlying APIs are also available directly for non-agent use cases or when you need full parameter control.

 [ **AI Browser Agent** Natural-language browser automation - click, fill, navigate ](https://scrapfly.io/products/ai-browser-agent) 

 [ **Cloud Browser API** CDP access to stealth Chromium via Playwright, Puppeteer, Selenium ](https://scrapfly.io/products/cloud-browser-api) 

 [ **Web Scraping API** Direct API for full-parameter scraping outside of MCP ](https://scrapfly.io/products/web-scraping-api) 

 [ **Scrapfly CLI** Command-line interface for scraping, testing, and automation ](https://scrapfly.io/products/scrapfly-cli) 

 

 



 

 

 

---

CODE

## Wire MCP into Any Client

Hosted MCP server, compatible with Claude Desktop, Cursor, Cline, LangChain, CrewAI and any other MCP-capable client.

 

 [ Claude Desktop ](#mcp-strat-claude-desktop) [ Cursor / Windsurf / Cline ](#mcp-strat-cursor) [ LangChain ](#mcp-strat-langchain) [ CrewAI ](#mcp-strat-crewai) 

Add one JSON block to your Claude Desktop config.


 ```
# ~/.config/claude/claude_desktop_config.json
{
  "mcpServers": {
    "scrapfly": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.scrapfly.io/mcp?key=YOUR_API_KEY"
      ]
    }
  }
}
```

 

 

 [ Claude Desktop setup → ](https://scrapfly.io/docs/mcp/integrations/claude-desktop) 

 

Same hosted MCP URL plugs into every MCP-capable IDE agent.


 ```
# Cursor / Windsurf / Cline — add to your IDE MCP config
{
  "mcpServers": {
    "scrapfly": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.scrapfly.io/mcp?key=YOUR_API_KEY"
      ]
    }
  }
}
```

 

 

 [ Cursor / IDE setup → ](https://scrapfly.io/docs/mcp/integrations/cursor) 

 

Load scraped pages into a LangChain pipeline with Scrapfly's `ScrapflyLoader` document loader (Python and TypeScript below); see the linked guide for MCP tool-use from an agent loop.

**Python:**

 ```
from langchain_community.document_loaders import ScrapflyLoader

loader = ScrapflyLoader(
    urls=["https://web-scraping.dev/products"],
    api_key="YOUR_API_KEY",
    scrape_config={
        "asp": True,
        "render_js": True,
        "proxy_pool": "public_residential_pool",
        "country": "us",
    },
    scrape_format="markdown",
)

documents = loader.load()
for doc in documents:
    print(doc.page_content)
```

**TypeScript:**

 ```
import { ScrapflyLoader } from '@langchain/community/document_loaders/web/scrapfly';

const loader = new ScrapflyLoader('https://web-scraping.dev/products', 'YOUR_API_KEY', {
  scrapeConfig: {
    asp: true,
    render_js: true,
    proxy_pool: 'public_residential_pool',
    country: 'us',
  },
  scrapeFormat: 'markdown',
});

const documents = await loader.load();
for (const doc of documents) {
  console.log(doc.pageContent);
}
```

 

 

[ LangChain MCP setup → ](https://scrapfly.io/docs/mcp/integrations/langchain)

 

Give a CrewAI crew Scrapfly scraping through the `ScrapflyScrapeWebsiteTool` from `crewai_tools`; see the linked guide for exposing the MCP tools directly.


 ```
from crewai import Agent, Task, Crew
from crewai_tools import ScrapflyScrapeWebsiteTool

scrape_tool = ScrapflyScrapeWebsiteTool(api_key="YOUR_API_KEY")

researcher = Agent(
    role="Web Researcher",
    goal="Extract product data from e-commerce pages",
    backstory="Expert web data extractor.",
    tools=[scrape_tool],
)

task = Task(
    description="Extract top products from https://web-scraping.dev/products",
    expected_output="Structured list of products with names and prices.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
```

 

 

 [ CrewAI MCP setup → ](https://scrapfly.io/docs/mcp/integrations/crewai) 

 

 

 

---

LEARN

## Docs, Examples, and Integration Guides

Step-by-step setup guides for every MCP client, framework, and automation platform.

 

 ### MCP Reference

Full documentation for all 5 MCP tools, parameters, and response shapes.

 [ Developer Docs → ](https://scrapfly.io/docs/mcp/tools) 



 

 ### Getting Started

Connect your first MCP client to Scrapfly in under 5 minutes.

 [ Start setup → ](https://scrapfly.io/docs/mcp/getting-started) 



 

 ### Integration Guides

Claude, Cursor, Cline, Windsurf, n8n, LangChain, CrewAI, and more.

 [ See all integrations → ](https://scrapfly.io/docs/mcp/integrations) 



 

 ### Examples

Real-world MCP scraping examples: product data, news, leads, SERP.

 [ Browse examples → ](https://scrapfly.io/docs/mcp/examples) 



 

 

 

---

INTEGRATIONS

## Connect any MCP-compatible client

Claude Desktop, Cursor, Cline, Windsurf, Zed, Roo Code, VS Code, ChatGPT and any MCP-native agent framework. One URL, zero local setup.

 ### AI desktops

 [  Claude Desktop ](https://scrapfly.io/docs/mcp/integrations/claude-desktop) [  ChatGPT ](https://scrapfly.io/docs/mcp/integrations/chatgpt) [  Claude Code ](https://scrapfly.io/docs/mcp/integrations/claude-code) 

 

### AI-native IDEs

 [  Cursor ](https://scrapfly.io/docs/mcp/integrations/cursor) [  Cline ](https://scrapfly.io/docs/mcp/integrations/cline) [  Windsurf ](https://scrapfly.io/docs/mcp/integrations/windsurf) [  Zed ](https://scrapfly.io/docs/mcp/integrations/zed) [  Roo Code ](https://scrapfly.io/docs/mcp/integrations/roo-code) [  VS Code ](https://scrapfly.io/docs/mcp/integrations/vscode) 

 

### Agent frameworks

 [  LangChain ](https://scrapfly.io/integration/langchain) [  LlamaIndex ](https://scrapfly.io/integration/llamaindex) [  CrewAI ](https://scrapfly.io/integration/crewai) [  Vapi ](https://scrapfly.io/docs/mcp/integrations/vapi) [  Agent Builder ](https://scrapfly.io/docs/mcp/integrations/agent-builder) 

 

 

 [ See all integrations  ](https://scrapfly.io/integration) 

 

---

FAQ

## Frequently Asked Questions

 

  ### What is Scrapfly MCP Cloud?

 Scrapfly MCP Cloud is a hosted Model Context Protocol (MCP) server that exposes Scrapfly's scraping infrastructure as tools your AI agent can call. Configure it once with a single URL and your API key. From that point, your agent (Claude, Cursor, Cline, LangChain, or any MCP-compatible client) can scrape pages, take screenshots, extract structured data, and check account usage, all via natural language tool calls.

 

   ### Do I need to run anything locally?

 No. Unlike a self-hosted MCP server, Scrapfly MCP Cloud runs entirely on Scrapfly's infrastructure. You add the MCP URL to your client config, and that's it. No npm package to keep updated, no local process to start, no port to expose.

 

   ### Which MCP clients are supported?

Any client that implements the Model Context Protocol standard. Verified integrations include Claude Desktop, Claude Code, Cursor, Cline, Windsurf, Zed, Roo Code, VS Code, and automation platforms like n8n, Make, and Zapier. Framework integrations include LangChain, LlamaIndex, and CrewAI. Full setup guides for each are in [the integrations docs](https://scrapfly.io/docs/mcp/integrations).

 

   ### What tools does the MCP server expose?

 Five tools: `web_get_page` for fast zero-config page retrieval, `web_scrape` for full-control scraping with JS rendering and anti-bot bypass, `screenshot` for visual page capture, `info_account` for real-time credit and usage data, and `scraping_instruction` which gives the agent best-practice guidance for complex targets.

 

   ### Does it share credits with my regular Scrapfly usage?

 Yes. MCP Cloud uses your existing Scrapfly API key, so all requests draw from the same credit balance as any direct API call. There is no separate billing tier for MCP. Credits are consumed only for successful scrapes, same as the rest of the platform.

 

   ### Can I debug MCP scrape failures?

 Yes. Every tool call that triggers a scrape returns a `log_url` in the result. Open it in the Scrapfly dashboard to inspect the full request: response body, cookies, rendered HTML, screenshots, and HAR waterfall. You can also replay the exact request with one click.

 

   ### Can I use MCP Cloud in automated pipelines, not just chat?

 Absolutely. MCP Cloud is designed for both interactive use (chat with Claude, coding with Cursor) and fully automated pipelines via LangChain, CrewAI, LlamaIndex, n8n, or any framework that calls MCP tools programmatically. Scheduled, trigger-based, and always-on workflows are all supported.

 

  

 

  ---

PRICING

## Transparent, usage-based pricing

One plan covers the full Scrapfly platform. Pick a monthly credit budget; every API shares the same credit pool. No per-product lock-in, no surprise line items.

 

**Free tier.** 1,000 free credits on signup. No credit card required.

 

 

**Pay on success.** You only pay for successful requests. Failed calls are free.

 

 

**No lock-in.** Upgrade, downgrade, or cancel anytime. No contract.

 

 

 

 [ See pricing  ](https://scrapfly.io/pricing) [ Start free ](https://scrapfly.io/register) 

 

 

### Want direct API access without MCP?

MCP Cloud is the agent-native interface. Every tool it exposes maps to a direct API: [Web Scraping API](https://scrapfly.io/products/web-scraping-api) for full-parameter control with [anti-bot bypass](https://scrapfly.io/bypass), [Extraction API](https://scrapfly.io/products/extraction-api) for structured JSON, [Screenshot API](https://scrapfly.io/products/screenshot-api) for visual capture, [Crawler API](https://scrapfly.io/products/crawler-api) for multi-page crawls, and [Browser API](https://scrapfly.io/products/cloud-browser-api) for CDP access. Under the hood: [Scrapium](https://scrapfly.io/scrapium) stealth Chromium and [Curlium](https://scrapfly.io/curlium) byte-perfect HTTP. For browser-use / Stagehand agents, see [AI Browser Agent](https://scrapfly.io/products/ai-browser-agent).

 

[Get Free API Key](https://scrapfly.io/register) - 1,000 free credits. No card required.