// PRODUCT

# The Best AI Browser Agent

Automate any website with Scrapfly's AI browser agent. Give it a goal in natural language and get a stealth headless Chrome agent that plans, clicks, types, and extracts. No selectors to write or maintain.

## Plan and execute from a single prompt. Self-healing when the DOM changes.

- **JSON output by design.** The agent returns structured data - not raw HTML, not screenshots - so your pipeline stays clean.
- **Runs on Scrapium stealth Chromium.** Your agent keeps moving through sites that block ordinary headless browsers.
 
 [ Get Free API Key ](https://scrapfly.io/register) [ Developer Docs ](https://scrapfly.io/docs/cloud-browser-api/getting-started) 

 1,000 free credits. No credit card required. 

 






 

 

---

**3** frameworks supported out of the box

**0** selectors to write or maintain

**<1s** browser startup time on demand

**JSON** structured output, not raw HTML

---

CAPABILITIES

## One Goal. The Agent Figures Out the Rest.

Plan-and-execute loop, self-healing, JSON output, stealth browser, session reuse, and full observability.

 

 ### Agent Pipeline - From Instruction to Result

You give a goal in plain English. The agent reasons through what needs to happen, builds an action plan, executes it inside a stealth browser, checks its own work, and hands you structured JSON. No selectors to write. No brittle scripts to maintain.

  **Instruction** natural language goal - "log in and download the monthly invoice" 

 

  **Reasoning (LLM)** vision + DOM + accessibility tree - the model reads the page like a human 

 

  **Action Plan** navigate, click, fill, scroll, wait - broken into discrete verifiable steps 

 

  **Browser Execution** Scrapium stealth Chromium - coherent fingerprints, residential proxies, puzzle-captcha solved server-side 

 

  **Verification** agent inspects result, replans on partial failure, retries without extra cost 

 

  **Result** structured JSON + screenshots + full trace log - pipeline-ready, no post-processing needed 

 

 

  **Click** buttons, links, checkboxes 

  **Fill** forms, search, login 

  **Extract** implicit schema from context 

  **Navigate** multi-page, conditional paths 

 

natural language goal

multi-step reasoning

automatic replanning

JSON structured output
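The pipeline above (instruction → reasoning → plan → execution → verification → result) can be sketched as a toy plan-and-execute loop. The function bodies here are illustrative stand-ins, not Scrapfly API calls:

```python
# Hypothetical sketch of the plan-and-execute loop described above.
# plan(), execute_step(), and verify() are illustrative placeholders.

def plan(goal):
    """An LLM would turn the goal into discrete, verifiable browser actions."""
    return [
        {"action": "navigate", "target": "https://web-scraping.dev/login"},
        {"action": "fill", "target": "#user", "value": "demo"},
        {"action": "click", "target": "button[type=submit]"},
        {"action": "extract", "target": "invoice table"},
    ]

def execute_step(step):
    """A stealth browser session would perform the action here."""
    return {"step": step["action"], "ok": True}

def verify(result):
    """The agent checks its own work; a failure would trigger a replan."""
    return result["ok"]

def run_agent(goal):
    results = []
    for step in plan(goal):
        result = execute_step(step)
        if not verify(result):
            # on partial failure the agent replans from here
            continue
        results.append(result)
    # structured output, ready for a pipeline
    return {"goal": goal, "steps": results}

output = run_agent("log in and download the monthly invoice")
```

The key property is that each step is small enough to verify on its own, so a failed step costs one replan rather than the whole run.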

 

 



 

 

 ### Stealth Browser Baked In

The agent runs inside Scrapium, Scrapfly's patched stealth Chromium. TLS, HTTP/2, Canvas, WebGL, AudioContext, and Navigator signals form a coherent fingerprint. Anti-bot systems see a real desktop browser from a real location, not a headless bot.

  **4,000+** signals patched 

  **190+** proxy countries 

  **puzzle captcha** solved server-side 

 

  **JA3/JA4** TLS fingerprint 

  **Canvas + WebGL** GPU coherence 

  **Behavioral** mouse + scroll timing 

  **Timezone** aligned to proxy exit 

 

[Cloudflare](https://scrapfly.io/bypass/cloudflare)

[DataDome](https://scrapfly.io/bypass/datadome)

[Akamai](https://scrapfly.io/bypass/akamai)

[PerimeterX](https://scrapfly.io/bypass/perimeterx)

 

[View Scrapium docs →](https://scrapfly.io/scrapium)

 



 

 ### Self-Healing When the DOM Changes

When a modal blocks the path, a button moves, or a label changes, the agent detects the obstacle and adapts. Traditional selector scripts break silently. This agent tells you what it encountered and continues.

**adaptive** to DOM changes

**modal** aware

**zero** maintenance

 

---

 ### JSON Output by Design

The agent returns structured JSON - not raw HTML, not screenshots to OCR. Define the output schema in your prompt and receive clean, typed data ready for your pipeline.

schema-defined output

typed fields

no HTML parsing

pipeline-ready
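Because the output is schema-defined, validating it downstream takes a few lines of stdlib Python. The field names below are illustrative, mirroring a schema you might declare in your prompt:

```python
import json

# An agent's JSON response for a product extraction. The field names
# are illustrative — they match whatever schema your prompt declares.
raw = '{"name": "Widget Pro", "price": 49.99, "in_stock": true, "rating": 4.7}'

EXPECTED_FIELDS = {"name": str, "price": float, "in_stock": bool, "rating": float}

record = json.loads(raw)

# Fail fast if the agent's output ever drifts from the declared schema.
for field, typ in EXPECTED_FIELDS.items():
    if not isinstance(record.get(field), typ):
        raise TypeError(f"field {field!r} is not {typ.__name__}")
```

No HTML parsing, no OCR: the contract is the schema, and a type check is all the post-processing left to do.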

 

 



 

 

 ### Connect Any Agent Framework

One CDP WebSocket URL. Browser Use, Stagehand, Vibium, or a raw CDP loop - they all connect the same way. You bring your LLM key and model choice; Scrapfly provides the stealth browser runtime.

[Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)

[Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

Vibium

[raw CDP](https://scrapfly.io/docs/cloud-browser-api/cdp-reference)
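The "raw CDP" option is plain JSON-RPC over that WebSocket URL. A minimal sketch of the message framing, using standard Chrome DevTools Protocol method names (the live WebSocket send/receive is omitted here):

```python
import itertools
import json

_ids = itertools.count(1)

def cdp_message(method, **params):
    """Frame a Chrome DevTools Protocol command as the JSON payload
    you would send over the CDP WebSocket."""
    return json.dumps({"id": next(_ids), "method": method, "params": params})

# A tiny navigate-then-capture flight, expressed as raw CDP commands:
navigate = cdp_message("Page.navigate", url="https://web-scraping.dev/products")
screenshot = cdp_message("Page.captureScreenshot", format="png")
```

Every framework in the list above ultimately reduces to messages of this shape; the `id` field is how responses are matched back to commands.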

 

---

#### Agentic pipeline connections

[MCP Server](https://scrapfly.io/products/mcp-cloud)

[Cloud Browser API](https://scrapfly.io/docs/cloud-browser-api/getting-started)

[AI Web Scraping API](https://scrapfly.io/products/web-scraping-api)

[Extraction API](https://scrapfly.io/products/extraction-api)

 

 



 

 ### Full Trajectory Observability

Every agent run produces a `log_url`. Inspect the complete trace - every step, every DOM snapshot, every network request the browser made. Debug what the agent actually did, not what you expected it to do.

  **Step log** every action recorded 

  **Screenshots** at each step 

  **Network trace** every browser request 

 

---

 ### Persistent Sessions for Stateful Workflows

Log in once, reuse the session across agent runs. Cookies and local storage persist through the `session` parameter. Multi-step authenticated workflows stay coherent across invocations.

**cookies** persisted across runs

**localStorage** and IndexedDB

**no re-auth** on each invocation
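A sketch of what that wiring can look like, assuming the `session` value travels as a query parameter on the CDP WebSocket URL (the base URL shape here is illustrative; in practice the SDK's `BrowserConfig` builds it for you):

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def with_session(cdp_url, session_name):
    """Append (or replace) the `session` query parameter on a CDP
    WebSocket URL so cookies and storage persist across runs."""
    parts = urlparse(cdp_url)
    query = dict(parse_qsl(parts.query))
    query["session"] = session_name
    return urlunparse(parts._replace(query=urlencode(query)))

# Illustrative base URL — the real one comes from BrowserConfig.
base = "wss://browser.scrapfly.io/cdp?key=YOUR_API_KEY"
url = with_session(base, "invoice-bot")
```

Reusing the same session name across runs is what lets the second invocation skip the login step entirely.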

 

 



 

 

 ### Auto-Scaling Browsers

Browsers spin up on demand and shut down when the task completes. No pool to pre-provision, no capacity planning. Run one agent or many concurrently.

**on-demand** spin-up

**idle** time is free

 

 



 

 ### Conditional Logic and Multi-Page Flows

The agent navigates paginated lists, follows conditional branches ("if the coupon field is visible, apply it"), and handles pop-ups, overlays, and redirects without manual scripting.

pagination

conditional paths

pop-up handling

redirect-aware

 

 



 

 ### Built for Agentic Pipelines

Plug into Claude, GPT, Gemini, or any custom agent loop via MCP or direct API. The browser agent acts as a tool your orchestrator calls when it needs to interact with a live website.

[View MCP Server docs →](https://scrapfly.io/products/mcp-cloud)

[View Cloud Browser API docs →](https://scrapfly.io/products/cloud-browser-api)

[View AI Web Scraping API docs →](https://scrapfly.io/products/web-scraping-api)

 

 



 

 

 ### Built for the Workflows Selectors Can't Handle

When a target changes layout, gates content behind login, or requires dynamic interaction across multiple pages, selector-based scrapers stall. The AI Browser Agent adapts to what's on screen rather than what you scripted.

login flows

form fills

checkout automation

research assistants

QA bots

multi-page navigation

data extraction

workflow automation

 

 



 

 

 

---

CODE

## Drive Autonomous Agents With Stealth

Works with browser-use, Stagehand, Vibium, or any CDP-compatible agent framework.

 

 [ browser-use (Python) ](#aiba-strat-browser-use) [ Stagehand (TypeScript) ](#aiba-strat-stagehand) [ Vibium (Python) ](#aiba-strat-vibium) 

Plan-and-execute agent loop with [Cloud Browser API](https://scrapfly.io/products/cloud-browser-api) under the hood.

     Python  

  

```python
import asyncio

from browser_use import Agent, Browser, ChatBrowserUse
from scrapfly import BrowserConfig

# Build the CDP WebSocket URL via the Scrapfly Python SDK.
# `BrowserConfig` handles encoding, defaults, and the Cloud Browser host.
cdp_url = BrowserConfig(
    proxy_pool="residential",
    country="us",
    os="windows",
).websocket_url(api_key="YOUR_API_KEY")

# connect browser-use to Scrapfly's stealth Cloud Browser
browser = Browser(cdp_url=cdp_url)

agent = Agent(
    task="scrape the product data from https://web-scraping.dev/products",
    llm=ChatBrowserUse(),
    browser=browser,
)


async def main():
    await agent.run()


asyncio.run(main())
```

 

 

 [ Python SDK docs → ](https://scrapfly.io/docs/sdk/python) 

 

Deterministic + LLM hybrid agent. Uses Scrapfly CDP for stealth.

     TypeScript  

  

```typescript
import { Stagehand } from "@browserbasehq/stagehand";
import { BrowserConfig } from "scrapfly-sdk";

async function main() {
  // Build the CDP WebSocket URL via the Scrapfly TypeScript SDK.
  // `BrowserConfig` handles encoding, defaults, and the Cloud Browser host.
  const cdpUrl = new BrowserConfig({
    proxy_pool: "residential",
    country: "us",
    os: "windows",
  }).websocketUrl("YOUR_API_KEY");

  // initialize Stagehand against Scrapfly's stealth Cloud Browser over CDP
  const stagehand = new Stagehand({
    env: "LOCAL",  // LOCAL = bring-your-own-CDP (Scrapfly), not Browserbase
    localBrowserLaunchOptions: { cdpUrl },
  });

  await stagehand.init();

  // create an agent with specific model configuration
  const agent = stagehand.agent({
    model: {
      modelName: "openai/computer-use-preview",
      apiKey: "YOUR_OPENAI_API_KEY",
    },
    systemPrompt: "You are an AI browser automation agent. Follow instructions precisely.",
  });

  const task = `
    Go to https://web-scraping.dev/products
    Extract the product names and prices
    Return the data as JSON
  `;

  const result = await agent.execute(task);
  console.log("Agent workflow result:", result);

  await stagehand.close();
}

main();
```

 

 

 [ TypeScript SDK docs → ](https://scrapfly.io/docs/sdk/typescript) 

 

Simpler agent API, extract-oriented. Works over the same Scrapfly CDP URL.

     Python  

  

```python
from vibium import Browser
from scrapfly import BrowserConfig

# Build the CDP WebSocket URL via the Scrapfly Python SDK.
# `BrowserConfig` handles encoding, defaults, and the Cloud Browser host.
cdp_url = BrowserConfig(
    proxy_pool="datacenter",
).websocket_url(api_key="YOUR_API_KEY")

# Connect Vibium to Scrapfly's stealth Cloud Browser
browser = Browser(
    cdp_url=cdp_url,
    llm_provider="openai",
    llm_model="gpt-4o",
)

# Navigate to a page
browser.go("https://web-scraping.dev/products")

# Use natural language to interact and extract data
data = browser.extract("Get the main heading and first paragraph")
print("Result:", data)

browser.close()
```

 

 

 [ Python SDK docs → ](https://scrapfly.io/docs/sdk/python) 

 

 

 

---

LEARN

## Docs, Guides, and Real Agent Examples

Everything you need to ship your first AI browser agent.

 

 ### Getting Started

Connect Browser Use or Stagehand in under five minutes, with a working agent example.

 [ View getting started docs → ](https://scrapfly.io/docs/cloud-browser-api/getting-started) 



 

 ### MCP Server

Expose the browser agent as an MCP tool for Claude, GPT, or any agentic pipeline.

 [ View MCP Server docs → ](https://scrapfly.io/products/mcp-cloud) 



 

 ### Cloud Browser API

Direct CDP WebSocket access to Scrapium stealth Chromium for Playwright, Puppeteer, and Selenium.

 [ View Cloud Browser API docs → ](https://scrapfly.io/products/cloud-browser-api) 



 

 ### AI Web Scraping API

Schema-based extraction without an agent loop. Structured JSON from any URL in one API call.

 [ View AI Web Scraping API docs → ](https://scrapfly.io/products/web-scraping-api) 



 

 

 

---

// INTEGRATIONS

## Seamlessly integrate with frameworks & platforms

Plug Scrapfly into your favorite tools, or build custom workflows with our first-class SDKs.

 ### No-code automation

 [  Zapier ](https://scrapfly.io/integration/zapier) [  Make ](https://scrapfly.io/integration/make) [  n8n ](https://scrapfly.io/integration/n8n) 

 

### LLM & RAG frameworks

 [  LlamaIndex ](https://scrapfly.io/integration/llamaindex) [  LangChain ](https://scrapfly.io/integration/langchain) [  CrewAI ](https://scrapfly.io/integration/crewai) 

 

### First-class SDKs

- [Python](https://scrapfly.io/docs/sdk/python): `pip install scrapfly-sdk`
- [TypeScript](https://scrapfly.io/docs/sdk/typescript): Node, Deno, Bun
- [Go](https://scrapfly.io/docs/sdk/golang): `go get scrapfly-sdk`
- [Rust](https://scrapfly.io/docs/sdk/rust): `cargo add scrapfly-sdk`
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy): full-feature extension

 

 

 [ See all integrations  ](https://scrapfly.io/integration) 

 

---

FAQ

## Frequently Asked Questions

 

  ### What is Scrapfly's AI Browser Agent?

 It is cloud browser infrastructure built specifically for AI-driven automation frameworks. You give the agent a goal in natural language - "extract all product prices from this category page" - and it plans the browser steps, executes them inside a stealth Chromium instance, and returns structured JSON. Works with Browser Use (Python), Stagehand (TypeScript), Vibium, or any CDP-compatible agent loop.

 

   ### How is this different from regular browser automation?

 Traditional Playwright or Puppeteer scripts require hard-coded CSS or XPath selectors that break every time the site changes. An AI browser agent reasons about the page like a human - it reads labels, headings, and visible text to decide what to click or extract. When the DOM changes, it adapts instead of throwing an error.

 

   ### Which AI framework should I use?

 **Browser Use** is the best choice for Python developers who want full multi-step agent automation with model-agnostic LLM support (GPT-4, Claude, Gemini, etc.). **Stagehand** suits TypeScript developers who prefer a hybrid AI and imperative model with caching to reduce LLM costs. **Vibium** offers a minimal one-liner API for quick extractions. All three connect to Scrapfly via the same CDP WebSocket URL.

 

   ### Do I need my own LLM API key?

 Yes. The agent frameworks (Browser Use, Stagehand, Vibium) use your LLM key for reasoning. Scrapfly provides the browser infrastructure - stealth Chromium, proxy pool, session management - while you choose the AI model. This lets you control LLM costs and pick the model best suited to your task.

 

   ### How does the agent stay undetected on protected sites?

 The agent runs inside Scrapium, Scrapfly's patched stealth Chromium. Scrapium presents coherent TLS, HTTP/2, Canvas, WebGL, and AudioContext fingerprints - indistinguishable from a real desktop browser. Integrated residential proxies across 190+ countries are available via a single URL parameter. Cloudflare, DataDome, Akamai, PerimeterX, and similar systems see a human-like session.

 

   ### How does pricing work?

 You pay for browser time and bandwidth, not AI complexity. Browser time is billed per 30-second interval of active session. Bandwidth is billed per MB (datacenter or residential pool). LLM API costs are separate and billed directly by your AI provider. There are no per-task fees or fixed seats.
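Interval billing makes run cost easy to estimate. For example, a 95-second active session spans four 30-second intervals (each started interval is counted in full):

```python
import math

def billed_intervals(active_seconds, interval_s=30):
    """Active session time is billed per started 30-second interval."""
    return math.ceil(active_seconds / interval_s)

# A 95-second active session spans four 30s intervals:
intervals = billed_intervals(95)
```

Idle time between connections is free, so only seconds the browser actually spends working count toward `active_seconds`.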

 

   ### Can I maintain session state across multiple agent runs?

 Yes. Pass a `session` parameter in the WebSocket URL and the browser preserves cookies, local storage, and IndexedDB between connections. Your agent can log in on the first run and pick up the authenticated session on every subsequent run - no re-authentication needed.

 

  

 

  ---

// PRICING

## Usage-based pricing. Only pay for active browser time.

Browser time and bandwidth only. No per-task fees, no minimum seats. Volume discounts kick in automatically as your agents scale.

 

**Free tier** 1,000 free credits on signup. No credit card required.

 

 

**Pay on success** Credits are consumed per 30s of active session. Idle time is free.

 

 

**No lock-in** Upgrade, downgrade, or cancel anytime. No contract.

 

 

 

 [ See pricing  ](https://scrapfly.io/pricing) [ Start free ](https://scrapfly.io/register) 

 

 

### Need more from the Scrapfly stack?

 Every agent session runs on [Scrapium](https://scrapfly.io/scrapium), our stealth Chromium fork patched at the C++ level. For direct Playwright / Puppeteer / Selenium control use [Browser API](https://scrapfly.io/products/cloud-browser-api); for structured JSON without an agent loop use [Extraction API](https://scrapfly.io/products/extraction-api); for high-volume scraping with [anti-bot bypass](https://scrapfly.io/bypass) use [Web Scraping API](https://scrapfly.io/products/web-scraping-api); for LLM tool-use via Model Context Protocol use [MCP Cloud](https://scrapfly.io/products/mcp-cloud); for raw HTTP with perfect TLS fingerprints use [Curlium](https://scrapfly.io/curlium). Not sure what a target uses? [Scan it first](https://scrapfly.io/products/antibot-detector).

 

[Get Free API Key](https://scrapfly.io/register) 1,000 free credits. No card.