# The Best Cloud Browser API

 Drive a stealth Chromium browser in the cloud with Scrapfly's cloud browser API. Connect Playwright, Puppeteer, or Selenium over CDP. Anti-detect by default, session persistence, no Docker, no fleet management.

## CDP drop-in. Playwright, Puppeteer, Selenium. Stealth by default via Scrapium.

- **Zero infra overhead.** No Docker images to maintain, no Chrome version upgrades, no browser fleet to scale. Connect and go.
- **Anti-detect built in.** 500+ patched Chromium source files, 30,000+ browser signals from real device profiles, residential proxies in 190+ countries.
 
 [ Get Free API Key ](https://scrapfly.io/register) [ Developer Docs ](https://scrapfly.io/docs/cloud-browser-api/getting-started) 

 Pay only for what you use. No minimum commitment. 

 






 

 

---

## 500+

patched Chromium source files in Scrapium

 



 

## 30k+

browser signals spoofed from real device profiles

 



 

## 190+

countries for geo-located browser sessions

 



 

## <1s

browser ready time on every cold start

 



 

 

 

---

## One CDP Endpoint, Every Capability

Stealth browser, persistent sessions, geo-located IPs, session replay. All behind a WebSocket URL.

 

 ### CDP Drop-In: Playwright, Puppeteer, Selenium, Stagehand

Change one line in your existing script. Use the Scrapfly SDK to build a typed `BrowserConfig`, then feed the resulting URL into `connect_over_cdp()`, `puppeteer.connect()`, `chromedp.NewRemoteAllocator()`, or `chromiumoxide::Browser::connect()`. Every page interaction, selector, wait, and event handler keeps working exactly as before. Stealth, proxies, and session persistence activate at the connection layer, below your automation code.

  **CDP** native protocol 

  **Stealth** 500+ patches 

  **Session** cookies + storage 

  **190+ countries** residential IPs 

 

  **Your Automation Code** Playwright, Puppeteer, Selenium, Stagehand, browser-use. Unchanged. 

 

  **CDP WebSocket Endpoint** wss://browser.scrapfly.io, auth via API key, session param optional 

 

  **Scrapium Browser** 500+ patched Chromium files, 4,000+ coherent signals. Not JS shims. 

 

  **Session Pool** persistent cookies, localStorage, proxy stickiness across reconnects 

 

  **Residential Proxy** geo-matched exit IP, timezone, locale, and language, all aligned 

 

  **Target Site** sees a real Chrome browser from a real residential location 

 

 

 [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright) 

 [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer) 

 [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium) 

 [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand) 

 

[View getting started docs →](https://scrapfly.io/docs/cloud-browser-api/getting-started)

 



 

 

 ### Stealth by Default, via Scrapium

Every session runs on Scrapium, Scrapfly's custom Chromium fork. 500+ patched source files make the browser undetectable at the signal level. Real device profiles cover Canvas, WebGL, WebGPU, Audio, Fonts, Speech, Media Devices, Screen, and every JS API surface, with no mismatches between layers.

  **500+** patched files 

  **4,000+** signals 

  **real** device profiles 

 

[Fingerprint scanner](https://scrapfly.io/web-scraping-tools/device-fingerprint)

[JA3/JA4 checker](https://scrapfly.io/web-scraping-tools/ja3-fingerprint)

[WebGL](https://scrapfly.io/web-scraping-tools/webgl-fingerprint)

[Canvas](https://scrapfly.io/web-scraping-tools/canvas-fingerprint)

 

[Learn about Scrapium →](https://scrapfly.io/scrapium)

 



 

 ### Persistent Sessions and Auth State

Resume where you left off. Cookie jars and localStorage persist across connections on the same session ID. Log in once, close the connection, and reconnect later to continue from the authenticated state. Pair with proxy stickiness to keep the same exit IP for the full duration of a workflow.

  **Connect with session param** wss://browser.scrapfly.io?key=...&amp;session=my-session 

 

  **Automate and log in** form fills, clicks, MFA. Browser acts on the page normally. 

 

  **State saved automatically** cookies, localStorage, sessionStorage, proxy assignment persisted 

 

  **Reconnect, already authenticated** same session ID restores full browser state, no re-login needed 
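The reconnect flow above can be sketched in a few lines of Python. The URL shape (`key` and `session` query parameters on `wss://browser.scrapfly.io`) is taken from the connection step in this section; in real code the Scrapfly SDK builds and signs this URL for you, so treat the helper below as an illustration of the shape, not the canonical API.

```python
from urllib.parse import urlencode

def session_ws_url(api_key: str, session_id: str) -> str:
    """Build a session-pinned Cloud Browser URL.

    Mirrors the wss://browser.scrapfly.io?key=...&session=... format
    shown above; the official SDKs normally construct this for you.
    """
    return "wss://browser.scrapfly.io?" + urlencode(
        {"key": api_key, "session": session_id}
    )

ws_url = session_ws_url("YOUR_API_KEY", "my-session")
# Reconnecting with the same session ID restores cookies, storage,
# and the sticky proxy assignment:
# with sync_playwright() as p:
#     browser = p.chromium.connect_over_cdp(ws_url)  # already logged in
```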

 

 

[session\_resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)

cookie persistence

proxy stickiness

auth state

 

[View session docs →](https://scrapfly.io/docs/cloud-browser-api/session-resume)

 



 

 

 ### Geo-Located Browsers, 190+ Countries

Set the country at connection time and the browser exits through a residential IP in that region. The browser OS locale, timezone, and language also match, so geo-checks pass at every layer, not just the IP address.

  **190+** countries 

  **residential** exit IPs 

  **locale** matching 

  **timezone** auto-aligned 
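As a rough sketch, pinning the exit country at connection time looks like this. The `country` and `proxy_pool` names mirror the SDK's `BrowserConfig` options used in the code samples further down; the raw-URL form here is illustrative only, since the SDK normally encodes and signs the URL.

```python
from urllib.parse import urlencode

def geo_ws_url(api_key: str, country: str, proxy_pool: str = "residential") -> str:
    """Sketch of a geo-pinned connection URL (illustrative, not canonical)."""
    return "wss://browser.scrapfly.io?" + urlencode(
        {"key": api_key, "country": country, "proxy_pool": proxy_pool}
    )

ws_url = geo_ws_url("YOUR_API_KEY", "de")
# Connect with Playwright/Puppeteer as usual; the browser's locale,
# timezone, and language align with the German residential exit IP.
```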

 

Proxy sticky



Auto rotation



IP cooling



 

 



 

 ### Built-In Recording and Replay

Every session is recorded. Each connection returns a replay URL you can open in the dashboard to watch exactly what the browser saw, including DOM snapshots, network requests, and console output. Debug without guesswork.

  **Replay** visual 

  **DOM** snapshots 

  **Console** log capture 

 

[session replay](https://scrapfly.io/docs/cloud-browser-api/monitoring)

HAR waterfall

 

[View monitoring docs →](https://scrapfly.io/docs/cloud-browser-api/monitoring)

 



 

 

 ### AI Agent Ready

Any CDP-compatible agent framework connects to the same WebSocket endpoint. The browser handles stealth and session continuity so the agent focuses on the task. Tested with browser-use, Stagehand, LangChain, LlamaIndex, and CrewAI.

  **browser-use** LLM agent 

  **Stagehand** TypeScript agent 

  **LangChain** CDP tool 

  **Any CDP agent** same WSS URL 

 

 [browser-use docs](https://scrapfly.io/docs/cloud-browser-api/browser-use) 

 [Stagehand docs](https://scrapfly.io/docs/cloud-browser-api/stagehand) 

 [AI Browser Agent](https://scrapfly.io/products/ai-browser-agent) 

 [Agent overview](https://scrapfly.io/docs/cloud-browser-api/agent-browser) 

 

[View AI agent docs →](https://scrapfly.io/docs/cloud-browser-api/agent-browser)

 



 

 

 ### Native HTML to Markdown

Get LLM-ready content directly from the browser. The Scrapfly-extended `Page.getRenderedContent` CDP method returns the fully-rendered DOM as GitHub-flavored Markdown, converted in the browser process via the Gumbo HTML5 parser — no external HTML cleanup or post-processing needed.

  **HTML or Markdown** format=html|markdown 

  **Iframe inlining** full document tree 

  **URL resolution** relative → absolute 

  **Binary fallback** base64 for PDF/images 
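A minimal sketch of calling the extended method from any CDP session might look like the following. The method name `Page.getRenderedContent` and the `format` values come from this section; the `content` result key is an assumption, so verify the exact response shape against the CDP reference.

```python
def rendered_markdown(cdp_send, fmt: str = "markdown") -> str:
    """Fetch page content via the extended Page.getRenderedContent method.

    cdp_send is any CDP-session send callable (e.g. Playwright's
    CDPSession.send). The 'content' result key is an assumption;
    check the CDP reference for the exact response shape.
    """
    if fmt not in ("html", "markdown"):
        raise ValueError("format must be 'html' or 'markdown'")
    result = cdp_send("Page.getRenderedContent", {"format": fmt})
    return result.get("content", "")

# With Playwright, for example:
# cdp = context.new_cdp_session(page)
# markdown = rendered_markdown(cdp.send)
```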

 

[View CDP reference →](https://scrapfly.io/docs/cloud-browser-api/getting-started)

 



 

 

 ### Zero Infrastructure

No Docker images, no Chrome version upgrades, no pool capacity planning. Browsers auto-scale on demand. Scrapium tracks every Chrome stable release automatically. You write automation code; the fleet runs itself.

  **Auto-update** Chrome releases 

  **Auto-scale** on demand 

 

 



 

 ### Browser Extensions

Upload your own Chrome extensions and attach them to sessions. Useful for custom request interception, ad blocking, or injecting helpers that run inside every page. Scrapium also spoofs the presence of common extensions to pass real-user fingerprint checks.

Load custom

Spoof installed

 

[View CDP reference →](https://scrapfly.io/docs/cloud-browser-api/cdp-reference)

 



 

 ### Sibling Products

The Cloud Browser API gives you raw CDP access to Scrapium. For different abstraction levels, the same infrastructure is available via sibling products.

 [Web Scraping API - all-in-one](https://scrapfly.io/products/web-scraping-api) 

 [Scrapium - stealth browser internals](https://scrapfly.io/scrapium) 

 [Curlium - raw HTTP, no browser](https://scrapfly.io/curlium) 

 [AI Browser Agent - AI-native](https://scrapfly.io/products/ai-browser-agent) 

 

 



 

 

 ### Verify the Stealth Yourself

Run these free tools against a Scrapfly Cloud Browser session to confirm signal consistency before integrating.

 [Fingerprint scanner](https://scrapfly.io/web-scraping-tools/device-fingerprint) 

 [JA3/JA4 checker](https://scrapfly.io/web-scraping-tools/ja3-fingerprint) 

 [WebGL probe](https://scrapfly.io/web-scraping-tools/webgl-fingerprint) 

 [Canvas hash](https://scrapfly.io/web-scraping-tools/canvas-fingerprint) 

 [Audio context](https://scrapfly.io/web-scraping-tools/audio-fingerprint) 

 [HTTP/2 fingerprint](https://scrapfly.io/web-scraping-tools/http2-fingerprint) 

 

[Open fingerprint tools →](https://scrapfly.io/web-scraping-tools/device-fingerprint)

 



 

 ### Transparent Billing

You pay for active browser time (per minute) and bandwidth. Crashed sessions and failed connects are free. No per-request fees, no hidden pool costs, no minimum commitment.

  **Per-minute** browser time 

  **Free** crashed sessions 
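The two dimensions compose as a simple sum. The sketch below uses placeholder rates; actual per-minute and per-GB prices are plan-specific, so take them from the billing docs.

```python
def estimate_session_cost(active_minutes: float, bandwidth_gb: float,
                          per_minute: float, per_gb: float) -> float:
    """Cost = active browser time + bandwidth transferred.

    Crashed sessions and failed connects bill zero minutes, so they
    contribute nothing here.
    """
    return active_minutes * per_minute + bandwidth_gb * per_gb

# Placeholder rates, not real prices -- substitute your plan's rates:
cost = estimate_session_cost(active_minutes=12, bandwidth_gb=0.5,
                             per_minute=0.01, per_gb=0.20)
```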

 

[View billing docs →](https://scrapfly.io/docs/cloud-browser-api/billing)

 



 

 

 

---

## Connect in One Line

Change `launch()` to `connect_over_cdp()`. Everything else stays the same.

 

 [ Playwright ](#ba-strat-playwright) [ Puppeteer ](#ba-strat-puppeteer) [ Selenium ](#ba-strat-selenium) [ AI Agent (browser-use) ](#ba-strat-browser-use) [ AI Agent (Stagehand) ](#ba-strat-stagehand) 

Connect your existing Playwright (or chromedp / chromiumoxide) code to the cloud browser over CDP.

     Python TypeScript Go Rust  

     

 ```
from scrapfly import ScrapflyClient, BrowserConfig
from playwright.sync_api import sync_playwright

client = ScrapflyClient(key="YOUR_API_KEY")

# Configure the Cloud Browser session (proxy pool, OS fingerprint,
# session pinning, block rules, captcha solver, etc.)
config = BrowserConfig(
    proxy_pool="datacenter",
    os="linux",
)

# SDK builds the signed wss:// URL with all options encoded
ws_url = client.cloud_browser(config)

with sync_playwright() as p:
    browser = p.chromium.connect_over_cdp(ws_url)
    context = browser.contexts[0]
    page = context.pages[0] if context.pages else context.new_page()

    page.goto("https://web-scraping.dev/products")
    print(page.title())
    browser.close()
```

 ```
import { ScrapflyClient, BrowserConfig } from 'scrapfly-sdk';
import { chromium } from 'playwright';

const client = new ScrapflyClient({ key: 'YOUR_API_KEY' });

// Configure the Cloud Browser session (proxy pool, OS fingerprint,
// session pinning, block rules, captcha solver, etc.)
const config = new BrowserConfig({
  proxy_pool: 'datacenter',
  os: 'linux',
});

// SDK builds the signed wss:// URL with all options encoded
const wsUrl = client.cloudBrowser(config);

const browser = await chromium.connectOverCDP(wsUrl);
const context = browser.contexts()[0];
const page = context.pages()[0] ?? await context.newPage();

await page.goto('https://web-scraping.dev/products');
console.log(await page.title());
await browser.close();
```

 ```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/chromedp/chromedp"
	scrapfly "github.com/scrapfly/go-scrapfly"
)

func main() {
	client, err := scrapfly.New("YOUR_API_KEY")
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}

	// Configure the Cloud Browser session (proxy pool, OS fingerprint,
	// session pinning, block rules, captcha solver, etc.)
	config := &scrapfly.CloudBrowserConfig{
		ProxyPool: "datacenter",
		OS:        "linux",
	}

	// SDK builds the signed wss:// URL with all options encoded
	wsURL := client.CloudBrowser(config)

	allocCtx, cancel := chromedp.NewRemoteAllocator(context.Background(), wsURL)
	defer cancel()

	ctx, cancel := chromedp.NewContext(allocCtx)
	defer cancel()

	var title string
	if err := chromedp.Run(ctx,
		chromedp.Navigate("https://web-scraping.dev/products"),
		chromedp.Title(&title),
	); err != nil {
		log.Fatalf("chromedp run failed: %v", err)
	}

	fmt.Println(title)
}
```

 ```
use chromiumoxide::Browser;
use futures::StreamExt;
use scrapfly_sdk::{BrowserConfig, Client};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let client = Client::builder().api_key("YOUR_API_KEY").build()?;

    // Configure the Cloud Browser session (proxy pool, OS fingerprint,
    // session pinning, block rules, captcha solver, etc.)
    let config = BrowserConfig {
        proxy_pool: Some("datacenter".into()),
        os: Some("linux".into()),
        ..Default::default()
    };

    // SDK builds the signed wss:// URL with all options encoded
    let ws_url = client.cloud_browser_url(&config);

    // chromiumoxide's Browser::connect takes the WebSocket URL directly
    let (browser, mut handler) = Browser::connect(ws_url).await?;

    let _handle = tokio::spawn(async move { while let Some(_) = handler.next().await {} });

    let page = browser.new_page("https://web-scraping.dev/products").await?;
    let title = page.get_title().await?.unwrap_or_default();
    println!("{}", title);

    browser.close().await?;
    Ok(())
}
```

 

 

 [ Python SDK docs → ](https://scrapfly.io/docs/sdk/python) [ TypeScript SDK docs → ](https://scrapfly.io/docs/sdk/typescript) [ Go SDK docs → ](https://scrapfly.io/docs/sdk/golang) [ Rust SDK docs → ](https://scrapfly.io/docs/sdk/rust) 

 

Drop-in Puppeteer support via `puppeteer.connect({ browserWSEndpoint })`.

     Node.js  

  

 ```
import { ScrapflyClient, BrowserConfig } from 'scrapfly-sdk';
import puppeteer from 'puppeteer-core';

const client = new ScrapflyClient({ key: 'YOUR_API_KEY' });

// Configure the Cloud Browser session (proxy pool, OS fingerprint,
// session pinning, block rules, captcha solver, etc.)
const config = new BrowserConfig({
  proxy_pool: 'datacenter',
  os: 'linux',
});

// SDK builds the signed wss:// URL with all options encoded
const wsUrl = client.cloudBrowser(config);

const browser = await puppeteer.connect({ browserWSEndpoint: wsUrl });
const page = await browser.newPage();

await page.goto('https://web-scraping.dev/products');
console.log(await page.title());
await browser.close();
```

 

 

 [ TypeScript SDK docs → ](https://scrapfly.io/docs/sdk/typescript) 

 

Classic WebDriver workflows, cloud-hosted Chromium.

     Python TypeScript  

   

 ```
# Selenium can't natively connect to a remote CDP WebSocket, so the official
# pattern uses the SDK to build the Cloud Browser URL, then Playwright as the
# CDP transport. Your page logic stays WebDriver-style.
from scrapfly import ScrapflyClient, BrowserConfig
from playwright.sync_api import sync_playwright

client = ScrapflyClient(key="YOUR_API_KEY")

config = BrowserConfig(
    proxy_pool="datacenter",
    os="linux",
    country="us",
)

ws_url = client.cloud_browser(config)

with sync_playwright() as p:
    browser = p.chromium.connect_over_cdp(ws_url)
    context = browser.contexts[0]
    page = context.pages[0] if context.pages else context.new_page()

    page.goto("https://web-scraping.dev/products")
    print(page.title())
    browser.close()
```

 ```
// Selenium can't natively connect to a remote CDP WebSocket, so the official
// pattern uses the SDK to build the Cloud Browser URL, then Playwright as the
// CDP transport. Your page logic stays WebDriver-style.
import { ScrapflyClient, BrowserConfig } from 'scrapfly-sdk';
import { chromium } from 'playwright';

const client = new ScrapflyClient({ key: 'YOUR_API_KEY' });

const config = new BrowserConfig({
  proxy_pool: 'datacenter',
  os: 'linux',
  country: 'us',
});

const wsUrl = client.cloudBrowser(config);

const browser = await chromium.connectOverCDP(wsUrl);
const context = browser.contexts()[0];
const page = context.pages()[0] ?? await context.newPage();

await page.goto('https://web-scraping.dev/products');
console.log(await page.title());
await browser.close();
```

 

 

 [ Python SDK docs → ](https://scrapfly.io/docs/sdk/python) [ TypeScript SDK docs → ](https://scrapfly.io/docs/sdk/typescript) 

 

Attach an LLM agent to the cloud browser. Works with [AI Browser Agent](https://scrapfly.io/products/ai-browser-agent) out of the box.

     Python  

  

 ```
import asyncio

from browser_use import Agent, Browser, ChatBrowserUse
from scrapfly import BrowserConfig

# Build the CDP WebSocket URL via the Scrapfly Python SDK.
# `BrowserConfig` handles encoding, defaults, and the Cloud Browser host.
cdp_url = BrowserConfig(
    proxy_pool="residential",
    country="us",
    os="windows",
).websocket_url(api_key="YOUR_API_KEY")

# connect browser-use to Scrapfly's stealth Cloud Browser
browser = Browser(cdp_url=cdp_url)

agent = Agent(
    task="scrape the product data from https://web-scraping.dev/products",
    llm=ChatBrowserUse(),
    browser=browser,
)


async def main():
    await agent.run()


asyncio.run(main())
```

 

 

 [ Python SDK docs → ](https://scrapfly.io/docs/sdk/python) 

 

Stagehand + cloud browser for stealth AI browsing in TypeScript.

     TypeScript  

  

 ```
import { Stagehand } from "@browserbasehq/stagehand";
import { BrowserConfig } from "scrapfly-sdk";

async function main() {
  // Build the CDP WebSocket URL via the Scrapfly TypeScript SDK.
  // `BrowserConfig` handles encoding, defaults, and the Cloud Browser host.
  const cdpUrl = new BrowserConfig({
    proxy_pool: "residential",
    country: "us",
    os: "windows",
  }).websocketUrl("YOUR_API_KEY");

  // initialize Stagehand against Scrapfly's stealth Cloud Browser over CDP
  const stagehand = new Stagehand({
    env: "LOCAL",  // LOCAL = bring-your-own-CDP (Scrapfly), not Browserbase
    localBrowserLaunchOptions: { cdpUrl },
  });

  await stagehand.init();

  // create an agent with specific model configuration
  const agent = stagehand.agent({
    model: {
      modelName: "openai/computer-use-preview",
      apiKey: "YOUR_OPENAI_API_KEY",
    },
    systemPrompt: "You are an AI browser automation agent. Follow instructions precisely.",
  });

  const task = `
    Go to https://web-scraping.dev/products
    Extract the product names and prices
    Return the data as JSON
  `;

  const result = await agent.execute(task);
  console.log("Agent workflow result:", result);

  await stagehand.close();
}

main();
```

 

 

 [ TypeScript SDK docs → ](https://scrapfly.io/docs/sdk/typescript) 

 

 

 

---

## Docs, Tools, and Guides

Everything you need to go from first connection to production automation.

 

 ### Getting Started

WebSocket endpoint, auth, first session, and billing explained in one place.

 [ Developer Docs → ](https://scrapfly.io/docs/cloud-browser-api/getting-started) 



 

 ### CDP Reference

Extended Chrome DevTools Protocol domains, custom commands, and Scrapium extensions.

 [ Browse reference → ](https://scrapfly.io/docs/cloud-browser-api/cdp-reference) 



 

 ### Academy

Hands-on courses on headless browsers, anti-bot bypass, and automation at scale.

 [ Start learning → ](https://scrapfly.io/academy/headless-browsers) 



 

 ### Fingerprint Tools

Test Canvas, WebGL, TLS, and browser signal consistency on your own machine.

 [ Open tools → ](https://scrapfly.io/web-scraping-tools/device-fingerprint) 



 

 

 

---

## Seamlessly integrate with frameworks & platforms

Plug Scrapfly into your favorite tools, or build custom workflows with our first-class SDKs.

 ### No-code automation

 [  Zapier ](https://scrapfly.io/integration/zapier) [  Make ](https://scrapfly.io/integration/make) [  n8n ](https://scrapfly.io/integration/n8n) 

 

### LLM & RAG frameworks

 [  LlamaIndex ](https://scrapfly.io/integration/llamaindex) [  LangChain ](https://scrapfly.io/integration/langchain) [  CrewAI ](https://scrapfly.io/integration/crewai) 

 

### First-class SDKs

 [  Python pip install scrapfly-sdk ](https://scrapfly.io/docs/sdk/python) [  TypeScript Node, Deno, Bun ](https://scrapfly.io/docs/sdk/typescript) [  Go go get scrapfly-sdk ](https://scrapfly.io/docs/sdk/golang) [  Rust cargo add scrapfly-sdk ](https://scrapfly.io/docs/sdk/rust) [  Scrapy Full-feature extension ](https://scrapfly.io/docs/sdk/scrapy) 

 

 

 [ See all integrations  ](https://scrapfly.io/integration) 

 

---

## Frequently Asked Questions

 

  ### What is Scrapfly Cloud Browser API?

 It's a managed headless Chrome service you connect to via CDP WebSocket. Instead of running `browser.launch()` locally, you build a `BrowserConfig` with the Scrapfly SDK, pass the resulting URL to `connect_over_cdp()`, and your existing Playwright, Puppeteer, Selenium, chromedp, or chromiumoxide code runs against our cloud-hosted Scrapium browser. No Docker, no Chrome fleet, no upgrades.

 

   ### How is it different from running Chrome locally?

Locally you run stock Chromium, which anti-bot systems fingerprint reliably. Scrapium has 500+ patched source files and loads 30,000+ signals from real device profiles at startup, covering every JS API surface. Residential proxies and human-like mouse/keyboard events are also built in, so every layer the target inspects tells the same consistent story instead of leaking a single mismatched signal.

 

   ### Do I need to rewrite my Playwright scripts?

 No. Build a `BrowserConfig` with the Scrapfly SDK (Python, TypeScript, Go, or Rust), get the WebSocket URL, and pass it to `p.chromium.connect_over_cdp(ws_url)`. All your existing selectors, waits, click chains, and assertions keep working. Puppeteer users replace `puppeteer.launch()` with `puppeteer.connect({browserWSEndpoint})` and pass the same URL.

 

   ### How do persistent sessions work?

 Pass a `session` parameter in the WebSocket URL. The service preserves cookies, local storage, and IndexedDB between connections on that session ID. You can log in once, close the connection, and reconnect later to continue where you left off. Session lifetime and proxy stickiness are both configurable.

 

   ### How does billing work?

You are charged for browser time (per minute the session is open) and bandwidth (data transferred through the browser). There are no per-request fees and no minimum commitment. Bandwidth costs drop significantly with built-in caching and resource stubbing. See [billing docs](https://scrapfly.io/docs/cloud-browser-api/billing) for per-plan rates and cost examples.

 

   ### Can I use it with AI agent frameworks?

Yes. browser-use, Stagehand, LangChain, LlamaIndex, CrewAI, and any framework that accepts a CDP URL work out of the box. The browser handles stealth, so the agent focuses on the task rather than fighting anti-bot systems. See the [AI agent docs](https://scrapfly.io/docs/cloud-browser-api/agent-browser) for setup guides.

 

   ### What anti-bot systems can Scrapium bypass?

 Cloudflare Turnstile and challenge pages, DataDome, PerimeterX/HUMAN, Akamai, Kasada, Shape Security, and 90+ shields total. The bypass strategies are continuously updated. You can also enable unblock mode on the connection for maximum bypass coverage on the toughest targets.

 

  

 

  ---

## Pay per minute. No minimums.

Browser time and bandwidth. Two clear dimensions, no hidden fees. Volume discounts kick in automatically as you scale.

 

  **Free tier**1,000 free credits on signup. No credit card required.

 

 

  **Pay on success**Crashed sessions and failed connects are free. You only pay for active browser time.

 

 

  **No lock-in**Upgrade, downgrade, or cancel anytime. No contract.

 

 

 

 [ See pricing  ](https://scrapfly.io/pricing) [ Start free ](https://scrapfly.io/register) 

 

 

### Need a simpler interface? We unbundle the stack.

The Cloud Browser API gives you direct CDP access to [Scrapium](https://scrapfly.io/scrapium), our stealth Chromium fork. For batteries-included scraping with [anti-bot bypass](https://scrapfly.io/bypass), proxies, and JS rendering, try [Web Scraping API](https://scrapfly.io/products/web-scraping-api); for AI-native browsing, see [AI Browser Agent](https://scrapfly.io/products/ai-browser-agent); for structured JSON without a browser loop, see [Extraction API](https://scrapfly.io/products/extraction-api); for multi-page traversal, see [Crawler API](https://scrapfly.io/products/crawler-api); for raw HTTP with perfect TLS fingerprints, try [Curlium](https://scrapfly.io/curlium).

 

 [Get Free API Key](https://scrapfly.io/register)Pay only for what you use. No card needed to start.