# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)
##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)
##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [Rust](https://scrapfly.io/docs/sdk/rust)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Unblock API

  The **Unblock API** automatically bypasses anti-bot protection for a target URL using residential proxies, then returns a WebSocket URL so you can connect with Playwright or Puppeteer to a browser session that's already past the protection.

  **Beta Feature:** Cloud Browser is currently in beta. 

## How It Works

 The Unblock API combines Scrapfly's advanced anti-bot bypass technology with Cloud Browser's remote browser access:

1. **Request:** Send a POST request with your target URL and optional configuration
2. **Bypass:** Scrapfly allocates a browser and navigates to the URL through a residential proxy with Anti Scraping Protection (ASP) enabled
3. **Handoff:** Once the page loads successfully, you receive a WebSocket URL
4. **Connect:** Use Playwright or Puppeteer to connect and interact with the unblocked page
 


  **Use Case:** Perfect for scraping sites with strong anti-bot protection where you need to interact with the page after bypassing the protection (clicking buttons, filling forms, scrolling, etc.). 
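
The four steps above boil down to one POST plus a CDP connect. Here is a minimal sketch using only the Python standard library (illustrative, not the official SDK; the endpoint and fields are the ones documented under API Endpoint below):

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"

def build_unblock_body(url, country=None, session=None):
    """Build the POST body for /unblock; optional fields are omitted entirely."""
    body = {"url": url}
    if country:
        body["country"] = country
    if session:
        body["session"] = session
    return body

def unblock(url, **options):
    # One round trip: POST the target URL, read back ws_url/session_id/run_id.
    req = urllib.request.Request(
        f"https://browser.scrapfly.io/unblock?api_key={API_KEY}",
        data=json.dumps(build_unblock_body(url, **options)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # A cold bypass can take 30-40 seconds, so leave generous headroom.
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)
```

The returned `ws_url` is then passed to Playwright's `connect_over_cdp()` or `puppeteer.connect()`, as shown in the Basic Usage examples.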

## Session Management

 The Unblock API uses a two-phase model. The **first call** performs a full anti-bot bypass through a residential proxy — this takes around **30-40 seconds**. The response includes a `session_id` that you should store and pass back in subsequent calls to the same domain.

 When you pass the `session` parameter in a follow-up call, Scrapfly reuses the existing browser profile, proxy, cookies (including challenge tokens like `cf_clearance`), and fingerprint. This **fast path** skips the bypass entirely and responds in approximately **3 seconds**.

  **Browser continuity:** The browser that connects to the `ws_url` runs behind the **same residential IP and fingerprint** as the ASP scrape that solved the challenge. Cookies set during the bypass (such as `cf_clearance`) are already present in the browser profile when you connect — no manual cookie injection is needed. 

 | Call | `session` param | What happens | Typical latency |
|---|---|---|---|
| First (cold) | *omitted* | Full ASP bypass — new browser, new residential proxy, anti-bot challenge solved | ~30-40 seconds |
| Subsequent (warm) | `session_id` from previous response | Browser profile, cookies, IP, and fingerprint reused — bypass skipped | ~3 seconds |

> **One connection per session:** Each `ws_url` supports a single WebSocket connection at a time. To run multiple browsers in parallel, make separate `/unblock` calls without passing `session` — each call creates an independent browser with its own bypass session.
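
The cold/warm pattern in the table above amounts to caching `session_id` per domain. A sketch (the `call_unblock` argument is a stand-in for whatever function performs the `/unblock` POST):

```python
from urllib.parse import urlsplit

class SessionCache:
    """Remember the session_id per domain so follow-up calls take the fast path."""

    def __init__(self, call_unblock):
        self._call = call_unblock   # function(url, session=None) -> response dict
        self._by_domain = {}        # domain -> session_id

    def unblock(self, url):
        domain = urlsplit(url).netloc
        session = self._by_domain.get(domain)  # None on the first (cold) call
        result = self._call(url, session=session)
        self._by_domain[domain] = result["session_id"]  # store for the warm path
        return result
```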

### Session Flow Example

 ```
from scrapfly import ScrapflyClient

client = ScrapflyClient(key="")

# First call — full bypass (~30-40s), no session param
result = client.cloud_browser_unblock(url="https://protected-site.com/page")
session_id = result["session_id"]   # store this
ws_url = result["ws_url"]           # already contains the session param

# Connect to the already-unblocked browser
# browser = await playwright.chromium.connect_over_cdp(ws_url)

# The Python SDK does not yet expose a `session` parameter on
# cloud_browser_unblock — to reuse a session on a subsequent call, POST to
# `/unblock` directly with the `session` field. See the raw HTTP examples
# below for the full request shape.

# If the session is blocked or expired — force a fresh bypass by omitting session
result = client.cloud_browser_unblock(url="https://protected-site.com/page")
session_id = result["session_id"]   # store the new session_id
```

 

   

 

  **Tip:** The `ws_url` returned in every response already contains the `session` query parameter — you can pass it directly to `connectOverCDP()` without modification. Store `session_id` separately only when you need to construct follow-up `/unblock` calls. 
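
If you only stored `ws_url`, the session id can be recovered from its query string rather than kept separately. A convenience sketch based on the documented `ws_url` shape:

```python
from urllib.parse import parse_qs, urlsplit

def session_from_ws_url(ws_url):
    """Recover the session id embedded in a ws_url's query string."""
    return parse_qs(urlsplit(ws_url).query)["session"][0]
```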

## Session Lifecycle

 When the Unblock API responds, an ASP session is **leased** to you for the duration of your browser connection. The session holds the bypass state — cookies, fingerprint, residential IP, and Chrome profile — that allows you to interact with the protected page.

- **Leased on connect:** The session is exclusively yours while the browser is connected via WebSocket
- **Released on disconnect:** When you call `browser.close()` or drop the WebSocket connection, the session is released back to the pool
- **Expires after inactivity:** Sessions expire after **15 minutes** of inactivity with no connected browser
 
  **Important:** Always call `browser.close()` when finished to release the session and stop billing. Sessions are billed until explicitly closed or they reach the `browser_timeout` limit. 
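
One way to guarantee the close happens is a small context manager (a sketch; `connect` stands in for e.g. `playwright.chromium.connect_over_cdp`):

```python
from contextlib import contextmanager

@contextmanager
def leased_browser(connect, ws_url):
    """Run a scraping block against a leased session and always release it."""
    browser = connect(ws_url)
    try:
        yield browser
    finally:
        # Releases the lease and stops session billing, even if scraping raised.
        browser.close()
```

Usage: `with leased_browser(p.chromium.connect_over_cdp, ws_url) as browser: ...` — the session is released on success and on error alike.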

## API Endpoint

 | **Method** | `POST` |
|---|---|
| **URL** | `https://browser.scrapfly.io/unblock` |
| **Authentication** | Query parameter: `?api_key=YOUR_API_KEY` |
| **Content-Type** | `application/json` |

### Request Parameters

 Unblock always uses a residential proxy — there is no proxy pool selection. The available parameters are:

 | Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `url` | string | Yes | - | Target URL to navigate to and bypass protection |
| `session` | string | No | - | Session ID from a previous `/unblock` response (the `session_id` field). When provided, the existing browser profile, cookies, residential IP, and fingerprint are reused — bypassing the anti-bot challenge entirely (~3 seconds instead of ~30-40 seconds). Omit to force a fresh bypass with a new session. |
| `country` | string | No | `""` (auto) | ISO country code for residential proxy geolocation (e.g., `us`, `gb`, `de`). Ignored when `session` is provided (the original session's country is reused). |
| `timeout` | integer | No | `60` | Navigation timeout in seconds (max 300) |
| `browser_timeout` | integer | No | `900` | Browser session timeout in seconds (max 1800) |
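
The limits above can be checked client-side before spending a 30-40 second cold call. A sketch mirroring the documented bounds (the lower bounds are our assumption; the server remains the source of truth):

```python
def validate_unblock_params(params):
    """Raise early on requests the API would reject anyway."""
    if not params.get("url"):
        raise ValueError("url is required")
    if not 0 < params.get("timeout", 60) <= 300:
        raise ValueError("timeout must be between 1 and 300 seconds")
    if not 0 < params.get("browser_timeout", 900) <= 1800:
        raise ValueError("browser_timeout must be between 1 and 1800 seconds")
    return params
```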

### Response Format

On success, the API returns a JSON response with:

 ```
{
    "ws_url": "wss://browser.scrapfly.io/?api_key=YOUR_API_KEY&session=unblock-1234567890",
    "session_id": "unblock-1234567890",
    "run_id": "01HWXYZ..."
}
```

 

   

 

 | Field | Description |
|---|---|
| `ws_url` | WebSocket URL to connect with Playwright/Puppeteer. Already contains the `session` query parameter — pass it directly to `connectOverCDP()` or `puppeteer.connect()` without modification. |
| `session_id` | Session identifier. Store this value and pass it as the `session` parameter in subsequent `/unblock` calls to the same domain to activate the fast path (~3 seconds). |
| `run_id` | Unique run identifier for debugging and support |

## Basic Usage

Complete examples for each SDK and tool:

### Python SDK

 Install the SDK: `pip install scrapfly-sdk playwright && playwright install chromium`

 ```
from scrapfly import ScrapflyClient
from playwright.sync_api import sync_playwright

client = ScrapflyClient(key="")

# Step 1: Call the Unblock API — residential proxy is always used
result = client.cloud_browser_unblock(url="https://web-scraping.dev/products", country="us")

# Step 2: Connect with Playwright to the already-unblocked session
with sync_playwright() as p:
    browser = p.chromium.connect_over_cdp(result["ws_url"])
    page = browser.contexts[0].pages[0]

    # Page is already loaded and past anti-bot protection!
    print(page.title())
    print(page.url)

    # Scrape data, click buttons, fill forms...
    content = page.content()
    print(f"HTML length: {len(content)}")

    browser.close()
```

 

   

 

 

### TypeScript SDK

Install the SDK: `npm install scrapfly-sdk playwright`

 ```
import { ScrapflyClient } from "scrapfly-sdk";
import { chromium } from "playwright";

const client = new ScrapflyClient({ key: "" });

// Step 1: Call the Unblock API — residential proxy is always used
const result = await client.cloudBrowserUnblock({ url: "https://web-scraping.dev/products", country: "us" });

// Step 2: Connect with Playwright to the already-unblocked session
const browser = await chromium.connectOverCDP(result.ws_url);
const page = browser.contexts()[0].pages()[0];

// Page is already loaded and past anti-bot protection!
console.log(await page.title());
console.log(page.url());

// Scrape data, click buttons, fill forms...
const content = await page.content();
console.log(`HTML length: ${content.length}`);

await browser.close();
```

 

   

 

 

### Go SDK

Install: `go get github.com/scrapfly/go-scrapfly` and `go get github.com/chromedp/chromedp`

 ```
package main

import (
    "context"
    "fmt"
    "log"

    scrapfly "github.com/scrapfly/go-scrapfly"
    "github.com/chromedp/chromedp"
)

func main() {
    client, err := scrapfly.New("")
    if err != nil {
        log.Fatal(err)
    }

    // Step 1: Call the Unblock API — residential proxy is always used
    result, err := client.CloudBrowserUnblock(scrapfly.UnblockConfig{
        URL:     "https://web-scraping.dev/products",
        Country: "us",
    })
    if err != nil {
        log.Fatal(err)
    }

    fmt.Printf("Session: %s\n", result.SessionID)

    // Step 2: Connect to the already-unblocked browser via chromedp
    allocCtx, cancel := chromedp.NewRemoteAllocator(context.Background(), result.WSURL)
    defer cancel()

    ctx, ctxCancel := chromedp.NewContext(allocCtx)
    defer ctxCancel()

    var title string
    if err := chromedp.Run(ctx, chromedp.Title(&title)); err != nil {
        log.Fatal(err)
    }
    fmt.Println("Page title:", title)
}
```

 

   

 

 

### Puppeteer

 ```
const puppeteer = require('puppeteer-core');

const API_KEY = '';

async function unblockAndScrape() {
    // Step 1: Call the Unblock API — residential proxy is always used
    const response = await fetch(`https://browser.scrapfly.io/unblock?api_key=${API_KEY}`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
            url: 'https://web-scraping.dev/products',
            country: 'us'
        })
    });

    const { ws_url, session_id } = await response.json();
    console.log('Unblock successful! Session:', session_id);

    // Step 2: Connect to the unblocked browser
    const browser = await puppeteer.connect({
        browserWSEndpoint: ws_url
    });

    // Page is already loaded and past anti-bot protection!
    const pages = await browser.pages();
    const page = pages[0] || await browser.newPage();
    console.log('Page title:', await page.title());

    // Scrape data, click buttons, fill forms...
    const content = await page.content();
    console.log('Page HTML length:', content.length);

    await browser.close();
}

unblockAndScrape();
```

 

   

 

 

### Playwright JS

 ```
const { chromium } = require('playwright');

const API_KEY = '';

async function unblockAndScrape() {
    // Step 1: Call the Unblock API — residential proxy is always used
    const response = await fetch(`https://browser.scrapfly.io/unblock?api_key=${API_KEY}`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
            url: 'https://web-scraping.dev/products',
            country: 'us'
        })
    });

    const { ws_url, session_id } = await response.json();
    console.log('Unblock successful! Session:', session_id);

    // Step 2: Connect via CDP
    const browser = await chromium.connectOverCDP(ws_url);
    const context = browser.contexts()[0];
    const page = context.pages()[0] || await context.newPage();

    // Page is already loaded — interact with it!
    console.log('Page title:', await page.title());

    const data = await page.evaluate(() => document.body.innerText);
    console.log('Page text:', data.substring(0, 200));

    await browser.close();
}

unblockAndScrape();
```

 

   

 

 

### cURL

Call the Unblock API to get a WebSocket URL, then connect with Playwright or Puppeteer:

 ```
curl -X POST "https://browser.scrapfly.io/unblock?api_key=" \
    -H "Content-Type: application/json" \
    -d '{
        "url": "https://web-scraping.dev/products",
        "country": "us"
    }'
```

 

   

 

The response contains `ws_url` — pass it to `chromium.connectOverCDP()` or `puppeteer.connect()`.

 

 

## Geo-Targeting

 Use the `country` parameter to route traffic through a residential proxy in a specific country. This is useful for geo-restricted content or when the target site serves different content per region.

 ```
{
    "url": "https://web-scraping.dev/products",
    "country": "gb"
}
```

 

   

 

Pass any ISO 3166-1 alpha-2 country code (e.g., `us`, `gb`, `de`, `fr`, `jp`). Omit the parameter or leave it empty for automatic proxy selection.
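
A small client-side guard for the country value can catch typos before a request is spent (a sketch; the two-letter check covers the ISO 3166-1 alpha-2 format the API expects, without validating that the code is an assigned country):

```python
def normalize_country(code):
    """Return a lowercase two-letter code, or "" for automatic selection."""
    code = (code or "").strip().lower()
    if code and (len(code) != 2 or not code.isalpha()):
        raise ValueError(f"expected an ISO 3166-1 alpha-2 code, got {code!r}")
    return code
```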

## Error Handling

The Unblock API may return these error codes:

 | Error Code | HTTP Status | Description |
|---|---|---|
| `ERR::BROWSER::CONFIG_ERROR` | 400 | Invalid configuration (missing URL, invalid parameters) |
| `ERR::BROWSER::ALLOCATION_FAILED` | 503 | Failed to allocate browser (capacity issue) |
| `ERR::BROWSER::NAVIGATION_TIMEOUT` | 504 | Page navigation timed out |
| `ERR::BROWSER::UNBLOCK_FAILED` | 503 | Failed to bypass anti-bot protection |
| `ERR::BROWSER::TOO_MANY_CONCURRENT_REQUEST` | 429 | Concurrency limit exceeded |
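
When deciding whether to retry, prefer the explicit `is_retryable` flag from the error body; when only an `error_id` is available, the table suggests a split between transient failures and configuration mistakes. A sketch (the grouping beyond the `is_retryable` flag is our reading of the table, not an official contract):

```python
# Transient per the table above: capacity, navigation, bypass, concurrency.
RETRYABLE_ERRORS = {
    "ERR::BROWSER::ALLOCATION_FAILED",
    "ERR::BROWSER::NAVIGATION_TIMEOUT",
    "ERR::BROWSER::UNBLOCK_FAILED",
    "ERR::BROWSER::TOO_MANY_CONCURRENT_REQUEST",
}

def should_retry(error):
    """Prefer the explicit is_retryable flag; fall back to the error_id split."""
    if "is_retryable" in error:
        return bool(error["is_retryable"])
    return error.get("error_id") in RETRYABLE_ERRORS
```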

### Error Response Format

 ```
{
    "error_id": "ERR::BROWSER::NAVIGATION_TIMEOUT",
    "message": "Page load timed out after 60 seconds",
    "http_code": 504,
    "is_retryable": true
}
```

 

   

 

### Retry Strategy

 ```
async function unblockWithRetry(url, maxRetries = 3) {
    for (let attempt = 1; attempt <= maxRetries; attempt++) {
        try {
            const response = await fetch(`https://browser.scrapfly.io/unblock?api_key=${API_KEY}`, {
                method: 'POST',
                headers: { 'Content-Type': 'application/json' },
                body: JSON.stringify({ url })
            });

            if (!response.ok) {
                const error = await response.json();
                if (error.is_retryable && attempt < maxRetries) {
                    console.log(`Attempt ${attempt} failed, retrying...`);
                    await new Promise(r => setTimeout(r, 2000 * attempt)); // Linear backoff: 2s, 4s, 6s
                    continue;
                }
                throw new Error(error.message);
            }

            return await response.json();
        } catch (error) {
            if (attempt === maxRetries) throw error;
        }
    }
}

// Usage
const result = await unblockWithRetry('https://web-scraping.dev/products');
console.log('Success:', result.ws_url);
```

 

   

 

## Billing

 The Unblock API is billed the same as regular Cloud Browser sessions:

- Session time is billed per 30 seconds (rounded up)
- Bandwidth is billed separately
- Residential proxies are always used for unblock — see [Cloud Browser Billing](https://scrapfly.io/docs/cloud-browser-api/billing) for detailed pricing
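
The 30-second rounding means a 31-second session bills as two increments. A sketch of the time component only (bandwidth is billed separately, and the per-increment rate lives on the billing page):

```python
import math

def billed_increments(connected_seconds):
    """Number of 30-second billing increments, rounded up."""
    return math.ceil(connected_seconds / 30)

def billed_seconds(connected_seconds):
    """Connected time as it appears on the bill."""
    return billed_increments(connected_seconds) * 30
```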
 
## Unblock API vs Regular Connection

 | Feature | Unblock API | Regular WebSocket Connection |
|---|---|---|
| Anti-bot bypass | Automatic | Manual (via ASP parameter) |
| Proxy type | Residential (always) | Datacenter or residential (configurable) |
| Initial navigation | Handled by API | You handle it |
| Page state on connect | Page already loaded | Empty browser |
| Session reuse (fast path) | ~3s on warm session | N/A |
| Use case | Post-bypass interaction | Full browser control |

## Related Documentation

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started) - Introduction to Cloud Browser API
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume) - Reconnect to browser sessions
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop) - Visual debugging for browser sessions
- [Puppeteer Integration](https://scrapfly.io/docs/cloud-browser-api/puppeteer) - Puppeteer-specific documentation
- [Playwright Integration](https://scrapfly.io/docs/cloud-browser-api/playwright) - Playwright-specific documentation
- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing) - Understand session costs
- [Error Reference](https://scrapfly.io/docs/cloud-browser-api/errors) - Full error code reference