# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)

##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)

##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [Rust](https://scrapfly.io/docs/sdk/rust)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Selenium Integration

[Selenium](https://www.selenium.dev/) is the most widely used browser automation framework, supporting multiple browsers and languages. Connect it to Scrapfly Cloud Browser through the [Chrome DevTools Protocol (CDP)](https://chromedevtools.github.io/devtools-protocol/) for scalable automation with built-in proxies and fingerprinting.

  **Beta Feature:** Cloud Browser is currently in beta. 

## Installation & Quick Start

Install Selenium and connect to Cloud Browser:

  **Important:** Selenium does not natively support connecting to remote CDP WebSocket URLs. Cloud Browser uses the [Chrome DevTools Protocol (CDP)](https://chromedevtools.github.io/devtools-protocol/), which is natively supported by [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright) and [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer). The examples below use Playwright as the CDP transport — providing the same functionality with a similar API. 

### Python SDK

 ##### Installation

 ```
pip install scrapfly-sdk playwright && playwright install chromium
```
##### Quick Start Example

 ```
"""
Cloud Browser with Scrapfly SDK (recommended over Selenium).

The SDK generates the WebSocket URL and handles authentication.
Use Playwright for the browser automation.
"""
from scrapfly import ScrapflyClient, BrowserConfig
from playwright.sync_api import sync_playwright

client = ScrapflyClient(key='{{ YOUR_API_KEY }}')

config = BrowserConfig(
    proxy_pool='public_datacenter_pool',
    os='linux',
    country='us',
)

with sync_playwright() as p:
    browser = p.chromium.connect_over_cdp(client.cloud_browser(config))

    context = browser.contexts[0]
    page = context.pages[0] if context.pages else context.new_page()

    page.goto('https://web-scraping.dev/products')
    print('Page title:', page.title())

    # Selenium-style interactions using Playwright locators
    products = page.locator('.product-thumb').all()
    for product in products[:3]:
        title = product.locator('h3').inner_text()
        print(f'  Product: {title}')

    page.screenshot(path='screenshot.png')
    browser.close()

```

### Python (requests)

##### Installation

 ```
pip install requests playwright && playwright install chromium
```
##### Quick Start Example

 ```
"""
Selenium + Cloud Browser via Playwright CDP bridge.

Selenium does not natively support connecting to a remote CDP WebSocket URL.
This example uses Playwright as the CDP transport layer, providing a Selenium-like
experience with Cloud Browser's remote browsers.

For native CDP support, use Playwright directly:
  https://scrapfly.io/docs/cloud-browser-api/playwright
"""
import requests
from playwright.sync_api import sync_playwright

API_KEY = '{{ YOUR_API_KEY }}'

# Step 1: Discover the WebSocket URL via /json/version
version_info = requests.get(
    'https://browser.scrapfly.io/json/version',
    params={
        'key': API_KEY,
        'proxy_pool': 'datacenter',
        'os': 'linux',
    }
).json()

ws_url = version_info['webSocketDebuggerUrl']
print(f'Browser: {version_info["Browser"]}')

# Step 2: Connect via Playwright CDP
with sync_playwright() as p:
    browser = p.chromium.connect_over_cdp(ws_url)

    context = browser.contexts[0]
    page = context.pages[0] if context.pages else context.new_page()

    page.goto('https://web-scraping.dev/products')
    print('Page title:', page.title())

    # Take a screenshot
    page.screenshot(path='screenshot.png')

    browser.close()

```

  **How it works:** Cloud Browser exposes a standard `/json/version` endpoint that returns the WebSocket URL. Pass your browser configuration (country, proxy pool, etc.) as query parameters — they are automatically forwarded to the WebSocket connection. 

 

 

### JavaScript/Node.js

##### Installation

 ```
npm install scrapfly-sdk playwright
```

##### Quick Start Example

 ```
/**
 * Cloud Browser with Playwright for JavaScript.
 *
 * Selenium for JavaScript does not support remote CDP connections.
 * Use Playwright instead — it has native CDP support:
 *
 * npm install playwright scrapfly-sdk
 */
const { ScrapflyClient, BrowserConfig } = require('scrapfly-sdk');
const { chromium } = require('playwright');

const client = new ScrapflyClient({
    key: '{{ YOUR_API_KEY }}',
});

const config = new BrowserConfig({
    proxy_pool: 'public_datacenter_pool',
    os: 'linux',
});

async function run() {
    const wsUrl = client.cloudBrowser(config);
    const browser = await chromium.connectOverCDP(wsUrl);

    const context = browser.contexts()[0];
    const page = context.pages()[0] || await context.newPage();

    await page.goto('https://web-scraping.dev/products');
    console.log('Page title:', await page.title());

    await page.screenshot({ path: 'screenshot.png' });
    await browser.close();
}

run();

```

## WebSocket Connection Parameters

The Cloud Browser WebSocket URL accepts the following query parameters:

 | Parameter | Required | Default | Description |
|---|---|---|---|
| `api_key` | Yes | - | Your Scrapfly API key for authentication |
| `proxy_pool` | No | `datacenter` | Proxy network type: `datacenter` or `residential` |
| `os` | No | random | Operating system fingerprint: `linux`, `windows`, or `macos` |
| `browser_brand` | No | `chrome` | Chromium-based browser brand used for fingerprint generation. Valid values: `chrome`, `edge`, `brave`, `opera`. Invalid values are silently dropped and the default applies. |
| `session` | No | - | Optional session identifier for maintaining browser state across connections. See [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume). |
| `country` | No | - | Proxy country code (ISO 3166-1 alpha-2), e.g., `us`, `uk`, `de` |
| `auto_close` | No | `true` | Automatically stop the browser session when the CDP connection disconnects. Set to `false` to keep the browser alive for reconnection. |
| `timeout` | No | `900` | Maximum session duration in seconds (15 minutes default, 30 minutes max). |
| `debug` | No | `false` | Enable session recording for debugging. See [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode). |
| `block_images` | No | `false` | Stub image requests with a transparent 1x1 pixel. Reduces bandwidth while remaining invisible to anti-bot systems. |
| `block_styles` | No | `false` | Stub stylesheet requests with an empty CSS response. |
| `block_fonts` | No | `false` | Stub font requests with an empty response. |
| `block_media` | No | `false` | Stub video and audio media requests. |
| `blacklist` | No | `false` | Stub known analytics, tracking, and telemetry URLs. |
| `cache` | No | `false` | Enable HTTP cache for static resources. Cached bandwidth billed at 1 credit/MB. |

  **Stubbing vs Blocking:** Resources are **stubbed**, not blocked — the browser receives a valid but empty response (e.g. a transparent 1x1 pixel for images). This saves bandwidth while remaining invisible to anti-bot systems that detect blocked requests. 
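
For example, a connection URL that stubs images and fonts while enabling the HTTP cache can be assembled like this (a minimal sketch; the parameter names come from the table above, and the base URL matches the one used in the examples on this page):

```python
from urllib.parse import urlencode

API_KEY = 'YOUR_API_KEY'  # placeholder

params = {
    'api_key': API_KEY,
    'proxy_pool': 'datacenter',
    'country': 'us',
    'block_images': 'true',  # stubbed with a transparent 1x1 pixel
    'block_fonts': 'true',   # stubbed with an empty response
    'cache': 'true',         # cached bandwidth billed at 1 credit/MB
}
ws_url = f'wss://browser.scrapfly.io?{urlencode(params)}'
print(ws_url)
```

Any CDP client can then connect to `ws_url` directly, with no further configuration needed on the client side.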

## Data Extraction

Extract data from a dynamic page using Selenium's powerful element selection:

 ```
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

API_KEY = ''
BROWSER_WS = f'wss://browser.scrapfly.io?api_key={API_KEY}&proxy_pool=public_datacenter_pool'

chrome_options = Options()
chrome_options.add_experimental_option("debuggerAddress", BROWSER_WS)

driver = webdriver.Remote(
    command_executor='http://localhost:9515',
    options=chrome_options
)

try:
    # Navigate to the page
    driver.get('https://web-scraping.dev/products')

    # Wait for products to load
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CLASS_NAME, 'product'))
    )

    # Extract product data
    products = []
    product_elements = driver.find_elements(By.CLASS_NAME, 'product')

    for product in product_elements:
        title = product.find_element(By.CLASS_NAME, 'product-title').text
        price = product.find_element(By.CLASS_NAME, 'product-price').text
        url = product.find_element(By.TAG_NAME, 'a').get_attribute('href')

        products.append({
            'title': title,
            'price': price,
            'url': url
        })

    print('Products:', products)

finally:
    driver.quit()
```

## Form Interaction

Fill forms and handle login flows using Selenium's robust element interaction:

 ```
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

API_KEY = ''
BROWSER_WS = f'wss://browser.scrapfly.io?api_key={API_KEY}&proxy_pool=public_datacenter_pool'

chrome_options = Options()
chrome_options.add_experimental_option("debuggerAddress", BROWSER_WS)

driver = webdriver.Remote(
    command_executor='http://localhost:9515',
    options=chrome_options
)

try:
    driver.get('https://web-scraping.dev/login')

    # Wait for form to load
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, 'username'))
    )

    # Fill the login form
    username_field = driver.find_element(By.ID, 'username')
    password_field = driver.find_element(By.ID, 'password')
    submit_button = driver.find_element(By.ID, 'submit-button')

    username_field.send_keys('myuser')
    password_field.send_keys('mypassword')
    submit_button.click()

    # Wait for navigation and check if login was successful
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CLASS_NAME, 'user-profile'))
    )

    print('Login successful!')

finally:
    driver.quit()
```

## Session Persistence

Maintain browser state across connections using the `session` parameter:

 ```
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

API_KEY = ''
SESSION_ID = 'my-persistent-session'

# First connection: Login and set cookies
def first_connection():
    chrome_options = Options()
    chrome_options.add_experimental_option(
        "debuggerAddress",
        f'wss://browser.scrapfly.io?api_key={API_KEY}&session={SESSION_ID}'
    )

    driver = webdriver.Remote(
        command_executor='http://localhost:9515',
        options=chrome_options
    )

    try:
        driver.get('https://web-scraping.dev/login')
        # ... perform login ...
        print('Session created and saved')
    finally:
        driver.quit()  # Session is preserved on server


# Second connection: Reuse the logged-in session
def second_connection():
    chrome_options = Options()
    chrome_options.add_experimental_option(
        "debuggerAddress",
        f'wss://browser.scrapfly.io?api_key={API_KEY}&session={SESSION_ID}'
    )

    driver = webdriver.Remote(
        command_executor='http://localhost:9515',
        options=chrome_options
    )

    try:
        driver.get('https://web-scraping.dev/dashboard')
        print('Already logged in from previous session!')
        # Cookies and storage from first connection are preserved
    finally:
        driver.quit()


# Run both connections
first_connection()
second_connection()
```

## Proxy Options

 | Proxy Pool | Use Case | Cost |
|---|---|---|
| `datacenter` | General scraping, high speed, lower cost | 1 credit/30s + 7 credits/MB |
| `residential` | Protected sites, geo-targeting, anti-bot bypass | 1 credit/30s + 52 credits/MB |
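
As a rough worked example (assuming time is billed per started 30-second interval; check the [billing page](https://scrapfly.io/docs/cloud-browser-api/billing) for exact rounding rules), a 2-minute session transferring 10 MB would cost:

```python
import math

def session_cost(seconds, megabytes, per_30s, per_mb):
    """Estimate Cloud Browser credits: time slices plus bandwidth."""
    return math.ceil(seconds / 30) * per_30s + megabytes * per_mb

# 2-minute session, 10 MB transferred
print(session_cost(120, 10, per_30s=1, per_mb=7))   # datacenter: 4 + 70 = 74 credits
print(session_cost(120, 10, per_30s=1, per_mb=52))  # residential: 4 + 520 = 524 credits
```

Bandwidth dominates the bill on residential proxies, which is where the `block_*` stubbing parameters pay off most.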

## Best Practices

- **Use explicit waits** - Leverage `WebDriverWait` and `expected_conditions` for reliable element interactions
- **Handle disconnects** - Wrap connections in try/except (Python) or try/catch (JavaScript)
- **Close browsers** - Always call `driver.quit()` to stop billing and release resources
- **Use sessions wisely** - Reuse sessions for multi-step flows to maintain login state
- **Leverage Selenium features** - Use element locators (By.ID, By.CLASS_NAME, etc.) and built-in waits
- **Upgrade to Selenium 4.10+** - For better CDP and WebDriver BiDi support
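
The disconnect-handling advice above can be sketched as a generic retry helper (a hypothetical utility, not part of any SDK); pass it any zero-argument connect callable, e.g. `lambda: p.chromium.connect_over_cdp(ws_url)`:

```python
import time

def connect_with_retry(connect, attempts=3, backoff=1.0):
    """Call connect() until it succeeds, retrying with exponential backoff.

    `connect` is any zero-argument callable that raises on failure,
    e.g. a Playwright connect_over_cdp call wrapped in a lambda.
    """
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(backoff * 2 ** (attempt - 1))
```

Combined with `auto_close=false` and a `session` identifier, a dropped connection can be retried against the same live browser.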
 
## Troubleshooting

##### Error: "WebDriverException: invalid argument"

**Cause:** CDP connection via `debuggerAddress` may not be supported in older Selenium versions.

**Solution:** Upgrade to Selenium 4.10+ or use WebDriver BiDi connection method.

 ```
pip install --upgrade selenium
```

##### Connection Timeout

**Cause:** WebSocket connection to Cloud Browser failed or timed out.

**Solution:** Verify your API key and check network connectivity. Ensure your firewall allows WebSocket connections.

 ```
# Test WebSocket connectivity
curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: test" \
  "https://browser.scrapfly.io?api_key="
```

##### Session Not Persisting

**Cause:** Session ID not provided or session expired.

**Solution:** Always include the same `session` parameter in the WebSocket URL for persistent sessions.

 ```
SESSION_ID = "my-persistent-session"
BROWSER_WS = f"wss://browser.scrapfly.io?api_key={API_KEY}&session={SESSION_ID}"
```
**Note:** Sessions expire after 1 hour of inactivity by default.

## Related

- [Cloud Browser Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Billing & Pricing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Puppeteer Integration](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright Integration](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium Documentation](https://www.selenium.dev/documentation/)