# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)

##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification](https://scrapfly.io/docs/scrape-api/specification)
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification](https://scrapfly.io/docs/crawler-api/specification)
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification](https://scrapfly.io/docs/screenshot-api/specification)
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification](https://scrapfly.io/docs/extraction-api/specification)
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)

##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)
- [Vibium](https://scrapfly.io/docs/cloud-browser-api/vibium)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Session Resume

  **Session Resume** allows you to reconnect to an existing browser session by using the same session identifier. All browser state including cookies, localStorage, sessionStorage, and navigation history is preserved across connections.

  **Beta Feature:** Cloud Browser is currently in beta. 

## Requirements

 For session resume to work, your Cloud Browser connection **must** include two parameters:

**Required Parameters:**

- `session` — A stable session identifier. This is how Cloud Browser matches reconnections to existing browser instances.
- `auto_close=false` — Prevents the browser from being terminated when you disconnect. Without this, the browser shuts down immediately on disconnect and there is nothing to resume.
 
 

 ```
wss://browser.scrapfly.io?api_key=YOUR_API_KEY&session=my-session-id&auto_close=false
```


 Both parameters are required for session resume. If you omit `session`, the browser gets an anonymous run ID and cannot be reconnected. If you omit `auto_close=false` (defaults to `true`), the browser terminates the moment your script disconnects.
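
To avoid forgetting either parameter, the connection URL can be built programmatically. A minimal Python sketch (the parameter names are the ones documented above; the helper itself is illustrative):

```
from urllib.parse import urlencode

def cloud_browser_ws(api_key: str, session_id: str, auto_close: bool = False) -> str:
    """Build a Cloud Browser WebSocket URL with both session-resume parameters set."""
    params = urlencode({
        'api_key': api_key,
        'session': session_id,  # stable ID so reconnections find this browser
        'auto_close': str(auto_close).lower(),  # 'false' keeps the browser alive on disconnect
    })
    return f'wss://browser.scrapfly.io?{params}'

# cloud_browser_ws('YOUR_API_KEY', 'my-session-id')
# -> wss://browser.scrapfly.io?api_key=YOUR_API_KEY&session=my-session-id&auto_close=false
```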

## How It Works

 When you connect to Cloud Browser with a `session` parameter:

1. If the session ID is **new**, a fresh browser instance is created and associated with that session ID
2. If the session ID **already exists** and is still active, you reconnect to the existing browser instance
3. All browser state (cookies, localStorage, tabs, navigation history) is **preserved** between connections
 
  **Use Case:** Perfect for multi-step workflows, debugging scrapes, or long-running automations that need to pause and resume. 

## Basic Usage

 Connect, perform actions, disconnect with `browser.disconnect()` (keeps browser alive), then reconnect to the same session using the same `session` ID:

**Python**

  ```
from playwright.sync_api import sync_playwright
import time

API_KEY = ''
SESSION_ID = 'my-persistent-session'
BROWSER_WS = f'wss://browser.scrapfly.io?api_key={API_KEY}&session={SESSION_ID}&auto_close=false'

def first_connection():
    print('=== First Connection ===')
    with sync_playwright() as p:
        browser = p.chromium.connect_over_cdp(BROWSER_WS)
        context = browser.contexts[0]
        page = context.new_page()
        page.goto('https://web-scraping.dev')

        # Set a cookie
        context.add_cookies([{
            'name': 'session_token',
            'value': 'abc123',
            'domain': 'web-scraping.dev',
            'path': '/'
        }])

        print('Cookies set, disconnecting...')
        browser.close()  # Disconnects CDP — browser stays alive (auto_close=false)

def second_connection():
    print('=== Second Connection (Resume) ===')
    with sync_playwright() as p:
        browser = p.chromium.connect_over_cdp(BROWSER_WS)
        context = browser.contexts[0]
        page = context.pages[0] if context.pages else context.new_page()

        # Cookies are still there!
        cookies = context.cookies('https://web-scraping.dev')
        print('Cookies from previous session:', cookies)

        browser.close()  # Disconnects CDP

first_connection()
time.sleep(2)  # Wait a bit, then reconnect
second_connection()

# Terminate the session when fully done
import requests
requests.post(f'https://browser.scrapfly.io/session/{SESSION_ID}/stop?key={API_KEY}')
```

**JavaScript**

 ```
const puppeteer = require('puppeteer-core');

const API_KEY = '';
const SESSION_ID = 'my-persistent-session';
const BROWSER_WS = `wss://browser.scrapfly.io?api_key=${API_KEY}&session=${SESSION_ID}&auto_close=false`;

async function firstConnection() {
    console.log('=== First Connection ===');

    const browser = await puppeteer.connect({
        browserWSEndpoint: BROWSER_WS,
    });

    const page = await browser.newPage();
    await page.goto('https://web-scraping.dev');

    // Set a cookie
    await page.setCookie({
        name: 'session_token',
        value: 'abc123',
        domain: 'web-scraping.dev'
    });

    console.log('Cookies set, disconnecting...');
    await browser.disconnect(); // Disconnect but keep browser alive
}

async function secondConnection() {
    console.log('=== Second Connection (Resume) ===');

    const browser = await puppeteer.connect({
        browserWSEndpoint: BROWSER_WS,
    });

    const page = (await browser.pages())[0] || await browser.newPage();

    // Cookies are still there!
    const cookies = await page.cookies('https://web-scraping.dev');
    console.log('Cookies from previous session:', cookies);

    await browser.close(); // Final close — terminates the browser
}

async function run() {
    await firstConnection();
    await new Promise(resolve => setTimeout(resolve, 2000));
    await secondConnection();
}

run();
```


## Connect vs Disconnect vs Close

 With `auto_close=false`, the browser keeps running after you disconnect. Understanding the difference between **disconnect** and **close** is critical:

 | Action | Browser Stays Alive? | Can Reconnect? | Billing Continues? |
|---|---|---|---|
| `browser.disconnect()` (Puppeteer) | Yes | Yes | Yes |
| `browser.close()` (Playwright CDP close) | Yes | Yes | Yes |
| `browser.close()` (Puppeteer) | No | No | No |
| `POST /session/{session_id}/stop` | No | No | No |

  **Billing:** With `auto_close=false`, the browser continues running (and billing) after you disconnect. Always terminate sessions when finished to avoid unexpected charges. 

### Terminating Sessions

When you are completely finished with a session, terminate it using one of these methods:

**Python**

  ```
# To disconnect without closing (resume later):
browser.close()  # Playwright CDP close only disconnects — browser stays alive

# To terminate the session via REST API:
import requests

API_KEY = ''
SESSION_ID = 'my-session-id'

requests.post(
    f'https://browser.scrapfly.io/session/{SESSION_ID}/stop?key={API_KEY}'
)
```

**JavaScript**

 ```
// To disconnect without closing (resume later):
await browser.disconnect(); // Puppeteer only — browser stays alive

// To terminate the browser AND session:
await browser.close(); // Puppeteer — sends Browser.close CDP command

// Or terminate via REST API (works from any context):
const API_KEY = '';
const SESSION_ID = 'my-session-id';
await fetch(
    `https://browser.scrapfly.io/session/${SESSION_ID}/stop?key=${API_KEY}`,
    { method: 'POST' }
);
```

**REST API**

 ```
# Terminate a session by its session ID
curl -X POST 'https://browser.scrapfly.io/session/my-session-id/stop?key=YOUR_API_KEY'
```


## Session Timeout

 Sessions have a maximum duration controlled by the `timeout` parameter (in seconds):

 | Parameter | Default | Maximum | Description |
|---|---|---|---|
| `timeout` | 900s (15 minutes) | 1800s (30 minutes) | Maximum session duration before forced termination |

Example with custom timeout:

 ```
wss://browser.scrapfly.io?api_key=YOUR_API_KEY&session=my-session&auto_close=false&timeout=1800
```


  **Tip:** Sessions are automatically terminated when the timeout is reached, even if still connected. Plan your automation workflows accordingly. 
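
Because the server enforces the timeout even while you are connected, it can help to track the deadline client-side and stop cleanly before the forced cut-off. A minimal sketch, assuming the session was started with `timeout=1800` (the safety margin is an arbitrary choice):

```
import time

SESSION_TIMEOUT = 1800  # must match the timeout= parameter in the connection URL
SAFETY_MARGIN = 60      # stop working this many seconds before forced termination

deadline = time.monotonic() + SESSION_TIMEOUT - SAFETY_MARGIN

def time_left() -> float:
    """Seconds of safe working time remaining in this session."""
    return deadline - time.monotonic()

# Inside a scraping loop:
#   if time_left() <= 0:
#       ...persist progress, then disconnect or terminate the session...
```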

## Common Use Cases

### Multi-Step Workflows

Break complex automations into separate steps while maintaining state:

**Python**

  ```
from playwright.sync_api import sync_playwright
import time

API_KEY = ''
SESSION_ID = 'workflow-123'
BROWSER_WS = f'wss://browser.scrapfly.io?api_key={API_KEY}&session={SESSION_ID}&auto_close=false'

# Step 1: Login
def login():
    with sync_playwright() as p:
        browser = p.chromium.connect_over_cdp(BROWSER_WS)
        context = browser.contexts[0]
        page = context.new_page()
        page.goto('https://example.com/login')
        page.fill('#username', 'user@example.com')
        page.fill('#password', 'secret')
        page.click('#login-btn')
        page.wait_for_load_state('networkidle')
        browser.close()  # Disconnects CDP — browser stays alive

# Step 2: Scrape data (session still has login cookies)
def scrape_data():
    with sync_playwright() as p:
        browser = p.chromium.connect_over_cdp(BROWSER_WS)
        context = browser.contexts[0]
        page = context.pages[0] if context.pages else context.new_page()
        page.goto('https://example.com/dashboard')
        data = page.eval_on_selector_all('.data-item', 'items => items.map(i => i.textContent)')

        # Terminate session when done
        import requests
        requests.post(f'https://browser.scrapfly.io/session/{SESSION_ID}/stop?key={API_KEY}')

        return data

login()
time.sleep(1)
data = scrape_data()
print('Scraped data:', data)
```

**JavaScript**

 ```
const puppeteer = require('puppeteer-core');

const API_KEY = '';
const SESSION_ID = 'workflow-123';
const BROWSER_WS = `wss://browser.scrapfly.io?api_key=${API_KEY}&session=${SESSION_ID}&auto_close=false`;

// Step 1: Login
async function login() {
    const browser = await puppeteer.connect({ browserWSEndpoint: BROWSER_WS });
    const page = await browser.newPage();
    await page.goto('https://example.com/login');
    await page.type('#username', 'user@example.com');
    await page.type('#password', 'secret');
    await page.click('#login-btn');
    await page.waitForNavigation();
    await browser.disconnect(); // Keep session alive
}

// Step 2: Scrape data (session still has login cookies)
async function scrapeData() {
    const browser = await puppeteer.connect({ browserWSEndpoint: BROWSER_WS });
    const page = (await browser.pages())[0];
    await page.goto('https://example.com/dashboard');
    const data = await page.$$eval('.data-item', items => items.map(i => i.textContent));
    await browser.close(); // Terminate session
    return data;
}

async function run() {
    await login();
    await new Promise(resolve => setTimeout(resolve, 1000));
    const data = await scrapeData();
    console.log('Scraped data:', data);
}

run();
```


### Debugging Scrapes

Reconnect to a failed session to investigate issues:

**Python**

  ```
from playwright.sync_api import sync_playwright
import time

API_KEY = ''
SESSION_ID = f'debug-session-{int(time.time())}'
BROWSER_WS = f'wss://browser.scrapfly.io?api_key={API_KEY}&session={SESSION_ID}&auto_close=false'

try:
    with sync_playwright() as p:
        browser = p.chromium.connect_over_cdp(BROWSER_WS)
        context = browser.contexts[0]
        page = context.new_page()
        page.goto('https://web-scraping.dev')

        # Some operation that might fail
        page.click('.non-existent-selector')  # This will throw

        browser.close()
except Exception as error:
    print(f'Error occurred: {error}')
    print(f'Session ID: {SESSION_ID}')
    print('The browser is still running — reconnect via the dashboard to debug')
    # Don't close browser — leave it for manual inspection
    # Use Human-in-the-Loop to connect and investigate
```

**JavaScript**

 ```
const puppeteer = require('puppeteer-core');

const API_KEY = '';
const SESSION_ID = 'debug-session-' + Date.now();
const BROWSER_WS = `wss://browser.scrapfly.io?api_key=${API_KEY}&session=${SESSION_ID}&auto_close=false`;

async function scrapeWithDebug() {
    try {
        const browser = await puppeteer.connect({ browserWSEndpoint: BROWSER_WS });
        const page = await browser.newPage();
        await page.goto('https://web-scraping.dev');

        // Some operation that might fail
        await page.click('.non-existent-selector'); // This will throw

        await browser.close();
    } catch (error) {
        console.error('Error occurred:', error.message);
        console.log('Session ID:', SESSION_ID);
        console.log('The browser is still running — reconnect via the dashboard to debug');
        // Don't close browser — leave it for manual inspection
        // Use Human-in-the-Loop to connect and investigate
    }
}

scrapeWithDebug();
```


  **Pro Tip:** Combine Session Resume with [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop) to manually debug failed sessions in your browser dashboard. 

### Long-Running Tasks

 For tasks that exceed a single session timeout, split your work into batches and reconnect between each batch:

**Python**

  ```
from playwright.sync_api import sync_playwright
import time

API_KEY = ''
SESSION_ID = f'long-task-{int(time.time())}'
BROWSER_WS = f'wss://browser.scrapfly.io?api_key={API_KEY}&session={SESSION_ID}&auto_close=false&timeout=1800'

page_number = 1
max_pages = 100

while page_number <= max_pages:
    with sync_playwright() as p:
        browser = p.chromium.connect_over_cdp(BROWSER_WS)
        context = browser.contexts[0]
        page = context.pages[0] if context.pages else context.new_page()

        # Process 10 pages per batch (well under 30-minute limit)
        for _ in range(10):
            if page_number > max_pages:
                break
            page.goto(f'https://web-scraping.dev/page/{page_number}')
            print(f'Processing page {page_number}')
            # ... scrape data ...
            page_number += 1

        browser.close()  # Disconnect — browser stays alive

    if page_number <= max_pages:
        time.sleep(1)  # Brief pause between batches

# Terminate session when completely done
import requests
requests.post(f'https://browser.scrapfly.io/session/{SESSION_ID}/stop?key={API_KEY}')
```

**JavaScript**

 ```
const puppeteer = require('puppeteer-core');

const API_KEY = '';
const SESSION_ID = 'long-task-' + Date.now();
const BROWSER_WS = `wss://browser.scrapfly.io?api_key=${API_KEY}&session=${SESSION_ID}&auto_close=false&timeout=1800`;

async function longRunningTask() {
    let pageNumber = 1;
    const maxPages = 100;

    while (pageNumber <= maxPages) {
        const browser = await puppeteer.connect({ browserWSEndpoint: BROWSER_WS });
        const page = (await browser.pages())[0] || await browser.newPage();

        // Process 10 pages per batch (well under 30-minute limit)
        for (let i = 0; i < 10 && pageNumber <= maxPages; i++) {
            await page.goto(`https://web-scraping.dev/page/${pageNumber}`);
            console.log('Processing page', pageNumber);
            // ... scrape data ...
            pageNumber++;
        }

        if (pageNumber > maxPages) {
            await browser.close(); // Final close — terminates session
        } else {
            await browser.disconnect(); // Keep session alive for next batch
            await new Promise(resolve => setTimeout(resolve, 1000));
        }
    }
}

longRunningTask();
```


## Monitoring Sessions

 View all active and recent sessions in the Cloud Browser dashboard:

[View Active Sessions](https://scrapfly.io/dashboard/cloud-browser/sessions)

The dashboard shows:

- Session ID and status (active, idle, terminated)
- Session duration and remaining time
- Connection state (connected, disconnected)
- Attachment type (automated agent, human operator, none)
- Bandwidth usage and cost
 
## Troubleshooting

##### Session Not Found on Reconnect

 

**Cause:** Session expired or was terminated.

**Solution** (a defensive reconnect pattern is sketched after this list):

- Ensure `auto_close=false` when disconnecting
- Check session timeout hasn't been reached (default 15 minutes, max 30 minutes)
- Verify session ID is exactly the same (case-sensitive)
- Check dashboard for session status
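
A minimal defensive sketch, assuming Playwright and the parameters from this page (the cookie probe is just one way to detect that the previous state is gone):

```
from playwright.sync_api import sync_playwright

BROWSER_WS = 'wss://browser.scrapfly.io?api_key=YOUR_API_KEY&session=my-session-id&auto_close=false'

with sync_playwright() as p:
    browser = p.chromium.connect_over_cdp(BROWSER_WS)
    context = browser.contexts[0]
    # An expired session comes back as a brand-new browser with empty state,
    # so probe for something you expect from the previous connection.
    if not context.cookies():
        print('No previous state found: session expired, redo setup steps')
    browser.close()  # disconnect only; browser stays alive (auto_close=false)
```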
##### Unexpected Billing Charges

 

**Cause:** Sessions with `auto_close=false` continue running (and billing) after disconnect.

**Solution:**

- Always terminate sessions when finished — use `browser.close()` (Puppeteer) or call `POST /session/{session_id}/stop`
- Use reasonable `timeout` values to prevent runaway sessions
- Monitor active sessions in the [Sessions dashboard](https://scrapfly.io/dashboard/cloud-browser/sessions)
- Terminate abandoned sessions via the dashboard or the [REST API](#terminating-sessions)
 
 **Billing reminder:** Sessions are billed per 30 seconds (rounded up) plus bandwidth. See [Cloud Browser Billing](https://scrapfly.io/docs/cloud-browser-api/billing) for details.
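
When you do not need the browser for post-mortem debugging, a `try`/`finally` makes termination hard to forget: the stop call runs even if the scrape raises. A minimal sketch using the same REST endpoint shown above:

```
import requests
from playwright.sync_api import sync_playwright

API_KEY = 'YOUR_API_KEY'
SESSION_ID = 'my-session-id'
BROWSER_WS = f'wss://browser.scrapfly.io?api_key={API_KEY}&session={SESSION_ID}&auto_close=false'

try:
    with sync_playwright() as p:
        browser = p.chromium.connect_over_cdp(BROWSER_WS)
        page = browser.contexts[0].new_page()
        page.goto('https://web-scraping.dev')
        # ... scraping work ...
        browser.close()
finally:
    # Runs on success and on failure, so the session never bills in the background
    requests.post(f'https://browser.scrapfly.io/session/{SESSION_ID}/stop?key={API_KEY}')
```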


##### Session State Not Preserved

 

**Cause:** Session ID changed, `auto_close=false` was not set, or browser was terminated instead of disconnected.

**Solution:**

- Use the exact same `session` parameter value
- Ensure `auto_close=false` is set on every connection
- In Puppeteer: use `browser.disconnect()` to preserve state (not `browser.close()` which terminates)
- In Playwright: `browser.close()` only disconnects CDP — the browser stays alive
 
```
// Puppeteer (JavaScript):
await browser.disconnect(); // CORRECT — keeps browser alive
await browser.close();      // WRONG — terminates the browser
```

```
# Playwright (Python):
browser.close()  # OK — only disconnects CDP, browser stays alive
```


## Related Documentation

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started) - Introduction to Cloud Browser API
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop) - Manually control browser sessions for debugging
- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing) - Understand session costs and billing
- [Session Dashboard](https://scrapfly.io/dashboard/cloud-browser/sessions) - Monitor and manage active sessions
- [Error Reference](https://scrapfly.io/docs/cloud-browser-api/errors) - Troubleshoot common errors