# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)

##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification](https://scrapfly.io/docs/scrape-api/specification)
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification](https://scrapfly.io/docs/crawler-api/specification)
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification](https://scrapfly.io/docs/screenshot-api/specification)
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification](https://scrapfly.io/docs/extraction-api/specification)
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)

##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)
- [Vibium](https://scrapfly.io/docs/cloud-browser-api/vibium)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Human-in-the-Loop

**Human-in-the-Loop** lets you take manual control of a running Cloud Browser session directly from your dashboard. It is ideal for debugging failed scrapes, solving CAPTCHAs, or performing manual verification steps.

  **Beta Feature:** Cloud Browser is currently in beta. 

## Requirements

 For Human-in-the-Loop to work, your Cloud Browser session **must** be started with two parameters:

**Required Parameters:**

- `session` — A session ID is **mandatory**. Without it, the browser cannot be found or reconnected to from the dashboard.
- `auto_close=false` — Prevents the browser from being terminated when your script disconnects. Without this, the session is destroyed immediately on disconnect.
 
 

 ```
wss://browser.scrapfly.io?api_key=YOUR_API_KEY&session=my-session-id&auto_close=false
```

 

   

 

 Both parameters are required. If you omit `session`, the browser gets an anonymous run ID and won't appear in the sessions dashboard. If you omit `auto_close=false`, the browser shuts down the moment your script disconnects.
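As a sketch, the connection URL with both required parameters can be assembled like this (the `build_ws_endpoint` helper is illustrative, not part of the Scrapfly API):

```python
from urllib.parse import urlencode

def build_ws_endpoint(api_key: str, session_id: str) -> str:
    """Build a Cloud Browser WebSocket URL with both HITL-required parameters."""
    params = {
        'api_key': api_key,
        'session': session_id,   # mandatory: makes the session visible in the dashboard
        'auto_close': 'false',   # mandatory: keeps the browser alive after disconnect
    }
    return 'wss://browser.scrapfly.io?' + urlencode(params)

# e.g. build_ws_endpoint('YOUR_API_KEY', 'my-session-id')
```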

## How It Works

 Cloud Browser sessions have an **attachment type** that indicates who is controlling the browser:

 | Attachment Type | Description | Human-in-Loop Available? |
|---|---|---|
| `None` | No operator attached (idle session) | Yes |
| `ScrapflyAgent` | Automated script or AI agent is controlling | No |
| `HumanAgent` | Human operator has manual control | Already connected |

  **Key Concept:** You can only take manual control of sessions with `attachment_type=None`. This prevents interference with running automations. 

## Accessing Human-in-the-Loop

1. **Navigate to Sessions Dashboard.** Go to your Cloud Browser sessions page:

     [View Sessions Dashboard](https://scrapfly.io/dashboard/cloud-browser/sessions)
2. **Find Available Sessions.** Look for sessions with **No Attachment** status. These are idle sessions available for manual control.
3. **Connect to Session.** Click the **"Connect"** button next to the session you want to control.
4. **Control the Browser.** A screencast view opens showing the browser in real time. You can:

    - Move the mouse cursor
    - Click on elements
    - Type text with your keyboard
    - Scroll the page
    - Navigate to different URLs
 
## Programmatic Session Management

 You can programmatically retrieve and manage running sessions using the Cloud Browser REST API. This is useful for building custom dashboards, automation workflows, or monitoring tools.

### List Running Sessions

 Retrieve all running browser sessions for your account:

 `GET https://browser.scrapfly.io/sessions?key=YOUR_API_KEY` 

 

 ```
import requests

API_KEY = ''

# List all running sessions
response = requests.get(
    f'https://browser.scrapfly.io/sessions?key={API_KEY}'
)
data = response.json()

print(f"Total running sessions: {data['total']}")

for session in data['sessions']:
    print(f"Session: {session['session_id']}")
    print(f"  Status: {session['attached_by'] or 'Available'}")
    print(f"  Runtime: {session['runtime_ms'] / 1000:.1f}s")
    print(f"  Timeout: {session['timeout']}s")
    print(f"  Proxy: {session['proxy_pool']}")
    print()
```

 

   

 

#### Response Format

The API returns a JSON object with the following structure:

 ```
{
  "total": 2,
  "sessions": [
    {
      "session_id": "my-scraper-session-123",
      "run_id": "01HQABCDEF123456789",
      "proxy_pool": "datacenter",
      "country": "US",
      "os": "windows",
      "start_time": "2024-01-15T10:30:00Z",
      "runtime_ms": 45000,
      "timeout": 900,
      "attached_by": "",
      "env": "production",
      "project": "my-project"
    },
    {
      "session_id": "debug-session-456",
      "run_id": "01HQXYZ789012345678",
      "proxy_pool": "residential",
      "country": "DE",
      "os": "macos",
      "start_time": "2024-01-15T10:35:00Z",
      "runtime_ms": 15000,
      "timeout": 1800,
      "attached_by": "scrapfly_agent",
      "attached_at": "2024-01-15T10:35:05Z"
    }
  ]
}
```

 

   

 

 | Field | Type | Description |
|---|---|---|
| `session_id` | string | The session ID you provided when connecting |
| `run_id` | string | Unique run ID for billing and tracking |
| `proxy_pool` | string | `datacenter` or `residential` |
| `country` | string | Proxy country code (e.g., "US", "DE") |
| `os` | string | Browser OS fingerprint (windows, macos, linux) |
| `start_time` | datetime | When the session started (ISO 8601) |
| `runtime_ms` | integer | Current runtime in milliseconds |
| `timeout` | integer | Session timeout in seconds |
| `attached_by` | string | Current operator: `""` (none), `scrapfly_agent`, or `human_agent` |
| `attached_at` | datetime | When the current operator attached (if any) |
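For instance, sessions that are open for Human-in-the-Loop can be filtered out of the response by checking for an empty `attached_by` (the `available_sessions` helper below is my own sketch, not part of the API):

```python
def available_sessions(data: dict) -> list[dict]:
    """Return sessions with no operator attached, i.e. open for Human-in-the-Loop."""
    return [s for s in data.get('sessions', []) if not s.get('attached_by')]

# Using the sample response above, only the first session qualifies:
sample = {
    'total': 2,
    'sessions': [
        {'session_id': 'my-scraper-session-123', 'attached_by': ''},
        {'session_id': 'debug-session-456', 'attached_by': 'scrapfly_agent'},
    ],
}
print([s['session_id'] for s in available_sessions(sample)])
# ['my-scraper-session-123']
```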

### Get Specific Session

 Retrieve details for a specific session by ID:

 `GET https://browser.scrapfly.io/session/{session_id}?key=YOUR_API_KEY` 

 

 ```
import requests

API_KEY = ''
SESSION_ID = 'my-scraper-session-123'

# Get specific session details
response = requests.get(
    f'https://browser.scrapfly.io/session/{SESSION_ID}?key={API_KEY}'
)

if response.status_code == 200:
    session = response.json()
    print(f"Session {SESSION_ID} is running")
    print(f"  Attached by: {session['attached_by'] or 'None (available)'}")
    print(f"  Remaining time: {session['timeout'] - session['runtime_ms']/1000:.0f}s")
elif response.status_code == 404:
    print(f"Session {SESSION_ID} not found or expired")
else:
    print(f"Error: {response.status_code}")
```

 

   

 

### Stop a Session

 Programmatically terminate a running session using its session ID:

 `POST https://browser.scrapfly.io/session/{session_id}/stop?key=YOUR_API_KEY` 

 

 ```
import requests

API_KEY = ''
SESSION_ID = 'my-scraper-session-123'

# Stop a session by its session ID
stop_response = requests.post(
    f'https://browser.scrapfly.io/session/{SESSION_ID}/stop?key={API_KEY}'
)

if stop_response.status_code == 200:
    print(f'Session {SESSION_ID} stopped successfully')
elif stop_response.status_code == 404:
    print(f'Session {SESSION_ID} not found or already stopped')
else:
    print(f'Error: {stop_response.status_code}')
```

 

   

 

### Workflow Example: Wait for Available Session

 Here's a complete example that waits for a session to become available before reconnecting:

 ```
import requests
import time
from playwright.sync_api import sync_playwright

API_KEY = ''
SESSION_ID = 'my-workflow-session'
API_BASE = 'https://browser.scrapfly.io'
WS_ENDPOINT = f'wss://browser.scrapfly.io?api_key={API_KEY}&session={SESSION_ID}&auto_close=false'

def get_session_status(session_id: str) -> dict | None:
    """Get session status from API."""
    response = requests.get(f'{API_BASE}/session/{session_id}?key={API_KEY}')
    if response.status_code == 200:
        return response.json()
    return None

def wait_for_session_available(session_id: str, timeout_seconds: int = 300) -> bool:
    """Wait until session has no attachment (available for HITL)."""
    start = time.time()
    while time.time() - start < timeout_seconds:
        session = get_session_status(session_id)
        if session is None:
            print(f'Session {session_id} not found')
            return False

        attached_by = session.get('attached_by', '')
        if not attached_by:  # Empty string means available
            print(f'Session {session_id} is now available')
            return True

        print(f'Session is attached to: {attached_by}, waiting...')
        time.sleep(5)

    print('Timeout waiting for session')
    return False

# Example workflow
with sync_playwright() as p:
    # Step 1: Start automation
    browser = p.chromium.connect_over_cdp(WS_ENDPOINT)
    page = browser.contexts[0].pages[0] if browser.contexts else browser.new_context().new_page()

    page.goto('https://example.com/checkout')

    # Step 2: Detect manual intervention needed
    if page.query_selector('.captcha-required'):
        print(f'CAPTCHA detected! Session ID: {SESSION_ID}')
        print('Please solve the CAPTCHA in the dashboard...')

        # Disconnect but keep session alive
        browser.close()

        # Step 3: Poll API until human finishes
        if wait_for_session_available(SESSION_ID, timeout_seconds=600):
            # Step 4: Reconnect and continue
            browser = p.chromium.connect_over_cdp(WS_ENDPOINT)
            page = browser.contexts[0].pages[0]

            # Continue automation
            page.click('#submit-order')
            print('Order submitted!')

        browser.close()
```

 

   

 

 **Tip:** Use the `attached_by` field to determine session availability:

- `""` (empty string) - Available for Human-in-the-Loop
- `scrapfly_agent` - Script is connected
- `human_agent` - Human operator is connected
 
 

## Common Use Cases

### Debugging Failed Scrapes

 When your automation script encounters an error, leave the session open for manual inspection:

 ```
const puppeteer = require('puppeteer-core');

const API_KEY = '';
const SESSION_ID = 'debug-' + Date.now();

async function scrapeWithDebug() {
    const browser = await puppeteer.connect({
        browserWSEndpoint: `wss://browser.scrapfly.io?api_key=${API_KEY}&session=${SESSION_ID}&auto_close=false`,
    });

    try {
        const page = await browser.newPage();
        await page.goto('https://web-scraping.dev');

        // Your scraping logic
        const data = await page.$eval('.data', el => el.textContent);

        await browser.close();
        return data;
    } catch (error) {
        console.error('Scraping failed:', error.message);
        console.log('Session ID for debugging:', SESSION_ID);
        console.log('Go to dashboard and connect to this session manually');

        // Disconnect but keep session alive for manual inspection
        await browser.disconnect();

        throw error;
    }
}

scrapeWithDebug();
```

 

   

 

After the script fails:

1. Go to the [Sessions Dashboard](https://scrapfly.io/dashboard/cloud-browser/sessions)
2. Find the session by ID (e.g., `debug-1234567890`)
3. Click **"Connect"** to take manual control
4. Investigate what went wrong by inspecting the page state
 
### CAPTCHA Solving

 Pause automation when a CAPTCHA appears, let a human solve it, then resume:

 ```
const puppeteer = require('puppeteer-core');

const SESSION_ID = 'captcha-session';
const BROWSER_WS = `wss://browser.scrapfly.io?api_key=&session=${SESSION_ID}&auto_close=false`;

async function loginWithCaptcha() {
    const browser = await puppeteer.connect({ browserWSEndpoint: BROWSER_WS });
    const page = await browser.newPage();

    await page.goto('https://example.com/login');
    await page.type('#username', 'user@example.com');
    await page.type('#password', 'secret');

    // Check if CAPTCHA appeared
    const hasCaptcha = await page.$('.captcha-widget');

    if (hasCaptcha) {
        console.log('CAPTCHA detected!');
        console.log('Session ID:', SESSION_ID);
        console.log('Please solve the CAPTCHA manually in the dashboard');

        // Disconnect but keep browser alive
        await browser.disconnect();

        // Wait for manual CAPTCHA solving (could also wait for a webhook/signal)
        console.log('Waiting for CAPTCHA to be solved...');
        await new Promise(resolve => setTimeout(resolve, 60000)); // Wait 1 minute

        // Reconnect to same session
        const browser2 = await puppeteer.connect({ browserWSEndpoint: BROWSER_WS });
        const page2 = (await browser2.pages())[0];

        // Continue automation
        await page2.click('#login-btn');
        await page2.waitForNavigation();

        console.log('Login successful!');
        await browser2.close();
    } else {
        await page.click('#login-btn');
        await page.waitForNavigation();
        await browser.close();
    }
}

loginWithCaptcha();
```

 

   

 

  **Note:** This is a basic polling approach. For production use, consider implementing a webhook or notification system to signal when manual intervention is complete. 

### Manual Verification

 For workflows requiring human verification before proceeding:

 ```
async function scrapeWithVerification() {
    const browser = await puppeteer.connect({
        browserWSEndpoint: `wss://browser.scrapfly.io?api_key=&session=verify-session&auto_close=false`,
    });

    const page = await browser.newPage();

    // Step 1: Navigate to form
    await page.goto('https://example.com/form');
    await page.type('#field1', 'Auto-filled value');
    await page.type('#field2', 'Another value');

    console.log('Form pre-filled. Please verify in dashboard before submission.');

    // Disconnect, let human verify
    await browser.disconnect();

    // Wait for human to click "Verified" button in your app/dashboard
    await waitForVerification(); // Your custom verification logic

    // Reconnect and submit
    const browser2 = await puppeteer.connect({
        browserWSEndpoint: `wss://browser.scrapfly.io?api_key=&session=verify-session&auto_close=false`,
    });

    const page2 = (await browser2.pages())[0];
    await page2.click('#submit-btn');
    await page2.waitForNavigation();

    console.log('Form submitted successfully');
    await browser2.close();
}

scrapeWithVerification();
```

 

   

 

## Dashboard Features

### Screencast View

 When connected to a session, you see a live screencast of the browser:

- **Real-time rendering** - See exactly what the browser sees
- **Mouse control** - Click anywhere on the canvas to interact
- **Keyboard input** - Type directly into form fields
- **Scrolling** - Use mouse wheel or trackpad to scroll
 
### Session Information

 The dashboard displays:

- **Session ID** - Unique identifier for reconnection
- **Status** - Active, idle, or terminated
- **Duration** - How long the session has been running
- **Attachment Type** - Current operator (None, ScrapflyAgent, HumanAgent)
- **Bandwidth Usage** - Data consumed by the session
- **Cost** - Current session cost (time + bandwidth)
 
### Session Controls

 While connected, you can:

- **Disconnect** - Release manual control (session continues running)
- **Terminate** - Close the browser and end the session
 
## Attachment Type Lifecycle

 Session attachment types change dynamically based on who is controlling the browser:

##### None (Idle)

Session is running but no operator is attached. **Available for Human-in-the-Loop.**

##### ScrapflyAgent

An automated script or AI agent is actively controlling the browser. **Not available.**

##### HumanAgent

A human operator has manual control via the dashboard. **Already connected.**

 **Example lifecycle:**

1. Script connects → `attachment_type=ScrapflyAgent`
2. Script calls `browser.disconnect()` → `attachment_type=None`
3. Human connects via dashboard → `attachment_type=HumanAgent`
4. Human disconnects → `attachment_type=None`
5. Script reconnects → `attachment_type=ScrapflyAgent`
6. Script calls `browser.close()` → Session terminated
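The API's `attached_by` strings map onto these dashboard attachment types; a minimal mapping sketch, using the field values documented in the response reference above:

```python
# Maps the API's attached_by value to the dashboard's attachment type label.
ATTACHMENT_LABELS = {
    '': 'None',                       # idle: available for Human-in-the-Loop
    'scrapfly_agent': 'ScrapflyAgent',
    'human_agent': 'HumanAgent',
}

def attachment_type(attached_by: str) -> str:
    """Translate an attached_by value into its dashboard label."""
    return ATTACHMENT_LABELS.get(attached_by, attached_by)

def hitl_available(attached_by: str) -> bool:
    """Manual control is only possible when no operator is attached."""
    return attached_by == ''

print(attachment_type('scrapfly_agent'), hitl_available(''))
# ScrapflyAgent True
```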
 
## Best Practices

### Use Descriptive Session IDs

 Make it easy to identify sessions in the dashboard:

 ```
// GOOD: Descriptive session IDs
const debugSessionId = `debug-login-${Date.now()}`;
const captchaSessionId = `captcha-checkout-${userId}`;
const verifySessionId = `verify-form-${orderId}`;

// AVOID: Generic session IDs
const genericId = 'session1';
const randomId = Math.random().toString();
```

 

   

 

### Log Session Information

 Always log session IDs when manual intervention may be needed:

 ```
const logger = require('winston'); // or your preferred logger

async function scrape() {
    const sessionId = `scrape-${Date.now()}`;

    try {
        const browser = await puppeteer.connect({
            browserWSEndpoint: buildWSUrl(sessionId)
        });

        logger.info('Browser session started', {
            sessionId,
            dashboardUrl: 'https://scrapfly.io/dashboard/cloud-browser/sessions'
        });

        // ... scraping logic ...

        await browser.close();
    } catch (error) {
        logger.error('Scraping failed - manual inspection available', {
            sessionId,
            error: error.message,
            dashboardUrl: `https://scrapfly.io/dashboard/cloud-browser/sessions?session=${sessionId}`
        });
        throw error;
    }
}
```

 

   

 

### Set Reasonable Timeouts

 When waiting for manual intervention, use timeouts to prevent runaway sessions:

 ```
async function waitForManualIntervention(sessionId, maxWaitMs = 300000) {
    const startTime = Date.now();

    while (Date.now() - startTime < maxWaitMs) {
        // Check if manual intervention is complete (your custom logic)
        const isComplete = await checkInterventionComplete(sessionId);

        if (isComplete) {
            console.log('Manual intervention complete, resuming automation');
            return true;
        }

        await new Promise(resolve => setTimeout(resolve, 5000)); // Poll every 5 seconds
    }

    throw new Error('Manual intervention timeout - session may need to be terminated');
}

async function scrapeWithTimeout() {
    const sessionId = 'timeout-example';

    const browser = await puppeteer.connect({
        browserWSEndpoint: `wss://browser.scrapfly.io?api_key=&session=${sessionId}&auto_close=false&timeout=1800`
    });

    // ... automation detects CAPTCHA ...

    await browser.disconnect();

    try {
        await waitForManualIntervention(sessionId, 600000); // 10 minute max wait
        // Resume automation
    } catch (error) {
        console.error(error.message);
        // Terminate session via API or let it timeout
    }
}
```

 

   

 

## Troubleshooting

##### Cannot Connect to Session

 

**Cause:** Session is already attached to an operator (ScrapflyAgent or HumanAgent).

**Solution:**

- Wait for the current operator to disconnect (check attachment type in dashboard)
- If your script is still running, it must call `browser.disconnect()` first
- Ensure the session hasn't timed out or been terminated
 
 

 

 

##### Screencast Not Updating

 

**Cause:** Network connectivity issues or browser frozen.

**Solution:**

- Refresh the dashboard page
- Check your internet connection
- If browser is frozen, you may need to terminate the session and start fresh
 
 

 

 

##### Keyboard Input Not Working

 

**Cause:** Input element not focused or screencast canvas doesn't have focus.

**Solution:**

- Click on the text field in the screencast to focus it
- Ensure the screencast canvas has focus (click on it)
- Try clicking the element again if keyboard input doesn't register
 
 

 

 

##### Session Lost After Disconnect

 

**Cause:** `auto_close=true` (default) was used, terminating the session on disconnect.

**Solution:**

- Always use `auto_close=false` when you need to reconnect later
- See [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume) documentation for details
 
 ```
const BROWSER_WS = `wss://browser.scrapfly.io?api_key=&session=my-session&auto_close=false`;
```

## Related Documentation

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started) - Introduction to Cloud Browser API
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume) - Reconnect to browser sessions programmatically
- [Session Dashboard](https://scrapfly.io/dashboard/cloud-browser/sessions) - View and manage active sessions
- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing) - Understand session costs
- [Error Reference](https://scrapfly.io/docs/cloud-browser-api/errors) - Troubleshoot common errors