# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)
##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification](https://scrapfly.io/docs/scrape-api/specification)
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification](https://scrapfly.io/docs/crawler-api/specification)
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification](https://scrapfly.io/docs/screenshot-api/specification)
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification](https://scrapfly.io/docs/extraction-api/specification)
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)
##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)
- [Vibium](https://scrapfly.io/docs/cloud-browser-api/vibium)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Debug Mode (Session Recording)

  **Debug Mode** records your Cloud Browser sessions as videos, allowing you to replay and analyze browser behavior after the fact. Perfect for debugging failed scrapes, understanding page interactions, and validating automation logic.

## How It Works

 When you connect to Cloud Browser with `debug=true`, the session's screencast is recorded as a video:

1. Connect with the `debug=true` parameter in the WebSocket URL
2. All browser interactions are recorded in real-time as video frames
3. When the session ends, the video is uploaded to Google Cloud Storage (GCS)
4. A replay URL is made available in session metadata and annotations
5. Videos are automatically deleted when the session log expires (based on your plan's retention period)
 
  **Storage:** Session recordings are stored securely and automatically cleaned up according to your account's log retention policy. See [Monitoring Documentation](https://scrapfly.io/docs/monitoring#log-retention) for retention details. 

## Enabling Debug Mode

 Add `debug=true` to your Cloud Browser WebSocket URL:

 ```
wss://browser.scrapfly.io?api_key=YOUR_API_KEY&debug=true
```
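If you assemble the endpoint in code, the standard `URL` API keeps the query string properly encoded. A minimal sketch using the `api_key`, `debug`, and `session` parameters documented on this page (`buildDebugWsUrl` is just an illustrative helper, not part of any SDK):

```
// Build a Cloud Browser WebSocket URL with debug mode enabled
function buildDebugWsUrl(apiKey, sessionId) {
    const url = new URL('wss://browser.scrapfly.io');
    url.searchParams.set('api_key', apiKey);
    url.searchParams.set('debug', 'true');
    if (sessionId) url.searchParams.set('session', sessionId);
    return url.toString();
}

console.log(buildDebugWsUrl('YOUR_API_KEY', 'debug-session'));
```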


### Puppeteer Example

 ```
const puppeteer = require('puppeteer-core');

const API_KEY = '';
const BROWSER_WS = `wss://browser.scrapfly.io?api_key=${API_KEY}&debug=true&session=debug-session`;

async function scrapeWithRecording() {
    const browser = await puppeteer.connect({
        browserWSEndpoint: BROWSER_WS,
    });

    const page = await browser.newPage();

    try {
        await page.goto('https://web-scraping.dev');
        await page.click('.some-button');
        await page.waitForSelector('.result');

        const result = await page.$eval('.result', el => el.textContent);
        console.log('Result:', result);

        await browser.close();
        console.log('Session recorded! Check dashboard for video replay link.');
    } catch (error) {
        console.error('Error occurred:', error.message);
        await browser.close();
        console.log('Session video will show what happened before the error.');
    }
}

scrapeWithRecording();
```


### Playwright Example

 ```
const { chromium } = require('playwright');

const API_KEY = '';
const BROWSER_WS = `wss://browser.scrapfly.io?api_key=${API_KEY}&debug=true&session=playwright-debug`;

async function scrapeWithRecording() {
    const browser = await chromium.connectOverCDP(BROWSER_WS);
    const context = browser.contexts()[0];
    const page = await context.newPage();

    try {
        await page.goto('https://web-scraping.dev');
        await page.click('.navigation-link');
        await page.waitForLoadState('networkidle');

        const data = await page.textContent('.content');
        console.log('Data:', data);

        await browser.close();
    } catch (error) {
        console.error('Error:', error.message);
        await browser.close();
        console.log('Session recorded - check dashboard for replay');
    }
}

scrapeWithRecording();
```


## Accessing Session Recordings

 Once a session with `debug=true` ends, the recording is available in multiple places:

### Via Dashboard

1. **Navigate to the Sessions Dashboard.** Go to your Cloud Browser sessions:

    [View Sessions Dashboard](https://scrapfly.io/dashboard/cloud-browser/sessions)

2. **Find your session.** Locate the session by ID or timestamp.
3. **Click "Replay" or "View Recording".** A video player will open showing the full session screencast.
### Via API

 Retrieve the playback info and a signed video URL for a debug recording using the playback endpoint:

 `GET https://browser.scrapfly.io/run/{run_id}/playback?key=YOUR_API_KEY` 

 

  **Authentication:** Uses the same API key as your WebSocket connection. Pass it as a `key` query parameter or via the `Authorization: Bearer` header. 

 The `run_id` is a unique identifier for each browser session. You can obtain it from:

- The **Sessions Dashboard** — visible in the session details
- The [running sessions API](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop#list-sessions) endpoint
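Both authentication styles can be captured in a small request builder; a sketch under the auth options described above (`buildPlaybackRequest` is an illustrative helper, not part of any SDK):

```
// Build the playback request for a run_id, using either the `key`
// query parameter or the Authorization: Bearer header.
function buildPlaybackRequest(runId, apiKey, { useHeader = false } = {}) {
    const url = new URL(`https://browser.scrapfly.io/run/${runId}/playback`);
    const headers = {};
    if (useHeader) {
        headers['Authorization'] = `Bearer ${apiKey}`;
    } else {
        url.searchParams.set('key', apiKey);
    }
    return { url: url.toString(), headers };
}

// Usage: const { url, headers } = buildPlaybackRequest('01HQ...', API_KEY);
//        const playback = await (await fetch(url, { headers })).json();
```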
 
#### Response (Recording Available)

 ```
{
  "run_id": "01HQABCDEF123456789",
  "available": true,
  "metadata": {
    "run_id": "01HQABCDEF123456789",
    "frame_count": 450,
    "video_file": "recording.webm",
    "width": 1920,
    "height": 1080,
    "start_time": "2026-02-12T10:00:00Z",
    "end_time": "2026-02-12T10:01:30Z",
    "duration_ms": 90000
  },
  "video_url": "https://storage.googleapis.com/...signed-url-valid-1-hour..."
}
```


#### Response (Recording Not Available)

 ```
{
  "run_id": "01HQABCDEF123456789",
  "available": false,
  "error": "Recording not found or not yet available"
}
```


#### Response Fields

 | Field | Type | Description |
|---|---|---|
| `run_id` | string | The unique run identifier |
| `available` | boolean | `true` if the recording is ready for playback |
| `metadata` | object | Recording metadata (only present when `available` is `true`) |
| `metadata.frame_count` | integer | Number of video frames captured |
| `metadata.width` / `height` | integer | Video resolution in pixels |
| `metadata.duration_ms` | integer | Recording duration in milliseconds |
| `metadata.start_time` / `end_time` | string | ISO 8601 timestamps for the recording window |
| `video_url` | string | Signed URL to the WebM video file (valid for 1 hour, refreshable) |
| `error` | string | Error message (only present when `available` is `false`) |

  **Signed URL:** The `video_url` is a signed Google Cloud Storage URL valid for **1 hour**. If the URL expires, simply call the playback endpoint again to get a fresh URL. The video itself is available until the session log expires per your plan's retention policy. 
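The metadata fields are enough to derive the effective frame rate as a sanity check: the sample response above captures 450 frames over 90,000 ms, which works out to 5 FPS. A small illustrative helper (`summarizeRecording` is not part of any SDK):

```
// Derive duration, effective frame rate, and resolution from playback metadata
function summarizeRecording(metadata) {
    const seconds = metadata.duration_ms / 1000;
    return {
        seconds,
        fps: metadata.frame_count / seconds,
        resolution: `${metadata.width}x${metadata.height}`,
    };
}

// The sample response above: 450 frames / 90 s = 5 FPS at 1920x1080
console.log(summarizeRecording({
    frame_count: 450, duration_ms: 90000, width: 1920, height: 1080,
}));
```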

## Retrieving Recordings via API

 After a debug session ends, you can programmatically retrieve the playback info and download the video recording. The recording typically becomes available within a few seconds of the session ending.

### cURL

 Get playback info and download the video:

 ```
# Get playback info for a debug recording
curl -s 'https://browser.scrapfly.io/run/YOUR_RUN_ID/playback?key=' | python3 -m json.tool

# Download the video file using the signed URL from the response
curl -s 'https://browser.scrapfly.io/run/YOUR_RUN_ID/playback?key=' \
  | python3 -c "import sys,json; print(json.load(sys.stdin)['video_url'])" \
  | xargs curl -o recording.webm
```

### Python

 ```
import requests

API_KEY = ''
RUN_ID = 'YOUR_RUN_ID'  # From sessions dashboard or running sessions API

# Step 1: Get playback info
response = requests.get(
    f'https://browser.scrapfly.io/run/{RUN_ID}/playback',
    params={'key': API_KEY}
)
playback = response.json()

if playback['available']:
    print(f"Duration: {playback['metadata']['duration_ms']}ms")
    print(f"Resolution: {playback['metadata']['width']}x{playback['metadata']['height']}")

    # Step 2: Download the video
    video = requests.get(playback['video_url'])
    with open('recording.webm', 'wb') as f:
        f.write(video.content)
    print(f'Saved recording.webm ({len(video.content)} bytes)')
else:
    print(f"Not available: {playback.get('error')}")
```

### Node.js

 ```
const fs = require('fs');

const API_KEY = '';
const RUN_ID = 'YOUR_RUN_ID'; // From sessions dashboard or running sessions API

async function downloadRecording() {
    // Step 1: Get playback info
    const response = await fetch(
        `https://browser.scrapfly.io/run/${RUN_ID}/playback?key=${API_KEY}`
    );
    const playback = await response.json();

    if (playback.available) {
        console.log(`Duration: ${playback.metadata.duration_ms}ms`);
        console.log(`Resolution: ${playback.metadata.width}x${playback.metadata.height}`);

        // Step 2: Download the video
        const videoResponse = await fetch(playback.video_url);
        const buffer = Buffer.from(await videoResponse.arrayBuffer());
        fs.writeFileSync('recording.webm', buffer);
        console.log(`Saved recording.webm (${buffer.length} bytes)`);
    } else {
        console.log(`Not available: ${playback.error}`);
    }
}

downloadRecording();
```

### Full Workflow

Complete end-to-end: connect with debug mode, scrape, then download the recording.

 ```
const puppeteer = require('puppeteer-core');
const fs = require('fs');

const API_KEY = '';
const BROWSER_WS = `wss://browser.scrapfly.io?api_key=${API_KEY}&debug=true&session=my-debug-session`;

async function scrapeAndDownloadRecording() {
    // Step 1: Connect with debug mode and scrape
    const browser = await puppeteer.connect({ browserWSEndpoint: BROWSER_WS });
    const page = await browser.newPage();

    await page.goto('https://web-scraping.dev');
    const title = await page.title();
    console.log('Scraped title:', title);

    await browser.close();
    console.log('Session ended, waiting for recording...');

    // Step 2: Get the run_id from the sessions list
    // (you can also capture it from the session dashboard)
    const sessionsRes = await fetch(
        `https://browser.scrapfly.io/sessions?key=${API_KEY}`
    );
    const { sessions } = await sessionsRes.json();
    const runId = sessions[0]?.run_id;

    if (!runId) {
        console.log('No session found');
        return;
    }

    // Step 3: Poll until recording is available
    for (let i = 0; i < 10; i++) {
        await new Promise(r => setTimeout(r, 3000));

        const res = await fetch(
            `https://browser.scrapfly.io/run/${runId}/playback?key=${API_KEY}`
        );
        const playback = await res.json();

        if (playback.available) {
            // Step 4: Download the video
            const video = await fetch(playback.video_url);
            const buffer = Buffer.from(await video.arrayBuffer());
            fs.writeFileSync('debug-recording.webm', buffer);
            console.log(`Saved debug-recording.webm (${buffer.length} bytes)`);
            console.log(`Duration: ${playback.metadata.duration_ms}ms`);
            return;
        }
        console.log(`Attempt ${i + 1}: not yet available, retrying...`);
    }
    console.log('Recording not available after retries');
}

scrapeAndDownloadRecording();
```


  **Timing:** Recordings are uploaded after the session ends and may take a few seconds to become available. If the API returns `"available": false`, wait a few seconds and try again. 
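The wait-and-retry pattern in the full workflow above can be factored into a small reusable utility; a sketch (`pollUntil` is a generic helper, not part of the API):

```
// Generic helper: retry an async check until it returns a truthy value
async function pollUntil(check, { attempts = 10, delayMs = 3000 } = {}) {
    for (let i = 0; i < attempts; i++) {
        const result = await check();
        if (result) return result;
        await new Promise(resolve => setTimeout(resolve, delayMs));
    }
    throw new Error(`Not ready after ${attempts} attempts`);
}

// Usage sketch, assuming runId and API_KEY as in the examples above:
// const playback = await pollUntil(async () => {
//     const res = await fetch(`https://browser.scrapfly.io/run/${runId}/playback?key=${API_KEY}`);
//     const body = await res.json();
//     return body.available ? body : null;
// });
```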

## Common Use Cases

### Debugging Failed Scrapes

 When automation fails, the video shows exactly what happened:

 ```
const SESSION_ID = `debug-failure-${Date.now()}`;

async function debugFailedScrape() {
    const browser = await puppeteer.connect({
        browserWSEndpoint: `wss://browser.scrapfly.io?api_key=&debug=true&session=${SESSION_ID}&auto_close=false`,
    });

    const page = await browser.newPage();

    try {
        await page.goto('https://web-scraping.dev');

        // Wait for an element that might not exist, then click it
        // (Puppeteer's page.click does not accept a timeout option)
        await page.waitForSelector('.dynamic-element', { timeout: 5000 });
        await page.click('.dynamic-element');

        await browser.close();
    } catch (error) {
        console.error('Failed:', error.message);
        console.log('Session ID:', SESSION_ID);
        console.log('Watch the recording to see what went wrong!');

        // Keep session alive for Human-in-Loop if needed
        await browser.disconnect();
    }
}

debugFailedScrape();
```


### Validating Page Behavior

 Understand how pages respond to your automation:

 ```
async function validatePageBehavior() {
    const browser = await puppeteer.connect({
        browserWSEndpoint: `wss://browser.scrapfly.io?api_key=&debug=true`,
    });

    const page = await browser.newPage();

    // Test different interaction patterns
    await page.goto('https://web-scraping.dev');

    // Pause between steps so each state is visible in the recording
    // (page.waitForTimeout was removed in recent Puppeteer versions)
    const pause = ms => new Promise(r => setTimeout(r, ms));

    console.log('Testing hover behavior...');
    await page.hover('.menu-item');
    await pause(1000);

    console.log('Testing click behavior...');
    await page.click('.submenu-toggle');
    await pause(1000);

    console.log('Testing form input...');
    await page.type('#search-input', 'test query');
    await pause(1000);

    await browser.close();

    console.log('Recording available - review to validate all interactions');
}

validatePageBehavior();
```


### Understanding Timing Issues

 Diagnose race conditions and timing problems by replaying the exact sequence of events:

 ```
async function diagnoseTimingIssue() {
    const browser = await puppeteer.connect({
        browserWSEndpoint: `wss://browser.scrapfly.io?api_key=&debug=true`,
    });

    const page = await browser.newPage();

    await page.goto('https://web-scraping.dev');

    // Click button that triggers async loading
    await page.click('#load-data-btn');

    try {
        // Wait for element that might load slowly
        await page.waitForSelector('.data-loaded', { timeout: 3000 });
        console.log('Data loaded successfully');
    } catch (error) {
        console.log('Timeout waiting for data');
        console.log('Video will show how long the page actually took');
    }

    await browser.close();
}

diagnoseTimingIssue();
```


### QA and Testing

 Create a visual record of test runs for quality assurance:

 ```
async function runQATest(testName) {
    const sessionId = `qa-${testName}-${Date.now()}`;

    const browser = await puppeteer.connect({
        browserWSEndpoint: `wss://browser.scrapfly.io?api_key=&debug=true&session=${sessionId}`,
    });

    const page = await browser.newPage();

    console.log(`Running QA test: ${testName}`);
    console.log(`Session ID: ${sessionId}`);

    try {
        // Run test steps
        await page.goto('https://web-scraping.dev');
        await page.click('#test-button');
        await page.waitForSelector('.test-result');

        const result = await page.$eval('.test-result', el => el.textContent);

        console.log(`Test result: ${result}`);
        console.log(`✓ Test passed - recording available for review`);

        await browser.close();
        return true;
    } catch (error) {
        console.error(`✗ Test failed: ${error.message}`);
        console.log(`Recording available at session: ${sessionId}`);

        await browser.close();
        return false;
    }
}

// Run multiple tests with recordings
(async () => {
    await runQATest('login-flow');
    await runQATest('checkout-process');
    await runQATest('search-functionality');
})();
```


## Recording Details

### Video Format

 | Property | Value |
|---|---|
| **Format** | WebM (VP9 codec) |
| **Resolution** | Matches browser viewport (typically 1920x1080) |
| **Frame Rate** | 5 FPS |
| **Storage** | Google Cloud Storage (GCS) |

### Retention Period

 Session recordings are retained according to your account's log retention policy:

  **Retention:** Recordings are automatically deleted when the associated session log expires. Retention periods vary by plan. See [Monitoring Documentation](https://scrapfly.io/docs/monitoring#log-retention) for details. 

### Storage & Performance

 Debug mode has minimal performance impact:

- **Overhead:** Negligible latency added to browser operations
- **Bandwidth:** Video upload happens after session ends (no impact on scraping)
- **File Size:** Typically 1-5 MB per minute of recording (varies by page complexity)
 
## Best Practices

### Use Debug Mode Selectively

 Enable debug mode only when needed to avoid unnecessary storage costs:

 ```
// Development/testing: Enable debug mode
const DEBUG_MODE = process.env.NODE_ENV === 'development';
const BROWSER_WS = `wss://browser.scrapfly.io?api_key=${API_KEY}&debug=${DEBUG_MODE}`;

// Or conditionally enable on errors
async function scrapeWithConditionalDebug(url) {
    let debugEnabled = false;

    try {
        // Try without debug first
        const browser = await puppeteer.connect({
            browserWSEndpoint: `wss://browser.scrapfly.io?api_key=${API_KEY}`,
        });
        // ... scraping logic ...
        await browser.close();
    } catch (error) {
        console.log('Scraping failed, retrying with debug mode...');
        debugEnabled = true;

        // Retry with debug enabled
        const browser = await puppeteer.connect({
            browserWSEndpoint: `wss://browser.scrapfly.io?api_key=${API_KEY}&debug=true`,
        });
        // ... scraping logic ...
        await browser.close();
    }
}

scrapeWithConditionalDebug('https://web-scraping.dev');
```


### Use Descriptive Session IDs

 Make recordings easy to find and identify:

 ```
// GOOD: Descriptive session IDs
const checkoutSession = `debug-checkout-flow-${Date.now()}`;
const qaSession = `qa-login-test-${testRunId}`;
const issueSession = 'reproduce-issue-123';

// AVOID: Generic session IDs
const genericSession = 'debug';                 // impossible to tell runs apart
const randomSession = Math.random().toString(); // meaningless when searching
```
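One way to keep IDs consistent across a team is a tiny helper that enforces a descriptive prefix; `makeSessionId` below is a hypothetical utility, not part of the Scrapfly API:

```
// Hypothetical helper: build a session ID from a descriptive prefix
// plus a timestamp for uniqueness
function makeSessionId(prefix) {
    if (!/^[a-z0-9][a-z0-9-]{2,}$/i.test(prefix)) {
        throw new Error('Use a descriptive prefix, e.g. "qa-login-test"');
    }
    return `${prefix}-${Date.now()}`;
}

// e.g. makeSessionId('debug-checkout-flow') -> 'debug-checkout-flow-<timestamp>'
```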


### Combine with Other Features

 Debug mode works seamlessly with other Cloud Browser features:

 ```
// Debug mode + Session Resume + Human-in-the-Loop
const browser = await puppeteer.connect({
    browserWSEndpoint: `wss://browser.scrapfly.io?api_key=&debug=true&session=debug-session&auto_close=false`,
});

try {
    // ... automation logic ...
} catch (error) {
    console.log('Error occurred - session recorded and available for:');
    console.log('1. Video replay (debug mode)');
    console.log('2. Manual inspection (Human-in-the-Loop)');
    console.log('3. Reconnection (Session Resume)');

    await browser.disconnect(); // Keep session alive
}

// Later: Reconnect to same session or take manual control
// Recording will show the entire history
```


## Troubleshooting

##### Recording Not Available

**Cause:** Session may still be processing, or `debug=true` was not set.

**Solution:**

- Wait a few moments for video upload and processing to complete
- Verify `debug=true` was in the WebSocket URL
- Check dashboard for recording status
- Ensure session completed successfully (not terminated mid-upload)

##### Video Playback Issues

**Cause:** Browser codec compatibility or network issues.

**Solution:**

- Ensure your browser supports WebM/VP9 (Chrome, Firefox, Edge recommended)
- Try downloading the video file instead of streaming
- Check your internet connection
- Clear browser cache and retry

##### Recording Deleted Too Soon

**Cause:** Recordings are tied to session log retention policy.

**Solution:**

- Download important recordings immediately after session completion
- Review your plan's log retention period at [Monitoring Documentation](https://scrapfly.io/docs/monitoring#log-retention)
- Consider upgrading your plan for longer retention if needed
 
 **Tip:** Export critical recordings to your own storage for permanent archiving.


## Related Documentation

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started) - Introduction to Cloud Browser API
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume) - Reconnect to browser sessions
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop) - Manual browser control for debugging
- [Session Dashboard](https://scrapfly.io/dashboard/cloud-browser/sessions) - View sessions and recordings
- [Log Retention](https://scrapfly.io/docs/monitoring#log-retention) - Understand retention policies
- [Error Reference](https://scrapfly.io/docs/cloud-browser-api/errors) - Troubleshoot common errors