# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)

##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)

##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [Rust](https://scrapfly.io/docs/sdk/rust)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Cloud Browser File Downloads


  **Beta Feature:** Cloud Browser is currently in beta and available to staff members only. 

 Cloud Browser supports automatic file download handling through a custom **CDP extension**. When the browser triggers a file download (PDFs, images, spreadsheets, etc.), you can retrieve the downloaded files using special CDP commands.

## How It Works

 When a navigation or user action triggers a file download in the browser:

1. The browser intercepts the download request
2. The file is saved to a temporary directory on the remote browser
3. Download events are emitted so you can track progress
4. You can retrieve the file content using the `ScrapiumBrowser.getDownloads` command
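
In practice these steps collapse into a few calls. Here is a condensed Puppeteer sketch of the flow (fully runnable variants for each client follow in the Code Examples section below):

```
const puppeteer = require('puppeteer-core');

// Condensed flow sketch: browserWS is your Cloud Browser endpoint
// (see Getting Started for how to build it).
async function fetchDownloadedFiles(browserWS) {
    const browser = await puppeteer.connect({ browserWSEndpoint: browserWS });
    const page = await browser.newPage();
    const client = await page.createCDPSession();

    // Steps 1-2: the click triggers a download, intercepted and saved remotely
    await page.goto('https://web-scraping.dev/file-download');
    await page.click('#download-btn');

    // Step 3: give the download time to finish (or watch the CDP events below)
    await new Promise(resolve => setTimeout(resolve, 3000));

    // Step 4: fetch the files as a { filename: base64 } map
    const { files } = await client.send('ScrapiumBrowser.getDownloads');
    await browser.close();
    return files;
}
```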
 
##### Key Points

- Files are returned as **base64-encoded strings**
- Maximum download size is **25 MB** per file
- Downloads are automatically cleaned up when the session ends
- **Multiple file downloads** are fully supported in a single session
- Browser download popups and confirmations are **automatically bypassed** for seamless automation

  **Automation-Friendly:** Cloud Browser automatically disables browser download protection mechanisms (such as multiple download confirmation popups) that would normally interrupt automation scripts. Your downloads will proceed without any user interaction required. 

## CDP Commands

 Cloud Browser extends the standard CDP protocol with custom `ScrapiumBrowser` commands for file download handling:

 | Command | Description |
|---|---|
| `ScrapiumBrowser.getDownloads` | Retrieve all downloaded files as base64-encoded content. Optionally delete files after retrieval. |
| `ScrapiumBrowser.getDownloadsMetadatas` | Get metadata (filename and size) for all downloaded files without retrieving content. |

 

### ScrapiumBrowser.getDownloads

Retrieves all files currently in the download directory, encoded as base64 strings. No parameters are required.

#### Response

 ```
{
  "files": {
    "document.pdf": "JVBERi0xLjQKJe...",
    "report.xlsx": "UEsDBBQAAAA..."
  }
}
```

### ScrapiumBrowser.getDownloadsMetadatas

 Retrieves metadata for all downloaded files without transferring the actual content. Useful for checking what files are available before downloading.

#### Response

 ```
{
  "metadata": {
    "document.pdf": 245760,
    "report.xlsx": 102400
  }
}
```
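
For example, listing what is ready before deciding to transfer anything might look like this (a sketch; `client` is a Puppeteer CDP session created as in the Code Examples below):

```
// List downloaded files and their sizes without transferring the content
async function listDownloads(client) {
    const { metadata } = await client.send('ScrapiumBrowser.getDownloadsMetadatas');
    for (const [filename, size] of Object.entries(metadata)) {
        console.log(`${filename}: ${size} bytes`);
    }
    return metadata;
}
```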


## Code Examples

Download a file after clicking a download button:

##### Puppeteer

  ```
const puppeteer = require('puppeteer-core');

const API_KEY = '';
const BROWSER_WS = `wss://browser.scrapfly.io?api_key=${API_KEY}&proxy_pool=public_datacenter_pool`;

async function downloadFile() {
    const browser = await puppeteer.connect({ browserWSEndpoint: BROWSER_WS });
    const page = await browser.newPage();

    // Navigate and trigger download
    await page.goto('https://web-scraping.dev/file-download');
    await page.click('#download-btn');
    await new Promise(resolve => setTimeout(resolve, 3000));

    // Get the CDP session and retrieve downloads
    const client = await page.createCDPSession();
    const result = await client.send('ScrapiumBrowser.getDownloads');

    for (const [filename, base64Content] of Object.entries(result.files)) {
        const buffer = Buffer.from(base64Content, 'base64');
        require('fs').writeFileSync(filename, buffer);
        console.log(`Saved: ${filename} (${buffer.length} bytes)`);
    }
    await browser.close();
}

downloadFile();
```

##### Playwright JS

 ```
const { chromium } = require('playwright');

const API_KEY = '';
const BROWSER_WS = `wss://browser.scrapfly.io?api_key=${API_KEY}&proxy_pool=public_datacenter_pool`;

async function downloadFile() {
    const browser = await chromium.connectOverCDP(BROWSER_WS);
    const context = browser.contexts()[0];
    const page = context.pages()[0] || await context.newPage();
    const cdpSession = await page.context().newCDPSession(page);

    // Navigate and trigger download
    await page.goto('https://web-scraping.dev/file-download');
    await page.click('#download-btn');
    await page.waitForTimeout(3000);

    // Retrieve and save files
    const downloads = await cdpSession.send('ScrapiumBrowser.getDownloads');
    for (const [filename, base64Content] of Object.entries(downloads.files)) {
        const buffer = Buffer.from(base64Content, 'base64');
        require('fs').writeFileSync(filename, buffer);
        console.log(`Saved ${filename} (${buffer.length} bytes)`);
    }
    await browser.close();
}

downloadFile();
```

##### Python

 ```
import base64
from playwright.sync_api import sync_playwright

API_KEY = ''
BROWSER_WS = f'wss://browser.scrapfly.io?api_key={API_KEY}&proxy_pool=public_datacenter_pool'

def download_file():
    with sync_playwright() as p:
        browser = p.chromium.connect_over_cdp(BROWSER_WS)
        context = browser.contexts[0]
        page = context.pages[0] if context.pages else context.new_page()
        cdp_session = context.new_cdp_session(page)

        # Navigate and trigger download
        page.goto('https://web-scraping.dev/file-download')
        page.click('#download-btn')
        page.wait_for_timeout(3000)

        # Retrieve and save files
        downloads = cdp_session.send('ScrapiumBrowser.getDownloads')
        for filename, base64_content in downloads['files'].items():
            file_bytes = base64.b64decode(base64_content)
            with open(filename, 'wb') as f:
                f.write(file_bytes)
            print(f'Saved {filename} ({len(file_bytes)} bytes)')
        browser.close()

download_file()
```

##### Python Async

 ```
import asyncio
import base64
from playwright.async_api import async_playwright

API_KEY = ''
BROWSER_WS = f'wss://browser.scrapfly.io?api_key={API_KEY}&proxy_pool=public_datacenter_pool'

async def download_file():
    async with async_playwright() as p:
        browser = await p.chromium.connect_over_cdp(BROWSER_WS)
        context = browser.contexts[0]
        page = context.pages[0] if context.pages else await context.new_page()
        cdp_session = await context.new_cdp_session(page)

        # Navigate and trigger download
        await page.goto('https://web-scraping.dev/file-download')
        await page.click('#download-btn')
        await page.wait_for_timeout(3000)

        # Retrieve and save files
        downloads = await cdp_session.send('ScrapiumBrowser.getDownloads')
        for filename, base64_content in downloads['files'].items():
            file_bytes = base64.b64decode(base64_content)
            with open(filename, 'wb') as f:
                f.write(file_bytes)
            print(f'Saved {filename} ({len(file_bytes)} bytes)')
        await browser.close()

asyncio.run(download_file())
```


## Monitoring Download Progress

 Cloud Browser emits standard Chrome CDP events that you can listen to for tracking download progress:

 | Event | Description |
|---|---|
| `Browser.downloadWillBegin` | Fired when a download is about to start. Contains URL and suggested filename. |
| `Browser.downloadProgress` | Fired periodically during download. Contains progress state and bytes received. |

 

### Listening to Download Events

 ```
const puppeteer = require('puppeteer-core');

const API_KEY = '';
const BROWSER_WS = `wss://browser.scrapfly.io?api_key=${API_KEY}&proxy_pool=public_datacenter_pool`;

async function monitorDownloads() {
    const browser = await puppeteer.connect({
        browserWSEndpoint: BROWSER_WS,
    });

    const page = await browser.newPage();
    const client = await page.createCDPSession();

    // Listen for download start
    client.on('Browser.downloadWillBegin', (event) => {
        console.log('Download starting:', {
            url: event.url,
            filename: event.suggestedFilename,
            guid: event.guid
        });
    });

    // Listen for download progress
    client.on('Browser.downloadProgress', (event) => {
        console.log('Download progress:', {
            guid: event.guid,
            state: event.state,  // 'inProgress', 'completed', 'canceled'
            receivedBytes: event.receivedBytes,
            totalBytes: event.totalBytes
        });

        if (event.state === 'completed') {
            console.log(`Download completed: ${event.receivedBytes} bytes`);
        }
    });

    // Navigate and trigger download
    await page.goto('https://web-scraping.dev/file-download');
    await page.click('#download-btn');

    // Wait for download to complete
    await new Promise(resolve => setTimeout(resolve, 5000));

    // Retrieve the file
    const result = await client.send('ScrapiumBrowser.getDownloads');
    console.log('Downloaded files:', Object.keys(result.files));

    await browser.close();
}

monitorDownloads();
```


## Multiple File Downloads

 Cloud Browser fully supports downloading multiple files in a single session. Unlike regular browsers that may prompt for confirmation when triggering multiple downloads, Cloud Browser automatically accepts all downloads without interruption.

### Example: Downloading Multiple Files

 ```
const puppeteer = require('puppeteer-core');

const API_KEY = '';
const BROWSER_WS = `wss://browser.scrapfly.io?api_key=${API_KEY}&proxy_pool=public_datacenter_pool`;

async function downloadMultipleFiles() {
    const browser = await puppeteer.connect({
        browserWSEndpoint: BROWSER_WS,
    });

    const page = await browser.newPage();
    const client = await page.createCDPSession();

    await page.goto('https://web-scraping.dev/file-download');

    // Trigger multiple downloads - no confirmation popups will appear
    await page.click('#download-btn');
    await page.click('#download-pdf');
    await page.click('#download-csv');

    // Wait for all downloads to complete
    await new Promise(resolve => setTimeout(resolve, 5000));

    // Retrieve all downloaded files at once
    const result = await client.send('ScrapiumBrowser.getDownloads');

    console.log(`Downloaded ${Object.keys(result.files).length} files:`);
    for (const [filename, base64Content] of Object.entries(result.files)) {
        const buffer = Buffer.from(base64Content, 'base64');
        require('fs').writeFileSync(filename, buffer);
        console.log(`  - ${filename} (${buffer.length} bytes)`);
    }

    await browser.close();
}

downloadMultipleFiles();
```


  **Tip:** When downloading multiple files, you can use `getDownloadsMetadatas` to check how many files are ready before retrieving them all with `getDownloads`. 

## Common Use Cases

##### PDF Downloads

 Download PDFs generated by web applications, such as invoices, reports, or tickets that are created dynamically after form submission or authentication.


##### Export Files

 Retrieve data exports (CSV, Excel, JSON) from dashboards and analytics platforms that require browser interaction to generate.


##### Generated Images

 Download images that are generated on-demand, such as charts, QR codes, or dynamically created graphics.


##### Protected Documents

 Access documents behind authentication or CAPTCHA protection that can only be downloaded through a real browser session.


## Best Practices

##### Wait for Downloads to Complete

Always wait for the download to complete before calling `getDownloads`. You can use a fixed timeout, listen for the `Browser.downloadProgress` event with state `completed`, or poll `getDownloadsMetadatas` until files appear, as sketched below.
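
A minimal polling sketch, assuming an existing CDP `client` and a known number of expected files (the 500 ms interval and 30 s default timeout are arbitrary choices):

```
// Poll getDownloadsMetadatas until the expected number of files appears
async function waitForDownloads(client, expectedCount, timeoutMs = 30000) {
    const deadline = Date.now() + timeoutMs;
    while (Date.now() < deadline) {
        const { metadata } = await client.send('ScrapiumBrowser.getDownloadsMetadatas');
        if (Object.keys(metadata).length >= expectedCount) {
            return metadata;
        }
        await new Promise(resolve => setTimeout(resolve, 500));
    }
    throw new Error(`Timed out waiting for ${expectedCount} download(s)`);
}
```

Polling only confirms that a file has appeared in the download directory; when exact completion timing matters, the `Browser.downloadProgress` events described above are the stronger signal.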


##### Automatic Cleanup

 Downloaded files are **automatically cleaned up** when the browser session ends (either when you close the CDP connection or when the session times out with `auto_close=true`). You don't need to manually delete files unless you want to clear downloads during a long-running session.


##### Check File Size First

 For large files, call `getDownloadsMetadatas` first to check the file size. Remember that base64 encoding increases size by approximately 33%, so a 25 MB file will transfer as ~33 MB of base64 data.


##### Handle Download Failures

 Downloads can fail or be canceled. Listen to the `Browser.downloadProgress` event and check for `state: 'canceled'` or `state: 'failed'` to handle errors gracefully.
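
One way to do that is to wrap the progress events in a promise that settles with the final state (a sketch; `client` is a CDP session as in the monitoring example above):

```
// Resolves when a download completes, rejects if it is canceled or fails
function waitForDownloadResult(client) {
    return new Promise((resolve, reject) => {
        client.on('Browser.downloadProgress', (event) => {
            if (event.state === 'completed') {
                resolve(event);
            } else if (event.state === 'canceled' || event.state === 'failed') {
                reject(new Error(`Download ${event.guid} ended as ${event.state}`));
            }
        });
    });
}
```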


## Limitations

- **Maximum file size:** 25 MB per file
- **Session-scoped:** Downloads are only available during the session that triggered them
- **Auto-cleanup:** All downloads are deleted when the browser session ends
- **Transfer overhead:** Base64 encoding adds ~33% to transfer size
 
## Related Documentation

- [Cloud Browser Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started) - Connection and basic usage
- [Puppeteer Integration](https://scrapfly.io/docs/cloud-browser-api/puppeteer) - Full Puppeteer setup guide
- [Playwright Integration](https://scrapfly.io/docs/cloud-browser-api/playwright) - Full Playwright setup guide
- [Cloud Browser Billing](https://scrapfly.io/docs/cloud-browser-api/billing) - Pricing and cost optimization
- [Error Reference](https://scrapfly.io/docs/cloud-browser-api/errors) - Troubleshooting guide