# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)

##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Batch (Multi-URL Scraping)](https://scrapfly.io/docs/scrape-api/batch)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)
- [Native Browser MCP](https://scrapfly.io/docs/cloud-browser-api/mcp)
- [DevTools Protocol](https://scrapfly.io/docs/cloud-browser-api/cdp-reference)

##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [Rust](https://scrapfly.io/docs/sdk/rust)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Batch Scraping API

 The **Batch Scraping API** accepts up to **100 scrape configurations** in a single HTTP request and streams each result back as soon as it is ready. The wire format is `multipart/mixed`, with one HTTP part per scrape, so a client can start consuming results at the speed of the *fastest* scrape in the batch rather than waiting on the slowest.

 The Batch API is available to all paid plans. It is **not available on the FREE plan** (requests are rejected with `ERR::SCRAPE::BATCH_CONFIG`, HTTP 402).

## When to use Batch

 Use `/scrape/batch` when you have many URLs to scrape at once and want to amortize the cost of N individual API round-trips. Compared to fanning out N calls to `/scrape` on the client side, a single batch request gives you:

- **One TLS handshake** and one auth check for the entire batch instead of N.
- **Atomic concurrency reservation**: if your account's remaining concurrency is less than the number of non-webhook configs in the batch, the *whole batch* is rejected with HTTP 429 before any scrape is executed. No partial successes to reconcile.
- **Streaming results**: results arrive one at a time as each scrape completes, so the wall time of the batch matches the wall time of the *slowest* scrape rather than the sum of all scrapes.
- **Uniform framing**: each part body is identical in shape to a single `/scrape` response, so your SDK can reuse its existing response decoder.
 
## Request

### Endpoint

`POST /scrape/batch?key={API_KEY}`

### Body

 A JSON object with a single `configs` field containing an array of 1 to 100 scrape configurations. Each entry is a flat dictionary of the same query parameters that `/scrape` accepts (`url`, `country`, `asp`, `render_js`, `correlation_id`, `headers[Authorization]`, and so on). See the [ Web Scraping API Getting Started](https://scrapfly.io/docs/scrape-api/getting-started) page for the full list.

 Per-entry `correlation_id` is **required** in batch context, unlike the single-scrape endpoint where it's optional. Parts arrive out of order (streamed as each scrape completes), so the correlation ID is the only reliable way to match a part back to its originating config. Every config in a batch must carry a unique correlation ID.

 ```
POST /scrape/batch?key={API_KEY}
Content-Type: application/json
Accept: application/json
Accept-Encoding: gzip

{
  "configs": [
    {
      "url": "https://httpbin.dev/get?a=1",
      "correlation_id": "job-1"
    },
    {
      "url": "https://httpbin.dev/get?b=2",
      "correlation_id": "job-2",
      "country": "us"
    },
    {
      "url": "https://httpbin.dev/get?c=3",
      "correlation_id": "job-3",
      "asp": "true"
    }
  ]
}
```

 

   

 

### Headers

- `Content-Type: application/json`: required.
- `Accept: application/json` (default) or `application/msgpack`: picks the per-part body format, identical to the `/scrape` content-type negotiation.
- `Accept-Encoding: gzip`: enables envelope-level gzip compression. The streaming invariant is preserved: the compressor emits a flush frame after each part so the client still sees parts as they arrive.
 
### Body size limit

The batch body is capped at **10 MiB**. If your configs do not fit within this limit (or exceed 100 entries), split them across multiple batch requests, as in the sketch below.
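
Below is a minimal Python sketch of one way to honor both caps (the `split_batches` helper is illustrative, not part of any Scrapfly SDK): it greedily packs config dictionaries into batches of at most 100 entries whose serialized JSON body stays under 10 MiB.

```
import json

MAX_CONFIGS = 100
MAX_BYTES = 10 * 1024 * 1024  # 10 MiB batch body cap

def split_batches(configs: list[dict]) -> list[list[dict]]:
    """Greedily pack configs into batches that respect both documented limits."""
    batches, current = [], []
    for config in configs:
        candidate = current + [config]
        body = json.dumps({"configs": candidate}).encode()
        if current and (len(candidate) > MAX_CONFIGS or len(body) > MAX_BYTES):
            batches.append(current)   # current batch is full, start a new one
            current = [config]
        else:
            current = candidate
    if current:
        batches.append(current)
    return batches
```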

## Response

 On success the response is `Content-Type: multipart/mixed; boundary=<auto-generated>` with `Transfer-Encoding: chunked`. Each part represents exactly one scrape:

 ```
HTTP/2 200
Content-Type: multipart/mixed; boundary=batch-abc123
Content-Encoding: gzip
X-Scrapfly-Batch-Uuid: <uuid>
X-Accel-Buffering: no
Transfer-Encoding: chunked

--batch-abc123
Content-Type: application/json
Content-Length: 6109
X-Scrapfly-Correlation-Id: job-1
X-Scrapfly-Scrape-Status: 200
X-Scrapfly-Log-Uuid: 01KPJXAJKEP62HD8YW48G3FF6Z

{ ...standard ScrapeResult JSON envelope (same shape as a single /scrape response)... }
--batch-abc123
Content-Type: application/json
Content-Length: 5873
X-Scrapfly-Correlation-Id: job-2
X-Scrapfly-Scrape-Status: 200
X-Scrapfly-Log-Uuid: 01KPJXAJKET0XGT1ESZBZRST28

{ ... }
--batch-abc123
Content-Type: application/json
Content-Length: 482
X-Scrapfly-Correlation-Id: job-3
X-Scrapfly-Scrape-Status: 422

{
  "code": "ERR::SCRAPE::ASP_SHIELD_PROTECTION_FAILED",
  "message": "...",
  "http_code": 422,
  "retryable": false,
  "links": { ... }
}
--batch-abc123--
```

 

   

 

### Per-part headers

- `Content-Type`: `application/json` or `application/msgpack`, matching the request's `Accept` header.
- `Content-Length`: exact body length for this part; SDKs should prefer this over boundary-scanning to preserve per-part streaming.
- `X-Scrapfly-Correlation-Id`: the `correlation_id` set on the originating config. Your SDK uses this to match the part back to the caller's input.
- `X-Scrapfly-Scrape-Status`: the HTTP status this scrape would have returned as a stand-alone `/scrape` call (200 on success, 4xx/5xx on scrape-level failure, 202 on webhook enqueue).
- `X-Scrapfly-Log-Uuid`: the scrape log UUID for traceability.
- `X-Scrapfly-Webhook-Enqueue: true`: set only on parts for webhook enqueue configs; the body is an enqueue acknowledgement rather than a scrape result. The scrape result will be delivered later via your webhook endpoint.
 
### Per-part body

 Each successful part body is identical in shape to a single `/scrape` response, with the same `config`, `context`, `result`, and `uuid` fields. A per-scrape failure emits the standard Scrapfly error envelope (`code`, `message`, `http_code`, `retryable`, `links`) as the part body with the corresponding non-2xx `X-Scrapfly-Scrape-Status`.

### Streaming invariant

 A single failing scrape does **NOT** fail the whole batch. Every config emits its own part, successful or not. The only way to get a non-200 top-level HTTP response is a batch-level failure (plan gate, validation, insufficient concurrency); per-scrape failures land as individual parts with their own `X-Scrapfly-Scrape-Status`.

## Concurrency reservation

 The batch endpoint reserves concurrency **atomically** for every non-webhook config before execution starts. If your account's remaining concurrency is less than the number of non-webhook configs in the batch, the entire batch is rejected with `HTTP 429 ERR::SCRAPE::BATCH_CONFIG` and zero scrapes are executed. The response carries `X-Scrapfly-Batch-Requested` and `X-Scrapfly-Batch-Available` headers (plus a `Retry-After`) so clients can size a retry intelligently.
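
Those headers make it possible to size a retry without guessing. Here is a hedged sketch using a raw `requests` client; the retry policy itself is illustrative, not prescribed by the API:

```
import time
import requests

API_KEY = ""  # your Scrapfly API key
configs = [
    {"url": f"https://httpbin.dev/get?job={i}", "correlation_id": f"job-{i}"}
    for i in range(1, 21)
]

def post_batch(batch_configs: list[dict]) -> requests.Response:
    return requests.post(
        f"https://api.scrapfly.io/scrape/batch?key={API_KEY}",
        json={"configs": batch_configs},
        stream=True,  # a 200 response streams multipart parts
    )

resp = post_batch(configs)
if resp.status_code == 429:
    # the whole batch was rejected atomically: nothing ran, so a plain retry is safe
    requested = int(resp.headers["X-Scrapfly-Batch-Requested"])
    available = int(resp.headers["X-Scrapfly-Batch-Available"])
    print(f"batch rejected: requested {requested}, available {available}")
    time.sleep(int(resp.headers.get("Retry-After", "5")))
    # either wait for in-flight scrapes to finish, or shrink the batch to fit
    resp = post_batch(configs[:max(available, 1)])
```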

 **Webhook configs** (configs with a `webhook_name`) don't count against synchronous concurrency. They are enqueued to the existing webhook worker and the corresponding part in the response is an immediate enqueue acknowledgement (HTTP 202). The scrape result is delivered later via your webhook endpoint as it would be from a single `/scrape` call.
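
For illustration, a batch mixing a synchronous config with a webhook config could look like this (the webhook name `my-webhook` stands in for a webhook you have configured on your account):

```
{
  "configs": [
    { "url": "https://httpbin.dev/get?a=1", "correlation_id": "job-1" },
    {
      "url": "https://httpbin.dev/get?b=2",
      "correlation_id": "job-2",
      "webhook_name": "my-webhook"
    }
  ]
}
```

Only `job-1` counts against synchronous concurrency; the part for `job-2` arrives immediately with `X-Scrapfly-Scrape-Status: 202` and `X-Scrapfly-Webhook-Enqueue: true`, and the scrape result follows later on the webhook endpoint.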

## Errors

 All batch-level failures return `ERR::SCRAPE::BATCH_CONFIG` in the standard Scrapfly error envelope. The HTTP status and `message` field describe the specific cause:

- HTTP 402: calling key is on the FREE plan.
- HTTP 400: the `configs` array is empty, missing, or exceeds 100 entries.
- HTTP 422: a config is missing its `correlation_id`, or two configs share the same `correlation_id`.
- HTTP 429: not enough account or project concurrency to reserve every non-webhook config atomically. Retry with a smaller batch or wait for in-flight scrapes to complete.
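
For reference, a concurrency rejection reuses the same error envelope shown earlier for per-scrape failures; it would look roughly like this (the `message` text and header values are illustrative):

```
HTTP/2 429
Content-Type: application/json
X-Scrapfly-Batch-Requested: 50
X-Scrapfly-Batch-Available: 12
Retry-After: 30

{
  "code": "ERR::SCRAPE::BATCH_CONFIG",
  "message": "Not enough concurrency available to reserve 50 scrapes",
  "http_code": 429,
  "retryable": true,
  "links": { ... }
}
```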
 
## Consuming a batch response with `curl`

 `curl -N` (the `--no-buffer` flag) disables curl's built-in output buffering so parts print to stdout as they arrive. With `--compressed`, curl transparently handles the envelope-level gzip decompression.

 ```
curl -N -X POST \
  'https://api.scrapfly.io/scrape/batch?key={{ YOUR_API_KEY }}' \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  --compressed \
  --data-binary @batch.json
```
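
If you are consuming the stream from code without an SDK, any HTTP client that exposes the response incrementally will do. The following Python sketch (not the official SDK parser) uses `requests` and relies on the per-part `Content-Length` header described above, so each part is decoded as soon as its bytes arrive instead of waiting for the closing boundary:

```
import json
import requests

API_KEY = ""  # your Scrapfly API key
batch = {
    "configs": [
        {"url": "https://httpbin.dev/get?a=1", "correlation_id": "job-1"},
        {"url": "https://httpbin.dev/get?b=2", "correlation_id": "job-2"},
    ]
}

resp = requests.post(
    f"https://api.scrapfly.io/scrape/batch?key={API_KEY}",
    json=batch,
    headers={"Accept": "application/json", "Accept-Encoding": "gzip"},
    stream=True,  # read the multipart stream part by part
)
resp.raise_for_status()

# the boundary is advertised in the top-level Content-Type header
boundary = resp.headers["Content-Type"].split("boundary=")[1].strip('"')
delimiter = b"--" + boundary.encode()

chunks = resp.iter_content(chunk_size=8192)  # gzip is decoded transparently
buffer = b""

def read_until(marker: bytes) -> bytes:
    """Pull from the stream until `marker` is seen; return the bytes before it."""
    global buffer
    while marker not in buffer:
        buffer += next(chunks)
    before, buffer = buffer.split(marker, 1)
    return before

while True:
    read_until(delimiter)                  # advance to the next boundary
    while len(buffer) < 2:
        buffer += next(chunks)
    if buffer.startswith(b"--"):           # "--boundary--" closes the stream
        break
    raw_headers = read_until(b"\r\n\r\n")  # part headers end with a blank line
    headers = dict(
        line.split(": ", 1)
        for line in raw_headers.decode().split("\r\n")
        if ": " in line
    )
    length = int(headers["Content-Length"])
    while len(buffer) < length:            # read exactly this part's body
        buffer += next(chunks)
    body, buffer = buffer[:length], buffer[length:]
    part = json.loads(body)  # same envelope shape as a single /scrape response
    print(headers["X-Scrapfly-Correlation-Id"], headers["X-Scrapfly-Scrape-Status"])
```

Scanning for the boundary only between parts, and trusting `Content-Length` within a part, is what preserves per-part streaming; a naive parser that buffers the entire response before splitting on the boundary would throw away the latency benefit.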

 

   

 

## SDK support

 The batch endpoint is exposed by every Scrapfly SDK as `scrape_batch()` (`scrapeBatch` in TypeScript / `ScrapeBatch` in Go). Each SDK handles the streaming multipart parsing for you and returns an iterator / async stream of `(correlation_id, ScrapeResult | ScrapflyError)` tuples as parts arrive. See the [Python](https://scrapfly.io/docs/sdk/python), [TypeScript](https://scrapfly.io/docs/sdk/typescript), [Go](https://scrapfly.io/docs/sdk/golang), and [Rust](https://scrapfly.io/docs/sdk/rust) SDK docs for usage examples.

### Example: Geo-targeted batch with `country`

 Use the `country` parameter to route each scrape through a proxy in a specific country. Every config in the batch can target a different country.

- Python
- TypeScript
- Go
- Rust
 
 ```
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(api_key="")

configs = [
    ScrapeConfig(url="https://httpbin.dev/get?job=1", country="us", correlation_id="job-1"),
    ScrapeConfig(url="https://httpbin.dev/get?job=2", country="us", correlation_id="job-2"),
]

for correlation_id, result in client.scrape_batch(configs):
    print(f"{correlation_id}: {result.upstream_status_code}")
```

 

   

 

 

 ```
import { ScrapflyClient, ScrapeConfig } from "@scrapfly/scrapfly-client";

const client = new ScrapflyClient({ key: "" });

const configs = [
    new ScrapeConfig({ url: "https://httpbin.dev/get?job=1", country: "us", correlation_id: "job-1" }),
    new ScrapeConfig({ url: "https://httpbin.dev/get?job=2", country: "us", correlation_id: "job-2" }),
];

for await (const [correlationId, result] of client.scrapeBatch(configs)) {
    console.log(`${correlationId}: ${result.result.status_code}`);
}
```

 

   

 

 

 ```
package main

import (
    "fmt"
    "log"

    scrapfly "github.com/scrapfly/go-scrapfly"
)

func main() {
    client, err := scrapfly.New("")
    if err != nil {
        log.Fatal(err)
    }

    configs := []*scrapfly.ScrapeConfig{
        {URL: "https://httpbin.dev/get?job=1", Country: "us", CorrelationID: "job-1"},
        {URL: "https://httpbin.dev/get?job=2", Country: "us", CorrelationID: "job-2"},
    }

    ch, err := client.ScrapeBatch(configs)
    if err != nil {
        log.Fatal(err)
    }

    for result := range ch {
        if result.Err != nil {
            fmt.Printf("%s: error: %v\n", result.CorrelationID, result.Err)
        } else {
            fmt.Printf("%s: %d\n", result.CorrelationID, result.Result.Result.StatusCode)
        }
    }
}
```

 

   

 

 

 ```
use futures_util::stream::StreamExt;
use scrapfly_sdk::{Client, ScrapeConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        .api_key("")
        .build()?;

    let configs = vec![
        ScrapeConfig::builder("https://httpbin.dev/get?job=1")
            .correlation_id("job-1")
            .country("us")
            .build()?,
        ScrapeConfig::builder("https://httpbin.dev/get?job=2")
            .correlation_id("job-2")
            .country("us")
            .build()?,
    ];

    let mut stream = client.scrape_batch(&configs).await?;

    while let Some((correlation_id, result)) = stream.next().await {
        match result {
            Ok(r) => println!("{}: {}", correlation_id, r.result.status_code),
            Err(e) => println!("{}: error: {}", correlation_id, e),
        }
    }

    Ok(())
}
```

 

   

 

 

 

### Example: Residential proxy pool with `proxy_pool`

 Use the `proxy_pool` parameter to route scrapes through a specific proxy pool. `public_residential_pool` routes traffic through residential IPs, which is useful for sites that block datacenter traffic.

- Python
- TypeScript
- Go
- Rust
 
 ```
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(api_key="")

configs = [
    ScrapeConfig(
        url="https://httpbin.dev/get?job=1",
        proxy_pool="public_residential_pool",
        correlation_id="job-1",
    ),
    ScrapeConfig(
        url="https://httpbin.dev/get?job=2",
        proxy_pool="public_residential_pool",
        correlation_id="job-2",
    ),
]

for correlation_id, result in client.scrape_batch(configs):
    print(f"{correlation_id}: {result.upstream_status_code}")
```

 

   

 

 

 ```
import { ScrapflyClient, ScrapeConfig } from "@scrapfly/scrapfly-client";

const client = new ScrapflyClient({ key: "" });

const configs = [
    new ScrapeConfig({
        url: "https://httpbin.dev/get?job=1",
        proxy_pool: "public_residential_pool",
        correlation_id: "job-1",
    }),
    new ScrapeConfig({
        url: "https://httpbin.dev/get?job=2",
        proxy_pool: "public_residential_pool",
        correlation_id: "job-2",
    }),
];

for await (const [correlationId, result] of client.scrapeBatch(configs)) {
    console.log(`${correlationId}: ${result.result.status_code}`);
}
```

 

   

 

 

 ```
package main

import (
    "fmt"
    "log"

    scrapfly "github.com/scrapfly/go-scrapfly"
)

func main() {
    client, err := scrapfly.New("")
    if err != nil {
        log.Fatal(err)
    }

    configs := []*scrapfly.ScrapeConfig{
        {
            URL:           "https://httpbin.dev/get?job=1",
            ProxyPool:     scrapfly.PublicResidentialPool,
            CorrelationID: "job-1",
        },
        {
            URL:           "https://httpbin.dev/get?job=2",
            ProxyPool:     scrapfly.PublicResidentialPool,
            CorrelationID: "job-2",
        },
    }

    ch, err := client.ScrapeBatch(configs)
    if err != nil {
        log.Fatal(err)
    }

    for result := range ch {
        if result.Err != nil {
            fmt.Printf("%s: error: %v\n", result.CorrelationID, result.Err)
        } else {
            fmt.Printf("%s: %d\n", result.CorrelationID, result.Result.Result.StatusCode)
        }
    }
}
```

 

   

 

 

 ```
use futures_util::stream::StreamExt;
use scrapfly_sdk::{Client, ProxyPool, ScrapeConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        .api_key("")
        .build()?;

    let configs = vec![
        ScrapeConfig::builder("https://httpbin.dev/get?job=1")
            .correlation_id("job-1")
            .proxy_pool(ProxyPool::PublicResidentialPool)
            .build()?,
        ScrapeConfig::builder("https://httpbin.dev/get?job=2")
            .correlation_id("job-2")
            .proxy_pool(ProxyPool::PublicResidentialPool)
            .build()?,
    ];

    let mut stream = client.scrape_batch(&configs).await?;

    while let Some((correlation_id, result)) = stream.next().await {
        match result {
            Ok(r) => println!("{}: {}", correlation_id, r.result.status_code),
            Err(e) => println!("{}: error: {}", correlation_id, e),
        }
    }

    Ok(())
}
```

 

   

 

 

 

### Example: JavaScript rendering with `render_js`

 Use `render_js=true` to run each scrape in a headless browser. This is needed for pages that load content dynamically via JavaScript. Render-JS scrapes count as browser scrapes and use more concurrency.

- Python
- TypeScript
- Go
- Rust
 
 ```
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(api_key="")

configs = [
    ScrapeConfig(url="https://web-scraping.dev/product/1", render_js=True, correlation_id="job-1"),
    ScrapeConfig(url="https://web-scraping.dev/product/2", render_js=True, correlation_id="job-2"),
]

for correlation_id, result in client.scrape_batch(configs):
    print(f"{correlation_id}: {result.upstream_status_code}")
```

 

   

 

 

 ```
import { ScrapflyClient, ScrapeConfig } from "@scrapfly/scrapfly-client";

const client = new ScrapflyClient({ key: "" });

const configs = [
    new ScrapeConfig({ url: "https://web-scraping.dev/product/1", render_js: true, correlation_id: "job-1" }),
    new ScrapeConfig({ url: "https://web-scraping.dev/product/2", render_js: true, correlation_id: "job-2" }),
];

for await (const [correlationId, result] of client.scrapeBatch(configs)) {
    console.log(`${correlationId}: ${result.result.status_code}`);
}
```

 

   

 

 

 ```
package main

import (
    "fmt"
    "log"

    scrapfly "github.com/scrapfly/go-scrapfly"
)

func main() {
    client, err := scrapfly.New("")
    if err != nil {
        log.Fatal(err)
    }

    configs := []*scrapfly.ScrapeConfig{
        {URL: "https://web-scraping.dev/product/1", RenderJS: true, CorrelationID: "job-1"},
        {URL: "https://web-scraping.dev/product/2", RenderJS: true, CorrelationID: "job-2"},
    }

    ch, err := client.ScrapeBatch(configs)
    if err != nil {
        log.Fatal(err)
    }

    for result := range ch {
        if result.Err != nil {
            fmt.Printf("%s: error: %v\n", result.CorrelationID, result.Err)
        } else {
            fmt.Printf("%s: %d\n", result.CorrelationID, result.Result.Result.StatusCode)
        }
    }
}
```

 

   

 

 

 ```
use futures_util::stream::StreamExt;
use scrapfly_sdk::{Client, ScrapeConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        .api_key("")
        .build()?;

    let configs = vec![
        ScrapeConfig::builder("https://web-scraping.dev/product/1")
            .correlation_id("job-1")
            .render_js(true)
            .build()?,
        ScrapeConfig::builder("https://web-scraping.dev/product/2")
            .correlation_id("job-2")
            .render_js(true)
            .build()?,
    ];

    let mut stream = client.scrape_batch(&configs).await?;

    while let Some((correlation_id, result)) = stream.next().await {
        match result {
            Ok(r) => println!("{}: {}", correlation_id, r.result.status_code),
            Err(e) => println!("{}: error: {}", correlation_id, e),
        }
    }

    Ok(())
}
```

 

   

 

 

 

## Key properties (summary)

- Max 100 configs per batch; 10 MiB max body.
- Per-entry `correlation_id` is required and must be unique within the batch.
- Response is `multipart/mixed`; parts arrive out of order as each scrape completes.
- Envelope-level gzip/zstd compression preserves end-to-end streaming.
- Per-part body shape is identical to a single `/scrape` response.
- Concurrency is reserved atomically; insufficient concurrency fails the whole batch (no partial execution).
- Webhook configs don't count against sync concurrency and emit an enqueue-ack part immediately.
- A per-scrape failure does not fail the whole batch; each config gets its own part.
- Not available on the FREE plan.