# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)

##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Batch (Multi-URL Scraping)](https://scrapfly.io/docs/scrape-api/batch)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)
- [Native Browser MCP](https://scrapfly.io/docs/cloud-browser-api/mcp)
- [DevTools Protocol](https://scrapfly.io/docs/cloud-browser-api/cdp-reference)

##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [Rust](https://scrapfly.io/docs/sdk/rust)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Classify API

The **Classify API** runs the same anti-bot detection pipeline used by every live Scrapfly scrape against an HTTP response you already have. You pass in a URL, status code, headers, and body; it returns whether the target blocked the request and the name of the anti-bot product that matched (Cloudflare, DataDome, PerimeterX, Akamai, Kasada, Imperva, AWS WAF, F5 Shape, and more).

Use it when you've fetched a page through your own infrastructure (custom proxy, cached scrape, third-party system) and want a continuously maintained verdict on whether it's a real response or a block page, without paying for a new scrape.

## When to use Classify

- **Verify a response from your own fetch**: paste the response you got back into `/classify` and let Scrapfly decide if it's blocked. Faster and cheaper than re-scraping just to check.
- **Decide whether to retry with Scrapfly's ASP**: if Classify flags the response as blocked, enable `asp=true` on the next call to `/scrape`, or escalate to `browser_unblock`.
- **Audit cached or archived pages**: spot block pages that snuck into your cache so you can refresh them cleanly.
- **Power local tooling**: the Scrapfly MCP server's `check_if_blocked` tool uses this endpoint under the hood, so AI agents get an authoritative answer instead of a stale local heuristic.
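The retry escalation in the second bullet can be sketched as a tiny dispatch helper. Note that `next_action` and its return labels are illustrative names, not part of any Scrapfly SDK:

```python
def next_action(verdict: dict, asp_already_tried: bool = False) -> str:
    """Map a parsed /classify verdict to the next step for this URL.

    `verdict` is the JSON returned by /classify, e.g.
    {"blocked": True, "antibot": "cloudflare", "cost": 1}.
    """
    if not verdict["blocked"]:
        return "keep-response"          # real content, nothing to do
    if not asp_already_tried:
        return "retry-with-asp"         # re-scrape with asp=true on /scrape
    return "escalate-browser-unblock"   # ASP failed, escalate to browser_unblock

# A Cloudflare block page seen on a plain fetch:
print(next_action({"blocked": True, "antibot": "cloudflare", "cost": 1}))
# prints "retry-with-asp"
```

The same helper returns `keep-response` for a clean verdict, so it can sit directly in a fetch loop.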
 
## Request

### Endpoint

`POST /classify?key={API_KEY}`

### Body

A JSON object describing a single HTTP response:

- `url` (string, **required**): the final URL the response came from.
- `status_code` (integer, **required**): the HTTP status code (100-599) of the response you're classifying.
- `headers` (object, optional): response headers, case-insensitive. Detection relies heavily on signals like `server`, `cf-ray`, `cf-mitigated`, `x-datadome`, `x-akamai-session-info`, etc. Pass everything you have for best accuracy.
- `body` (string, optional): the response body as text. HTML challenge pages often carry the most reliable signal (`Just a moment...`, `_cf_chl_opt`, `cd.kasada.io`, etc.). Binary payloads can be passed as an empty string.
- `method` (string, optional, default `GET`): the HTTP method the caller used.
 
 ```
POST /classify?key={API_KEY}
Content-Type: application/json

{
  "url": "https://target.example.com/product/42",
  "status_code": 403,
  "headers": {
    "server": "cloudflare",
    "cf-ray": "8c0e0a0b0c0d0e0f-DFW",
    "cf-mitigated": "challenge"
  },
  "body": "<title>Just a moment...</title>...",
  "method": "GET"
}
```
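Assembled programmatically, that body might be built and sanity-checked before sending. `build_classify_body` is a hypothetical helper; the validation simply mirrors the field rules above (absolute URL with a host, status code in 100-599):

```python
from urllib.parse import urlparse


def build_classify_body(url, status_code, headers=None, body="", method="GET"):
    """Build a /classify JSON body, mirroring the documented field rules."""
    parsed = urlparse(url)
    if not parsed.scheme or not parsed.netloc:
        raise ValueError("url must be absolute and have a host")
    if not 100 <= status_code <= 599:
        raise ValueError("status_code must be within 100-599")
    return {
        "url": url,
        "status_code": status_code,
        "headers": headers or {},  # optional; more headers means better accuracy
        "body": body,              # optional; pass "" for binary payloads
        "method": method,
    }


payload = build_classify_body(
    "https://target.example.com/product/42",
    403,
    headers={"server": "cloudflare", "cf-mitigated": "challenge"},
    body="<title>Just a moment...</title>",
)
```

The resulting dict serializes directly to the JSON body shown above.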

### Headers

- `Content-Type: application/json`: required.
 
### Body size limit

The request body is capped at **2 MiB**. Oversize requests are rejected with `ERR::SCRAPE::CLASSIFY_CONFIG` (HTTP 413). If you're classifying unusually large HTML, trim it to the first couple hundred KiB; detection signatures always live near the top of the body.
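A client-side trim along those lines might look like this. The 2 MiB cap is the documented limit; the 200 KiB trim target and both helper names are illustrative choices:

```python
CAP_BYTES = 2 * 1024 * 1024  # /classify rejects request bodies over 2 MiB
TRIM_TO = 200 * 1024         # suggested trim target; signatures sit near the top


def fits_request(body: str) -> bool:
    """Rough check that the body alone stays under the 2 MiB request cap."""
    return len(body.encode("utf-8")) < CAP_BYTES


def trim_body(body: str, limit: int = TRIM_TO) -> str:
    """Keep only the first `limit` bytes of an oversized body."""
    encoded = body.encode("utf-8")
    if len(encoded) <= limit:
        return body
    # Cut on a byte boundary, then drop any multi-byte character split by the cut.
    return encoded[:limit].decode("utf-8", errors="ignore")
```

Trimming before the call keeps you under the cap without losing the challenge-page markers the classifier keys on.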

## Example call

### curl

```
curl -X POST \
  'https://api.scrapfly.io/classify?key={{ YOUR_API_KEY }}' \
  -H 'Content-Type: application/json' \
  -d '{
    "url": "https://target.example.com/",
    "status_code": 403,
    "headers": {"server": "cloudflare", "cf-mitigated": "challenge"},
    "body": "...Just a moment..."
  }'
```

### Python

```python
import requests
from scrapfly import ScrapflyClient

client = ScrapflyClient(key="")

# Fetch the response yourself (or load it from your cache/proxy).
response = requests.get("https://target.example.com/")

result = client.classify(
    url="https://target.example.com/",
    status_code=response.status_code,
    headers=dict(response.headers),
    body=response.text,
)

print(result.blocked)   # True / False
print(result.antibot)   # "cloudflare" | "datadome" | ... | None
print(result.cost)      # 1
```

### TypeScript

```typescript
import { ScrapflyClient } from "scrapfly-sdk";

const client = new ScrapflyClient({ key: "" });

// Fetch the response yourself (or load it from your cache/proxy).
const upstream = await fetch("https://target.example.com/");
const body = await upstream.text();

const headers: Record<string, string> = {};
upstream.headers.forEach((value, key) => {
    headers[key] = value;
});

const result = await client.classify({
    url: "https://target.example.com/",
    statusCode: upstream.status,
    headers,
    body,
});

console.log(result.blocked);  // true / false
console.log(result.antibot);  // "cloudflare" | "datadome" | ... | null
console.log(result.cost);     // 1
```

### Go

```go
package main

import (
    "context"
    "fmt"
    "io"
    "log"
    "net/http"

    scrapfly "github.com/scrapfly/go-scrapfly"
)

func main() {
    client, err := scrapfly.New("")
    if err != nil {
        log.Fatal(err)
    }

    // Fetch the response yourself (or load it from your cache/proxy).
    resp, err := http.Get("https://target.example.com/")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)

    headers := map[string]string{}
    for k, v := range resp.Header {
        if len(v) > 0 {
            headers[k] = v[0]
        }
    }

    result, err := client.Classify(context.Background(), &scrapfly.ClassifyRequest{
        URL:        "https://target.example.com/",
        StatusCode: resp.StatusCode,
        Headers:    headers,
        Body:       string(body),
    })
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println(result.Blocked)  // true / false
    fmt.Println(result.Antibot)  // "cloudflare" | "datadome" | ... | ""
    fmt.Println(result.Cost)     // 1
}
```

### Rust

```rust
use scrapfly_sdk::result::classify::ClassifyRequest;
use scrapfly_sdk::Client;
use std::collections::HashMap;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        .api_key("")
        .build()?;

    // Fetch the response yourself (or load it from your cache/proxy).
    let resp = reqwest::get("https://target.example.com/").await?;
    let status = resp.status().as_u16();
    let mut headers: HashMap<String, String> = HashMap::new();
    for (k, v) in resp.headers() {
        if let Ok(s) = v.to_str() {
            headers.insert(k.as_str().to_string(), s.to_string());
        }
    }
    let body = resp.text().await?;

    let result = client
        .classify(&ClassifyRequest {
            url: "https://target.example.com/".into(),
            status_code: status,
            headers: Some(headers),
            body: Some(body),
            method: None,
        })
        .await?;

    println!("{}", result.blocked);            // true / false
    println!("{:?}", result.antibot);          // Some("cloudflare") | ... | None
    println!("{}", result.cost);               // 1
    Ok(())
}
```
## Response

The response is a JSON object describing the verdict. The `blocked` field is the headline answer; the rest provide the evidence.
**Blocked response**

```
HTTP/2 200
Content-Type: application/json
X-Scrapfly-Api-Cost: 1
X-Scrapfly-Remaining-Api-Credit: 4999999

{
  "blocked": true,
  "antibot": "cloudflare",
  "cost": 1
}
```
**Clean response (not blocked)**

```
HTTP/2 200
Content-Type: application/json
X-Scrapfly-Api-Cost: 1

{
  "blocked": false,
  "antibot": null,
  "cost": 1
}
```

### Fields

- `blocked` (boolean): `true` when Scrapfly detected that the upstream response is an anti-bot block page.
- `antibot` (string, nullable): the name of the anti-bot product that matched, when one was detected. Examples: `cloudflare`, `datadome`, `perimeterx`, `akamai`, `kasada`, `imperva`, `aws_waf`, `f5_shape`. `null` when `blocked` is `false`.
- `cost` (integer): API credits charged for this call.

## Pricing & Billing

Each successful call costs **1 API credit**, regardless of payload size or shield count. The charge is reported in the `X-Scrapfly-Api-Cost` response header and recorded under the Web Scraping API product, the same bucket as `/scrape` calls.

Errors (HTTP 4xx/5xx) are **not billed**. Invalid URL, missing `status_code`, oversized body, and backend failures all return without charging.

## Errors

All Classify failures return `ERR::SCRAPE::CLASSIFY_CONFIG`. The HTTP status and `message` field describe the specific cause:

- HTTP 400: request body is not valid JSON.
- HTTP 413: request body exceeds 2 MiB.
- HTTP 422: `url` missing, not absolute, or has no host; `status_code` outside 100-599.
- HTTP 503: backend temporarily unavailable; safe to retry.
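Because every failure carries the same error code, callers usually branch on the HTTP status instead. This small dispatch table is a sketch of that pattern, with made-up action labels:

```python
def classify_error_action(http_status: int) -> str:
    """Map a Classify error status to how the caller should react."""
    actions = {
        400: "fix-json",     # request body is not valid JSON
        413: "trim-body",    # request body exceeds 2 MiB
        422: "fix-fields",   # bad url, or status_code outside 100-599
        503: "retry",        # backend temporarily unavailable
    }
    return actions.get(http_status, "inspect-message")


print(classify_error_action(413))  # prints "trim-body"
```

Only 503 is worth an automatic retry; the 4xx causes need the request itself fixed first.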

## See also

- [Anti-Scraping Protection (ASP)](https://scrapfly.io/docs/scrape-api/anti-scraping-protection): the same shield pipeline applied proactively during a scrape.
- [Web Scraping API Getting Started](https://scrapfly.io/docs/scrape-api/getting-started): the main `/scrape` endpoint to call when Classify says `blocked: true`.
- [Batch (Multi-URL Scraping)](https://scrapfly.io/docs/scrape-api/batch): when you need to scrape many URLs at once.