# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)

##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)
- [Native Browser MCP](https://scrapfly.io/docs/cloud-browser-api/mcp)
- [DevTools Protocol](https://scrapfly.io/docs/cloud-browser-api/cdp-reference)

##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [Rust](https://scrapfly.io/docs/sdk/rust)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Python SDK

The Python SDK gives you a handy abstraction for interacting with the **Scrapfly API**. It covers all Scrapfly features and adds many convenient shortcuts:

- Automatic base64 encoding of JavaScript snippets
- Error handling
- JSON encoding of the body when `Content-Type: application/json` is set
- URL encoding of the body and `Content-Type: application/x-www-form-urlencoded` set when no content type is specified
- Conversion of binary responses into a Python `BytesIO` object
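
For example, a minimal sketch of these shortcuts in action (the `js` and `data` parameters come from `ScrapeConfig`; treat the exact combination as illustrative):

```
from scrapfly import ScrapeConfig

# The SDK base64-encodes the JS snippet and JSON-encodes the body for you
config = ScrapeConfig(
    url='https://httpbin.dev/anything',
    render_js=True,
    js='return document.title',  # sent base64-encoded automatically
    method='POST',
    headers={'Content-Type': 'application/json'},
    data={'query': 'chocolate'},  # JSON-encoded because of the content type
)
```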
 
### Step by Step Introduction

For a hands-on introduction, see our Scrapfly SDK introduction page!

[Discover Now](https://scrapfly.io/docs/onboarding)

The full Python API specification is available here: [https://scrapfly.github.io/python-scrapfly/scrapfly/](https://scrapfly.github.io/python-scrapfly/scrapfly/)

> For more on using the Python SDK with Scrapfly, select the "Python SDK" option in the Scrapfly docs top bar.

## Installation

The source code of the **Python SDK** is available on [GitHub](https://github.com/scrapfly/python-scrapfly), and the **scrapfly-sdk** package is available through [PyPI](https://pypi.org).

 ```
pip install 'scrapfly-sdk'
```

You can also install the `scrapfly-sdk[speedups]` extra to get **[brotli](https://github.com/google/brotli)** compression and **[msgpack](https://msgpack.org)** serialization benefits.

 ```
pip install 'scrapfly-sdk[speedups]'
```

You can also install `scrapfly-sdk[all]` to get all optional features at once, without any extra performance impact.

 ```
pip install 'scrapfly-sdk[all]'
```

## Scrape

> If you plan to scrape a protected website, **make sure to enable [Anti Scraping Protection](https://scrapfly.io/docs/onboarding#asp)**.

```
from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse

scrapfly = ScrapflyClient(key='{{ YOUR_API_KEY }}')

api_response: ScrapeApiResponse = scrapfly.scrape(scrape_config=ScrapeConfig(url='https://httpbin.dev/anything'))

# Automatically retry errors marked "retryable", waiting the recommended delay between attempts
api_response: ScrapeApiResponse = scrapfly.resilient_scrape(scrape_config=ScrapeConfig(url='https://httpbin.dev/anything'))

# Automatically retry based on the upstream status code
api_response: ScrapeApiResponse = scrapfly.resilient_scrape(scrape_config=ScrapeConfig(url='https://httpbin.dev/status/500'), retry_on_status_code=[500])

# Scrape result: content, iframes, response headers, response cookie state, screenshots, ssl, dns, etc.
print(api_response.scrape_result)

# HTML content
print(api_response.scrape_result['content'])

# Context of the scrape: session, webhook, asp, cache, debug
print(api_response.context)

# Raw API result
print(api_response.content)

# True if the scrape responded with a 2xx HTTP status (>= 200, < 300)
print(api_response.success)

# Status code of the Scrapfly API call - not the status code of the scraped page!
print(api_response.status_code)

# Upstream website status code
print(api_response.upstream_status_code)

# Convert the API scrape result into a well-known requests.Response object
print(api_response.upstream_result_into_response())
```

Discover the full Python API specification:

- Client : <https://scrapfly.github.io/python-scrapfly/scrapfly/client.html>
- ScrapeConfig : [https://scrapfly.github.io/python-scrapfly/scrapfly/scrape\_config.html](https://scrapfly.github.io/python-scrapfly/scrapfly/scrape_config.html)
- API response : [https://scrapfly.github.io/python-scrapfly/scrapfly/api\_response.html](https://scrapfly.github.io/python-scrapfly/scrapfly/api_response.html)
 
### Using Context

 ```
from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse

scrapfly = ScrapflyClient(key='{{ YOUR_API_KEY }}')

with scrapfly as scraper:
    response: ScrapeApiResponse = scraper.scrape(ScrapeConfig(url='https://httpbin.dev/anything', country='fr'))
```

## How to Configure the Scrape Query

You can check the `ScrapeConfig` implementation to see all available options, [documented here](https://scrapfly.github.io/python-scrapfly/scrapfly/scrape_config.html).

All parameters listed in this documentation can be used when you construct the scrape config object.
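
For example, a short sketch combining a few commonly documented options:

```
from scrapfly import ScrapeConfig

# A few commonly used options - see the ScrapeConfig reference for the full list
config = ScrapeConfig(
    url='https://httpbin.dev/anything',
    country='fr',          # proxy geolocation
    asp=True,              # Anti Scraping Protection
    render_js=True,        # JavaScript rendering
    cache=True,            # serve from cache when available
    session='my-session',  # reuse cookies across scrapes
)
```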

## Download Binary Response

```
from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse

scrapfly = ScrapflyClient(key='{{ YOUR_API_KEY }}')

api_response: ScrapeApiResponse = scrapfly.scrape(scrape_config=ScrapeConfig(url='https://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html'))
scrapfly.sink(api_response)  # you can specify path and name via named arguments
```

## Error Handling

Error handling is a big part of any scraper, so we designed an error system that reflects what went wrong so you can handle it properly from your scraper. Here is a simple snippet for handling errors on your own:

```
from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse, UpstreamHttpClientError, \
    ScrapflyScrapeError, UpstreamHttpServerError

scrapfly = ScrapflyClient(key='{{ YOUR_API_KEY }}')

try:
    api_response: ScrapeApiResponse = scrapfly.scrape(scrape_config=ScrapeConfig(
        url='https://httpbin.dev/status/404',
    ))
except UpstreamHttpClientError as e:  # upstream HTTP status 4xx
    print(e.api_response.scrape_result['error'])
    raise e
except UpstreamHttpServerError as e:  # upstream HTTP status 5xx
    print(e.api_response.scrape_result['error'])
    raise e
# UpstreamHttpError can be used to catch all errors related to the upstream website
except ScrapflyScrapeError as e:
    print(e.message)
    print(e.code)
    raise e
```

Errors with their codes and explanations are documented [here](https://scrapfly.io/docs/scrape-api/errors) if you want to know more.

- [scrapfly.UpstreamHttpClientError](https://scrapfly.github.io/python-scrapfly/scrapfly/index.html#scrapfly.UpstreamHttpClientError) The upstream website you scrape responded with an HTTP status >= 400 and < 500
- [scrapfly.UpstreamHttpServerError](https://scrapfly.github.io/python-scrapfly/scrapfly/index.html#scrapfly.UpstreamHttpServerError) The upstream website you scrape responded with an HTTP status >= 500 and < 600
- [scrapfly.ApiHttpClientError](https://scrapfly.github.io/python-scrapfly/scrapfly/index.html#scrapfly.ApiHttpClientError) The Scrapfly API responded with an HTTP status >= 400 and < 500
- [scrapfly.ApiHttpServerError](https://scrapfly.github.io/python-scrapfly/scrapfly/index.html#scrapfly.ApiHttpServerError) The Scrapfly API responded with an HTTP status >= 500 and < 600
- [scrapfly.ScrapflyProxyError](https://scrapfly.github.io/python-scrapfly/scrapfly/index.html#scrapfly.ScrapflyProxyError) Proxy-related error
- [scrapfly.ScrapflyThrottleError](https://scrapfly.github.io/python-scrapfly/scrapfly/index.html#scrapfly.ScrapflyThrottleError) Throttling-related error
- [scrapfly.ScrapflyAspError](https://scrapfly.github.io/python-scrapfly/scrapfly/index.html#scrapfly.ScrapflyAspError) ASP-related error
- [scrapfly.ScrapflyScheduleError](https://scrapfly.github.io/python-scrapfly/scrapfly/index.html#scrapfly.ScrapflyScheduleError) Schedule-related error
- [scrapfly.ScrapflyWebhookError](https://scrapfly.github.io/python-scrapfly/scrapfly/index.html#scrapfly.ScrapflyWebhookError) Webhook-related error
- [scrapfly.ScrapflySessionError](https://scrapfly.github.io/python-scrapfly/scrapfly/index.html#scrapfly.ScrapflySessionError) Session-related error
- [scrapfly.TooManyConcurrentRequest](https://scrapfly.github.io/python-scrapfly/scrapfly/index.html#scrapfly.TooManyConcurrentRequest) The maximum number of concurrent requests allowed by your plan has been reached
- [scrapfly.QuotaLimitReached](https://scrapfly.github.io/python-scrapfly/scrapfly/index.html#scrapfly.QuotaLimitReached) The quota limit of your plan or project has been reached
 
```
error.message             # Error message
error.code                # Error code
error.retry_delay         # Recommended wait time before retrying, if retryable
error.retry_times         # Recommended number of retries, if retryable
error.resource            # Related resource: Proxy, ASP, Webhook, Spider
error.is_retryable        # True or False
error.documentation_url   # Documentation explaining the error in detail
error.api_response        # API response object
error.http_status_code    # HTTP status code
```
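
For example, a minimal sketch using these attributes when catching the base error class (assuming `ScrapflyError` is exported by `scrapfly.errors`):

```
from scrapfly import ScrapeConfig, ScrapflyClient
from scrapfly.errors import ScrapflyError

scrapfly = ScrapflyClient(key='{{ YOUR_API_KEY }}')

try:
    scrapfly.scrape(scrape_config=ScrapeConfig(url='https://httpbin.dev/status/404'))
except ScrapflyError as e:
    # Decide whether and when to retry based on the error metadata
    if e.is_retryable:
        print(f"retryable: wait {e.retry_delay}s, retry up to {e.retry_times} times")
    print(e.code, e.documentation_url)
```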

By default, if the upstream website you scrape responds with a bad HTTP status code, the SDK raises `UpstreamHttpClientError` or `UpstreamHttpServerError` depending on the status code. You can disable this behavior by setting the **raise\_on\_upstream\_error** attribute to `False`: `ScrapeConfig(raise_on_upstream_error=False)`.
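
For example:

```
from scrapfly import ScrapeConfig, ScrapflyClient

scrapfly = ScrapflyClient(key='{{ YOUR_API_KEY }}')

# With raise_on_upstream_error=False an upstream 404 no longer raises;
# inspect the status code yourself instead
api_response = scrapfly.scrape(scrape_config=ScrapeConfig(
    url='https://httpbin.dev/status/404',
    raise_on_upstream_error=False,
))
print(api_response.upstream_status_code)  # 404
```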

If you want to report errors to your application for monitoring or tracking purposes, check out the [reporter](https://scrapfly.io/docs/onboarding#reporter) feature.

## Account

You can retrieve account information:

 ```
from scrapfly import ScrapflyClient

scrapfly = ScrapflyClient(key='{{ YOUR_API_KEY }}')
print(scrapfly.account())
```

## Keep Alive HTTP Session

Take advantage of `Keep-Alive` connections:

 ```
from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse

scrapfly = ScrapflyClient(key='{{ YOUR_API_KEY }}')

with scrapfly as client:
    api_response: ScrapeApiResponse = client.scrape(scrape_config=ScrapeConfig(
        url='https://news.ycombinator.com/',
        render_js=True,
        screenshots={
            'main': 'fullpage'
        }
    ))
    # more scrape calls
```

## Concurrency out of the box

You can run scrapes concurrently out of the box; the SDK uses `asyncio` for that.

In Python there are many ways to achieve concurrency; you can also check these standard-library alternatives (a thread-based sketch follows the list):

- [ProcessPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor)
- [ThreadPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor)
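
If you prefer threads over `asyncio`, here is a minimal thread-based sketch using the SDK's blocking `scrape()` call (illustrative; if you hit thread-safety issues, create one client per thread):

```
from concurrent.futures import ThreadPoolExecutor

from scrapfly import ScrapeConfig, ScrapflyClient

scrapfly = ScrapflyClient(key='{{ YOUR_API_KEY }}')
urls = ['https://httpbin.dev/anything'] * 4

# Each worker thread issues a blocking scrape() call; the pool caps parallelism
with ThreadPoolExecutor(max_workers=2) as pool:
    for api_response in pool.map(lambda url: scrapfly.scrape(ScrapeConfig(url=url)), urls):
        print(api_response.status_code)
```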
 
For the SDK's built-in `asyncio` concurrency, first ensure you have installed the concurrency extras:

 ```
pip install 'scrapfly-sdk[concurrency]'
```

```
import asyncio
import logging
from sys import stdout

from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse

# Enable debug logging for the scrapfly logger, printing to stdout
scrapfly_logger = logging.getLogger('scrapfly')
scrapfly_logger.setLevel(logging.DEBUG)
scrapfly_logger.addHandler(logging.StreamHandler(stdout))

scrapfly = ScrapflyClient(key='{{ YOUR_API_KEY }}', max_concurrency=2)

async def main():
    # Eight identical targets; max_concurrency=2 caps how many run in parallel
    targets = [
        ScrapeConfig(url='https://httpbin.dev/anything', render_js=True)
        for _ in range(8)
    ]
    async for result in scrapfly.concurrent_scrape(scrape_configs=targets):
        print(result)

asyncio.run(main())
```

## Webhook Server

The **Scrapfly Python SDK** offers a built-in webhook server feature, allowing developers to easily set up and handle webhooks for receiving notifications and data from Scrapfly services. This documentation provides an overview of the `create_server` function within the SDK, along with an example of its usage.

### Example Usage

> To expose the local server to the internet we use [ngrok](https://ngrok.com/); you need a free account to run the example.

Below is an example demonstrating how to use the `create_server` function to set up a webhook server:

1. Install dependencies: `pip install ngrok flask scrapfly-sdk`
2. Export your ngrok auth token in your terminal: `export NGROK_AUTHTOKEN=MY_NGROK_TOKEN`
3. Create a webhook on your [Scrapfly dashboard](https://scrapfly.io/dashboard/webhook) with any endpoint (for example from [https://webhook.site](https://webhook.site/)). Since the ngrok endpoint is only known at runtime and is random on each run, we will edit the endpoint once ngrok advertises it in a later step.
4. Retrieve your webhook signing secret
5. Run the command `python webhook_server.py --signing-secret=MY_SIGNING_SECRET`
6. Once the server is running, copy the exposed URL advertised below the log line `"====== LISTENING ON ======"`
7. [Edit your webhook](https://scrapfly.io/dashboard/webhook) URL and replace it with the advertised URL

> With the ngrok free plan, a new random tunnel URL is assigned on each server start, so you need to edit the webhook URL every time.

 ```
import argparse
from typing import Dict
import flask
import ngrok
from scrapfly import webhook
from scrapfly.webhook import ResourceType

# Define the webhook callback function
def webhook_callback(data: Dict, resource_type: ResourceType, request: flask.Request):
    if resource_type == ResourceType.SCRAPE.value:
        # Process scrape result
        upstream_response = data['result']
        print(upstream_response)
    else:
        # Process other resource types
        print(data)

# Set up ngrok listener for tunneling
listener = ngrok.werkzeug_develop()

# Parse command-line arguments
parser = argparse.ArgumentParser(description="Webhook server with signing secret")
parser.add_argument("--signing-secret", required=True, help="Signing secret to verify webhook payload integrity")
args = parser.parse_args()

# Create Flask application and set up webhook server
app = flask.Flask("Scrapfly Webhook Server")
webhook.create_server(signing_secrets=(args.signing_secret,), callback=webhook_callback, app=app)

# Start the server and print the webhook endpoint URL
print("====== LISTENING ON ======")
print(listener.url() + "/webhook")
print("==========================")
app.run()
```

In this example, the webhook server is set up using `create_server`, with a callback function `webhook_callback` defined to handle incoming webhook payloads. The signing secret is provided as a command-line argument, and ngrok is used to expose the local server to the internet for testing.

## Screenshot API

 The Screenshot API captures full-page or viewport screenshots with headless browsers. It supports custom resolution, format, capture region, rendering options, caching and webhooks.

> See the [Screenshot API documentation](https://scrapfly.io/docs/screenshot-api/getting-started) for the full parameter reference.

### Basic Screenshot

 ```
from scrapfly import ScrapflyClient, ScreenshotConfig

client = ScrapflyClient(key="{{ YOUR_API_KEY }}")
result = client.screenshot(ScreenshotConfig(
    url="https://web-scraping.dev/",
    format="jpg",
    capture="fullpage",
))
with open("screenshot.jpg", "wb") as f:
    f.write(result.image)
```

### Screenshot with Options

Control quality, resolution, rendering wait, dark mode and more:

 ```
from scrapfly import ScrapflyClient, ScreenshotConfig

client = ScrapflyClient(key="{{ YOUR_API_KEY }}")
result = client.screenshot(ScreenshotConfig(
    url="https://web-scraping.dev/",
    format="png",
    capture="fullpage",
    resolution="1440x900",   # tablet: "768x1024", mobile: "375x812"
    rendering_wait=2000,     # wait 2s after page load
    options=["dark_mode", "block_banners"],
    country="us",
))
with open("screenshot.png", "wb") as f:
    f.write(result.image)
```

## Extraction API

 The Extraction API parses HTML/text and extracts structured data using templates, predefined AI models, or free-form LLM prompts.

> See the [Extraction API documentation](https://scrapfly.io/docs/extraction-api/getting-started) for all extraction models and template syntax.

### Predefined AI Model

Extract product data using a built-in model — no template required:

 ```
from scrapfly import ScrapflyClient, ExtractionConfig

client = ScrapflyClient(key="{{ YOUR_API_KEY }}")
result = client.extract(ExtractionConfig(
    body='...<h1>Orange Chocolate Box</h1><span class="price">$9.99</span>...',
    content_type="text/html",
    extraction_model="product",   # or: product_listing, article, review_list, ...
))
print(result.data)
```

### LLM Free-Form Prompt

 ```
from scrapfly import ScrapflyClient, ExtractionConfig

client = ScrapflyClient(key="{{ YOUR_API_KEY }}")
result = client.extract(ExtractionConfig(
    body="...<p>The GPU operates at 2.5 GHz with 24 GB VRAM...</p>...",
    content_type="text/html",
    extraction_prompt="Extract GPU name, clock speed and VRAM in GB as JSON",
))
print(result.data)
```

### Named Template

 ```
from scrapfly import ScrapflyClient, ExtractionConfig

client = ScrapflyClient(key="{{ YOUR_API_KEY }}")
# Use a saved extraction template by name
result = client.extract(ExtractionConfig(
    body='...<span itemprop="price">$9.99</span>...',
    content_type="text/html",
    extraction_template="my-product-template",
))
print(result.data)
```

## Crawler API

 The Crawler API recursively crawls a website starting from a seed URL. It handles URL discovery, deduplication, rate limiting, robots.txt compliance, sitemap parsing, content extraction and webhook callbacks.

The public test target [web-scraping.dev](https://web-scraping.dev) is used in the examples below; it accepts automated crawls.

> See the [Crawler API documentation](https://scrapfly.io/docs/crawler-api/getting-started) for the full parameter reference and webhook payload examples.

### Basic Crawl

 ```
from scrapfly import ScrapflyClient, CrawlerConfig, Crawl

client = ScrapflyClient(key="{{ YOUR_API_KEY }}")
crawl = Crawl(
    client,
    CrawlerConfig(
        url="https://web-scraping.dev/products",
        page_limit=10,
        content_formats=["markdown"],
    ),
)
crawl.crawl()
crawl.wait()

status = crawl.status()
print(f"Visited {status.state.urls_visited} pages")
```

### Crawl with Compliance Options

Control robots.txt respect, nofollow handling, and subdomain following:

 ```
from scrapfly import ScrapflyClient, CrawlerConfig, Crawl

client = ScrapflyClient(key="{{ YOUR_API_KEY }}")
crawl = Crawl(
    client,
    CrawlerConfig(
        url="https://web-scraping.dev/",
        page_limit=50,
        respect_robots_txt=True,
        ignore_no_follow=False,     # honour rel=nofollow links
        follow_internal_subdomains=False,
        content_formats=["markdown", "page_metadata"],
    ),
)
crawl.crawl()
crawl.wait()
```

### Crawl with Sitemap Discovery

 ```
from scrapfly import ScrapflyClient, CrawlerConfig, Crawl

client = ScrapflyClient(key="{{ YOUR_API_KEY }}")
crawl = Crawl(
    client,
    CrawlerConfig(
        url="https://web-scraping.dev/",
        use_sitemaps=True,          # discover URLs from sitemap.xml
        page_limit=100,
        max_depth=3,
        content_formats=["html", "markdown"],
    ),
)
crawl.crawl()
crawl.wait()
print(f"Visited {crawl.status().state.urls_visited} pages")
```

### Crawl with Webhooks

Receive real-time events as the crawler visits, discovers and finishes URLs:

 ```
from scrapfly import ScrapflyClient, CrawlerConfig, Crawl

client = ScrapflyClient(key="{{ YOUR_API_KEY }}")
crawl = Crawl(
    client,
    CrawlerConfig(
        url="https://web-scraping.dev/products",
        page_limit=50,
        # Replace with the name of a webhook you registered in your dashboard
        # at https://scrapfly.io/dashboard/webhook
        webhook_name="your-webhook-name",
        webhook_events=[
            "crawler_url_visited",
            "crawler_finished",
        ],
        content_formats=["markdown"],
    ),
)
crawl.crawl()
# the webhook receives the events, so there is no need to poll
```

### List Crawled URLs

Stream the list of URLs the crawler visited, skipped or failed on. The endpoint is paginated; iterate by incrementing `page` until the response is empty (a pagination sketch follows the example below).

 ```
from scrapfly import ScrapflyClient, CrawlerConfig, Crawl

client = ScrapflyClient(key="{{ YOUR_API_KEY }}")
crawl = Crawl(client, CrawlerConfig(url="https://web-scraping.dev/products", page_limit=10))
crawl.crawl()
crawl.wait()

# Stream visited URLs (default status filter is 'visited')
visited = crawl.urls(status="visited", page=1, per_page=100)
for entry in visited:
    print(entry.url)

# Failed URLs include the reason as a CSV-style suffix
for entry in crawl.urls(status="failed"):
    print(entry.url, "->", entry.reason)
```
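
To walk all pages, a small pagination sketch (stopping when a page comes back empty):

```
# Paginate through every visited URL until an empty page is returned
page = 1
while True:
    entries = list(crawl.urls(status="visited", page=page, per_page=100))
    if not entries:
        break
    for entry in entries:
        print(entry.url)
    page += 1
```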

### Read a Single Page's Content

 Use `Crawl.read()` to fetch one page in plain mode (no JSON envelope) — the returned `CrawlContent` wraps the raw bytes plus the originating URL. Returns `None` if the URL was not part of this crawl.

 ```
from scrapfly import ScrapflyClient, CrawlerConfig, Crawl

client = ScrapflyClient(key="{{ YOUR_API_KEY }}")
crawl = Crawl(client, CrawlerConfig(url="https://web-scraping.dev/products", page_limit=5,
                                    content_formats=["markdown"]))
crawl.crawl()
crawl.wait()

content = crawl.read("https://web-scraping.dev/products", format="markdown")
if content is not None:
    print(content.content[:200])

# For multiple URLs in one round-trip:
batch = crawl.read_batch(
    urls=["https://web-scraping.dev/products", "https://web-scraping.dev/product/1"],
    formats=["markdown"],
)
for url, formats in batch.items():
    print(url, "->", len(formats["markdown"]), "chars")
```

### Download WARC and HAR Artifacts

 WARC archives every HTTP exchange (request + response + body) as it happened on the wire. HAR captures network timings, headers and the response body in a JSON-friendly format. The Python SDK ships `WarcParser` and `HarArchive` so you don't need a third-party library.

 ```
from scrapfly import ScrapflyClient, CrawlerConfig, Crawl

client = ScrapflyClient(key="{{ YOUR_API_KEY }}")
crawl = Crawl(client, CrawlerConfig(url="https://web-scraping.dev/products", page_limit=10))
crawl.crawl()
crawl.wait()

# WARC: iterate response records — content, headers, status code, URL
warc = crawl.warc()
for record in warc.iter_responses():
    print(record.status_code, record.url, len(record.content), "bytes")
warc.save("crawl.warc.gz")

# HAR: high-level filters for status / content-type / URL.
# `crawl.har()` returns a CrawlerArtifactResponse — its `.parser` is a HarArchive.
har = crawl.har()
for entry in har.parser.filter_by_status(200):
    print(entry.method, entry.url, entry.content_type)
```

### Cancel a Running Crawl

 Stop a crawler before it reaches its natural end (e.g. on a runaway crawl, a budget cap, or user navigation away). The status will transition to `CANCELLED` with `state.stop_reason="user_cancelled"`.

 ```
from scrapfly import ScrapflyClient, CrawlerConfig, Crawl

client = ScrapflyClient(key="{{ YOUR_API_KEY }}")
crawl = Crawl(client, CrawlerConfig(url="https://web-scraping.dev/products", page_limit=1000))
crawl.crawl()

# ... later, from another worker / signal handler / UI ...
crawl.cancel()

# Pass allow_cancelled=True so wait() returns normally on the cancellation
# we just triggered ourselves, instead of raising ScrapflyCrawlerError.
crawl.wait(allow_cancelled=True)

status = crawl.status()
assert status.is_cancelled
print(f"stop_reason={status.state.stop_reason}")
```

### Handle Webhook Events

 Use `webhook_from_payload()` to parse incoming webhook bodies into typed dataclasses. The four lifecycle events (started/stopped/cancelled/finished) share `CrawlerLifecycleWebhook`; the four URL events have their own classes. Field names match the wire format and the scrape-engine source of truth.

 ```
from flask import Flask, request
from scrapfly import (
    webhook_from_payload,
    CrawlerLifecycleWebhook,
    CrawlerUrlVisitedWebhook,
    CrawlerUrlFailedWebhook,
    CrawlerWebhookEvent,
)

app = Flask(__name__)
SIGNING_SECRETS = ("your-hex-secret",)

@app.route("/webhook", methods=["POST"])
def crawler_webhook():
    wh = webhook_from_payload(
        request.json,
        signing_secrets=SIGNING_SECRETS,
        signature=request.headers.get("X-Scrapfly-Webhook-Signature"),
    )

    # Common fields on every event
    print(f"[{wh.event}] {wh.crawler_uuid} "
          f"visited={wh.state.urls_visited}/{wh.state.urls_extracted}")

    if isinstance(wh, CrawlerLifecycleWebhook):
        if wh.event == CrawlerWebhookEvent.CRAWLER_FINISHED.value:
            print(f"  finished — credits={wh.state.api_credit_used}")
    elif isinstance(wh, CrawlerUrlVisitedWebhook):
        print(f"  visited {wh.url} [{wh.scrape.status_code}]")
    elif isinstance(wh, CrawlerUrlFailedWebhook):
        print(f"  failed {wh.url}: {wh.error}")

    return "", 200
```

## Cloud Browser API

 The Cloud Browser API provides a fully managed remote browser that bypasses anti-bot protection (Cloudflare, DataDome, Imperva, etc.) and hands you a live Playwright/Puppeteer-compatible WebSocket connection.

Install the extra dependency first:

 ```
pip install 'scrapfly-sdk[all]' playwright && playwright install chromium
```

### Basic Session

 ```
from scrapfly import ScrapflyClient
from playwright.sync_api import sync_playwright

client = ScrapflyClient(key="{{ YOUR_API_KEY }}")

result = client.cloud_browser_unblock(
    url="https://web-scraping.dev/product/1",
)
session_id = result["session_id"]

with sync_playwright() as p:
    browser = p.chromium.connect_over_cdp(result["ws_url"])
    page = browser.contexts[0].pages[0]
    print("Title:", page.title())
    print("URL:", page.url)
    browser.close()
```

### Session with Country

 ```
from scrapfly import ScrapflyClient
from playwright.sync_api import sync_playwright

client = ScrapflyClient(key="{{ YOUR_API_KEY }}")

result = client.cloud_browser_unblock(
    url="https://web-scraping.dev/product/1",
    country="us",
)

with sync_playwright() as p:
    browser = p.chromium.connect_over_cdp(result["ws_url"])
    page = browser.contexts[0].pages[0]

    # Navigate within the same session
    page.goto("https://web-scraping.dev/products")
    page.wait_for_selector(".product")
    products = page.query_selector_all(".product-name")
    for product in products:
        print(product.inner_text())
    browser.close()
```

## External Integration

### LlamaIndex

 LlamaIndex, formerly known as GPT Index, is a data framework designed to facilitate the connection between large language models (LLMs) and a wide variety of data sources. It provides tools to effectively ingest, index, and query data within these models.

[ Integrate Scrapfly with LlamaIndex ](https://docs.llamaindex.ai/en/stable/examples/data_connectors/WebPageDemo/?h=scrap#using-scrapfly)

### LangChain

 LangChain is a robust framework designed for developing applications powered by language models. It focuses on enabling the creation of applications that can leverage the capabilities of large language models (LLMs) for a variety of use cases.

[ Integrate Scrapfly with LangChain ](https://python.langchain.com/v0.2/docs/integrations/document_loaders/scrapfly/#scrapfly)
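
As a quick illustration, a hedged sketch of loading pages through the LangChain integration (the `ScrapflyLoader` class name and parameters are assumed from the integration docs linked above):

```
from langchain_community.document_loaders import ScrapflyLoader

# Load pages as LangChain Documents via Scrapfly (parameters assumed;
# see the linked integration docs for the authoritative reference)
loader = ScrapflyLoader(
    ["https://web-scraping.dev/products"],
    api_key="YOUR_API_KEY",
    continue_on_failure=True,
)
documents = loader.load()
print(documents[0].page_content[:200])
```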