# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)

##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Batch (Multi-URL Scraping)](https://scrapfly.io/docs/scrape-api/batch)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Schedule](https://scrapfly.io/docs/scrape-api/schedule)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Schedule](https://scrapfly.io/docs/crawler-api/schedule)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Schedule](https://scrapfly.io/docs/screenshot-api/schedule)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [Captcha Solver](https://scrapfly.io/docs/cloud-browser-api/captcha-solver)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)
- [Native Browser MCP](https://scrapfly.io/docs/cloud-browser-api/mcp)
- [DevTools Protocol](https://scrapfly.io/docs/cloud-browser-api/cdp-reference)

##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [Rust](https://scrapfly.io/docs/sdk/rust)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Webhook


Scrapfly's [webhook](https://scrapfly.io/docs/extraction-api/getting-started#api_param_webhook_name) feature is ideal for managing long-running extraction tasks asynchronously. When a webhook is specified through the `webhook_name` parameter, Scrapfly calls your HTTP endpoint with the extraction response.

To start using webhooks, first create one in the [webhook management interface](https://scrapfly.io/dashboard/webhook).

*[Screenshot: webhook management page]*

The body sent to your endpoint is the same as a regular Extraction API response, plus webhook information in the context part.

*[Screenshot: webhook status report on the monitoring log page]*

> **Webhook Queue Size**
> The webhook queue size is the maximum number of webhooks that can be queued at once. After an extraction completes and your application is notified, the queue count decreases. This lets you schedule extractions beyond your subscription's concurrency limit; the scheduler handles the overflow and ensures your concurrency limit is respected.
> 
>  | FREE ($0.00/mo) | DISCOVERY ($30.00/mo) | PRO ($100.00/mo) | STARTUP ($250.00/mo) | ENTERPRISE ($500.00/mo) |
> |---|---|---|---|---|
> | 0 | 500 | 2,000 | 5,000 | 10,000 |

 [See in Your Dashboard](https://scrapfly.io/dashboard/webhook)

## Scope

Webhooks are scoped per Scrapfly [project](https://scrapfly.io/docs/project) and environment. Make sure to create a webhook for each of your projects and environments (test/live).

## Usage

> Webhooks can serve multiple purposes. In the context of the Extraction API, to confirm that a delivery is an extraction result, check that the `X-Scrapfly-Webhook-Resource-Type` header value is `extraction`.

To enable webhook callbacks, all you need to do is specify the `webhook_name` parameter in your extraction requests. Scrapfly immediately accepts your request with a `201` response, then the scheduler automatically calls your webhook URL with the extraction result.

Note that your webhook endpoint must respond with a `2xx` status code for the delivery to be considered a success. `3xx` redirects are followed, while `4xx` and `5xx` responses are considered failures and are retried per the retry policy.
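The status-code rules above can be sketched as a small classifier (illustrative only; the actual retry logic lives on Scrapfly's side):

```python
# Sketch of how a delivery outcome is classified, per the policy above:
# 2xx = success, 3xx = redirect (followed), 4xx/5xx = failure (retried).
def classify_delivery(status_code: int) -> str:
    if 200 <= status_code < 300:
        return "success"
    if 300 <= status_code < 400:
        return "redirect-followed"
    return "failure-retry"
```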

> The examples below assume you have a webhook named **my-webhook** registered. You can create it via the [web dashboard](https://scrapfly.io/dashboard/webhook).

 ```
curl -X POST \
-H "content-type: text/html" \
"https://api.scrapfly.io/extraction?key=&url=https%3A%2F%2Fweb-scraping.dev&extraction_prompt=Extract%20the%20product%20specification%20in%20json%20format&webhook_name=my-webhook" \
-d @product.html

```
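The same request can be sketched in Python. `YOUR_API_KEY` is a placeholder; the parameters mirror the curl call above:

```python
from urllib.parse import urlencode

# Build the Extraction API enqueue URL with the webhook_name parameter.
# YOUR_API_KEY is a placeholder for your actual key.
params = {
    "key": "YOUR_API_KEY",
    "url": "https://web-scraping.dev",
    "extraction_prompt": "Extract the product specification in json format",
    "webhook_name": "my-webhook",
}
endpoint = "https://api.scrapfly.io/extraction?" + urlencode(params)
# POST the raw HTML body to this endpoint with a matching content-type,
# e.g. requests.post(endpoint, data=html, headers={"content-type": "text/html"})
```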

#### Example Of Response

 ```
{
  "job_uuid": "7a3aa96d-fb0e-4c45-9b01-7c42f295dcac",
  "success": true,
  "webhook_name": "my-webhook",
  "webhook_queue_limit": 10000,
  "webhook_queued_element": 7,
  "webhook_uuid": "d7131802-1eba-4cc4-a6fd-5da6c8cf1f35"
}

```
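A sketch of what to do with this enqueue response: record `job_uuid` for later reconciliation and watch the queue headroom before scheduling more extractions (the in-memory `pending` store is illustrative; use your database in practice):

```python
import json

# The enqueue response from the example above.
response_body = '''{
  "job_uuid": "7a3aa96d-fb0e-4c45-9b01-7c42f295dcac",
  "success": true,
  "webhook_name": "my-webhook",
  "webhook_queue_limit": 10000,
  "webhook_queued_element": 7,
  "webhook_uuid": "d7131802-1eba-4cc4-a6fd-5da6c8cf1f35"
}'''
job = json.loads(response_body)

# Headroom left in the webhook queue before ERR::WEBHOOK::QUEUE_FULL.
remaining = job["webhook_queue_limit"] - job["webhook_queued_element"]

# Store the job id now; the delivery echoes it back in the
# X-Scrapfly-Webhook-Job-Id header for reconciliation.
pending = {job["job_uuid"]: "queued"}
```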

#### Tracking

When you enqueue an extraction, you receive a unique job id, `job_uuid`. When your webhook is notified, the same id is echoed back in the `X-Scrapfly-Webhook-Job-Id` response header so you can reconcile the delivery with the original job and track it in your system.
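The reconciliation step can be sketched as follows (the `pending` dict stands in for whatever store you wrote the `job_uuid` to at enqueue time):

```python
# Jobs recorded at enqueue time, keyed by job_uuid (illustrative store).
pending = {"7a3aa96d-fb0e-4c45-9b01-7c42f295dcac": "queued"}

def reconcile(headers: dict, pending: dict) -> bool:
    """Mark the matching job done; reject unknown or already-processed jobs."""
    job_id = headers.get("X-Scrapfly-Webhook-Job-Id")
    if pending.get(job_id) != "queued":
        return False  # unknown job, or a retry duplicate of a processed one
    pending[job_id] = "done"
    return True
```

Rejecting already-processed job ids also gives you deduplication for free when a delivery is retried.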

## Retry Policy

Webhook callbacks are retried when Scrapfly cannot reach the endpoint configured in your webhook settings, on the following schedule:

- 30 seconds
- 1 minute
- 5 minutes
- 30 minutes
- 1 hour
- 1 day
 
> If we fail to reach your application more than 100 times in a row, the webhook is automatically disabled and you will be notified. You can re-enable it from the UI at any time.

## Development

Useful tools for local development:

- <https://webhook.site> - Collect and display webhooks
- <https://ngrok.com> - Expose your local application to the internet through a secure tunnel
- <https://console.hookdeck.com> - Inspect, replay, and forward webhooks to your local application (combines the previous two)
 
## Security

 Webhooks are signed using HMAC (Hash-based Message Authentication Code) with the SHA-256 algorithm to ensure the integrity of the webhook content and verify its authenticity. This mechanism helps prevent tampering and ensures that webhook payloads are from trusted sources.

#### HMAC Overview

 HMAC is a cryptographic technique that combines a secret key with a hash function (in this case, SHA-256) to produce a fixed-size hash value known as the HMAC digest. This digest is unique to both the original message and the secret key, providing a secure way to verify the integrity and authenticity of the message.

#### Signature in HTTP Header

 When sending a webhook request, a signature is generated using HMAC-SHA256 and included in the HTTP header `X-Scrapfly-Webhook-Signature`. This signature is computed based on the webhook payload and a secret key known only to the sender and receiver.

#### Verification Process

Upon receiving a webhook request, the receiver extracts the payload and computes its own HMAC-SHA256 signature using the same secret key. It then compares this computed signature with the signature provided in the `X-Scrapfly-Webhook-Signature` header. If the two signatures match, it indicates that the payload has not been tampered with and originates from the expected sender.

 ```
import hmac
import hashlib

# Signing secret from your webhook settings (copy from the dashboard as-is)
secret_key = 'YOUR-WEBHOOK-SIGNING-SECRET'

# Raw request body bytes (HTTP stacks like Flask/FastAPI auto-decompress
# gzip/br for you; use those bytes directly, do NOT json.loads and
# re-serialize, that would change the byte sequence).
webhook_payload = b'{"data": "example"}'

# Compute HMAC-SHA256 signature. Scrapfly emits the digest as UPPERCASE hex.
computed_signature = hmac.new(
    secret_key.encode('utf-8'),
    webhook_payload,
    hashlib.sha256,
).hexdigest().upper()

# Compare with the signature from the X-Scrapfly-Webhook-Signature header
received_signature = '...'  # Extracted from X-Scrapfly-Webhook-Signature header
if hmac.compare_digest(computed_signature, received_signature):
    print("Signature verification successful. Payload is authentic.")
else:
    print("Signature verification failed. Payload may have been tampered with.")

```


#### Security Considerations

- **Keep Secret Key Secure:** The secret key used for HMAC computation should be kept confidential and not exposed publicly.
- **Use HTTPS:** Webhook communication should be conducted over HTTPS to ensure data privacy and integrity during transit.
- **Regular Key Rotation:** Periodically rotate the secret key used for HMAC computation to enhance security.
 
## Best Practices

#### Always verify the HMAC signature

 Compute the digest over the raw request body bytes (don't parse and re-serialize JSON, that changes the byte sequence) and compare with [`X-Scrapfly-Webhook-Signature`](#security) using a constant-time comparison. The same digest is also exposed lowercase as `X-Scrapfly-Webhook-Signature-Lowercase` for runtimes that lowercase headers.
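A minimal verification helper following this practice might look like the sketch below. The header names come from this page; accepting either casing variant and lowercasing both sides before comparison keeps it robust across runtimes:

```python
import hmac
import hashlib

def verify_signature(raw_body: bytes, headers: dict, secret: str) -> bool:
    """Verify the HMAC-SHA256 digest over the raw request body bytes."""
    # Accept either the uppercase or the lowercase signature header.
    received = headers.get("X-Scrapfly-Webhook-Signature") \
        or headers.get("X-Scrapfly-Webhook-Signature-Lowercase", "")
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # Normalise both sides, then compare in constant time.
    return hmac.compare_digest(expected.lower(), received.lower())
```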

#### Filter by resource type

 A single webhook URL can receive deliveries from multiple Scrapfly products. Check `X-Scrapfly-Webhook-Resource-Type` and only process when the value is `extraction`.

#### Acknowledge fast, process asynchronously

 Return `2xx` after verifying the signature and persisting the raw payload (queue, log, inbox table). Run database writes, downstream calls, and file downloads on a worker. Slow handlers cause retries.
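The pattern can be sketched with an in-memory queue and a worker thread; `handle_delivery` stands in for your framework's request handler, and the queue stands in for a durable store:

```python
import queue
import threading

# Persist-then-acknowledge: the handler only enqueues the raw payload.
inbox: "queue.Queue[bytes]" = queue.Queue()

def handle_delivery(raw_body: bytes) -> int:
    inbox.put(raw_body)  # persist first (here: an in-memory queue)
    return 204           # acknowledge immediately with a 2xx

def worker():
    while True:
        body = inbox.get()
        # ... database writes, downstream calls, file downloads ...
        inbox.task_done()

threading.Thread(target=worker, daemon=True).start()
```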

> **Managed gateway option** A managed webhook gateway such as [Hookdeck](https://hookdeck.com) can sit between Scrapfly and your endpoint to handle buffering, retries, and replay. Useful when your consumer scales to zero or can't sustain peak event rates.

#### Make your handler idempotent

 Retries can deliver the same event twice. Use `X-Scrapfly-Webhook-Job-Id` as the dedup key.

#### Use HTTPS

 Land on HTTPS directly. Scrapfly follows `3xx` redirects, but redirecting from HTTPS to HTTP exposes the payload and signature.

#### Watch the auto-disable threshold

 Failures (`4xx`, `5xx`, timeouts) are retried per the [retry policy](#retry_policy). After 100 consecutive failures the webhook is disabled and has to be re-enabled from the dashboard. Alert on success rate before you hit that.

## Troubleshooting

 Delivery history, payloads, and response codes are in the [webhook dashboard](https://scrapfly.io/dashboard/webhook). Check there first.

#### Not receiving any webhooks

- The webhook may be disabled. After 100 consecutive failures Scrapfly disables it automatically, re-enable from the dashboard.
- The webhook must exist in the same [project and environment](#scope) (test/live) as the request that triggered it.
- `localhost` and private IPs are not reachable. Use a forwarding tool from the [Development](#development) section.
 
#### Signature verification fails

- Compute over the raw body bytes. Decoding and re-encoding JSON produces a different byte sequence.
- The digest is uppercase hex. Normalise both sides, or use `X-Scrapfly-Webhook-Signature-Lowercase`.
- Use the signing secret exactly as shown in the dashboard. Don't trim or base64-decode it.
- Reverse proxies, CDNs, or WAFs in front of your endpoint can rewrite the body or strip headers. Check there.
- Use a constant-time comparison: `hmac.compare_digest` in Python, `crypto.timingSafeEqual` in Node, `hmac.Equal` in Go.
 
#### Timeouts and retries

- Acknowledge with `2xx` after persisting the payload, then run heavy work on a worker.
- Cold starts (serverless, scale-to-zero) can blow the latency budget on the first delivery after idle. Keep a warm instance, or expect the first call to be retried.
- Return `2xx` only for actual success. `4xx` and `5xx` trigger retries per the [retry policy](#retry_policy).
 
> **Persistent retry storms** If you keep failing the retry budget or approaching the 100-failure auto-disable threshold, a managed gateway like [Hookdeck](https://hookdeck.com) between Scrapfly and your endpoint gives you durable queueing, a configurable retry policy, and dashboard replay. Scrapfly only sees the gateway's `2xx`.

#### Duplicate deliveries

 Retries can deliver the same event twice. Dedup on `X-Scrapfly-Webhook-Job-Id` and keep writes idempotent. Expected behaviour, not a bug.

#### Wrong event reaching the handler

 If a non-extraction payload (for example a Crawler API event) hits your extraction handler, your URL is serving multiple products. Branch on `X-Scrapfly-Webhook-Resource-Type` and only process `extraction`.

## Headers

The following headers are added to each webhook request:

- `X-Scrapfly-Webhook-Env`: Environment in which the webhook was triggered
- `X-Scrapfly-Webhook-Project`: Related project name
- `X-Scrapfly-Webhook-Signature`: HMAC-SHA256 integrity signature, uppercase hex
- `X-Scrapfly-Webhook-Signature-Lowercase`: Same signature, lowercase. Provided because some platforms and managed webhook services (Hookdeck, AWS Lambda function URLs, certain edge runtimes) normalise header values to lowercase, which would otherwise break a strict string equality check against the uppercase variant. Either header is acceptable; pick the one that matches your runtime.
- `X-Scrapfly-Webhook-Id`: Unique webhook identifier
- `X-Scrapfly-Webhook-Name`: Name of the webhook
- `X-Scrapfly-Webhook-Resource-Type`: Resource type (e.g. `extraction`)
- `X-Scrapfly-Webhook-Job-Id`: Unique job identifier returned by the enqueue call
- `X-Scrapfly-Log-Uuid`, `X-Scrapfly-Log-Url`: Provided when the webhook delivery is associated with a Scrapfly log entry
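Since some runtimes lowercase header names, a small case-insensitive lookup helper (illustrative sketch) makes these headers safe to read anywhere:

```python
def get_header(headers: dict, name: str, default=None):
    """Look up a header regardless of the casing the platform delivered."""
    lowered = {k.lower(): v for k, v in headers.items()}
    return lowered.get(name.lower(), default)
```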
 
## Related Errors

All related errors are listed below. You can find the full description and an example error response in the [Errors section](https://scrapfly.io/docs/extraction-api/errors#webhook) of the documentation.

- [ERR::WEBHOOK::DISABLED](https://scrapfly.io/docs/extraction-api/error/ERR::WEBHOOK::DISABLED "Given webhook is disabled, please check out your webhook configuration for the current project / env") - Given webhook is disabled, please check out your webhook configuration for the current project / env
- [ERR::WEBHOOK::ENDPOINT\_UNREACHABLE](https://scrapfly.io/docs/extraction-api/error/ERR::WEBHOOK::ENDPOINT_UNREACHABLE "We were not able to contact your endpoint") - We were not able to contact your endpoint
- [ERR::WEBHOOK::MAX\_RETRY](https://scrapfly.io/docs/extraction-api/error/ERR::WEBHOOK::MAX_RETRY "Maximum retry exceeded on your webhook") - Maximum retry exceeded on your webhook
- [ERR::WEBHOOK::NOT\_FOUND](https://scrapfly.io/docs/extraction-api/error/ERR::WEBHOOK::NOT_FOUND "Unable to find the given webhook for the current project / env") - Unable to find the given webhook for the current project / env
- [ERR::WEBHOOK::QUEUE\_FULL](https://scrapfly.io/docs/extraction-api/error/ERR::WEBHOOK::QUEUE_FULL "You reach the limit of scheduled webhook - You must wait pending webhook are processed") - You reach the limit of scheduled webhook - You must wait pending webhook are processed
 
## Pricing

No additional fee is applied for webhook usage.

## Integration

- [Python SDK Built-in server and example](https://scrapfly.io/docs/sdk/python#webhook_server)