# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)

##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)

##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [Rust](https://scrapfly.io/docs/sdk/rust)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Understanding Scrapfly Timeouts

 Scrapfly's [timeout configuration](https://scrapfly.io/docs/scrape-api/getting-started?language=python#api_param_timeouts) allows you to **set a deadline** for each scrape request. If a scrape doesn't complete within the defined timeout, it will be stopped and a Scrapfly error response will be returned.

> **Critical: Configure Your HTTP Client Timeout**
>  For the best experience, configure your HTTP client with a **minimum timeout of 155 seconds**.
>  If you use a custom Scrapfly timeout, add `+5s` overhead to your client read timeout.

## Quick Reference 

 Common timeout configurations for different scenarios:

| Scenario | Scrapfly Parameters | Your HTTP Client Timeout |
|---|---|---|
| **Default (Managed by Scrapfly)**<br>Best for most use cases | `retry=true` (default) | 155s |
| **Simple HTML Scraping**<br>No JavaScript, no ASP | `retry=false`<br>`timeout=15000` | 20s (15s + 5s overhead) |
| **JavaScript Rendering**<br>Browser-based scraping | `retry=false`<br>`timeout=30000` | 35s (30s + 5s overhead) |
| **Anti-Scraping Protection (ASP)**<br>Bypassing bot protection | `retry=false`<br>`timeout=60000` | 65s (60s + 5s overhead) |
| **Complex JavaScript Scenarios**<br>Multi-step browser automation | `retry=false`<br>`timeout=90000` | 95s (90s + 5s overhead) |
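
The rule in the last column can be written as a small helper (a sketch; `client_timeout_seconds` is our name, not part of any Scrapfly SDK):

```python
def client_timeout_seconds(scrapfly_timeout_ms=None):
    """Pick a client read timeout: 155s when Scrapfly manages timeouts
    (retry=true, no custom timeout), otherwise the custom Scrapfly
    timeout plus 5s of overhead."""
    if scrapfly_timeout_ms is None:
        return 155
    return scrapfly_timeout_ms // 1000 + 5
```

For example, `client_timeout_seconds(60000)` yields the 65s recommended for ASP scraping with a 60s Scrapfly timeout.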

### How Timeouts Work

 Scrapfly scrape speeds depend on many factors including:

- **JavaScript rendering:** Browser-based scraping takes longer than simple HTTP requests
- **JavaScript scenarios:** Complex browser automation adds execution time
- **Anti-bot bypass:** Solving CAPTCHAs and bypassing protection mechanisms requires additional time
- **Website performance:** Slow or unresponsive websites naturally take longer to scrape
 
 **Typical scrape durations:**

- Simple scrapes: **Less than 5 seconds**
- JavaScript rendering: **10-30 seconds**
- Complex scenarios or anti-bot bypass: **30-90 seconds**
 
### When Should I Configure Timeout?

 Generally, it's best to trust Scrapfly's default timeout management (`retry=true`). However, custom timeouts are useful for:

##### Real-Time Scraping

When you need the fastest possible response and can accept failures, use lower timeouts to avoid waiting unnecessarily.

##### Slow Websites

For websites with heavy JavaScript or slow response times, increase the timeout to allow more time for completion.

##### Complex Automation

JavaScript scenarios with multiple steps (clicking, scrolling, form filling) require longer timeouts to complete all actions.

##### Anti-Bot Bypass

When using ASP with `retry=false`, increase the timeout to at least 60 seconds to allow time for protection bypass.

> **Important:** Custom timeout configuration requires `retry=false`. With `retry=true` (default), Scrapfly automatically manages timeouts for optimal results.

### Timeout Requirements & Limits

 Timeout requirements vary based on enabled features:

 | Configuration | Default Timeout | Minimum Allowed | Maximum Allowed |
|---|---|---|---|
| `asp=false, js=false` | 15s | 15s | 30s |
| `asp=false, js=true` (no scenario) | 30s | 30s | 60s |
| `asp=false, js=true` (with scenario) | 30s | 30s | 90s |
| `asp=true` | 30s | 30s | 150s |

> **ASP + retry=false Recommendation:** When using `asp=true` with `retry=false`, the default 30s timeout may not be sufficient. We recommend a **minimum of 60 seconds** to allow adequate time for anti-bot protection bypass.
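
Following that recommendation, an ASP request with an explicit 60s budget might be parameterized like this (a sketch; the target URL is a placeholder and the actual request call is shown commented out):

```python
params = {
    "key": "__API_KEY__",
    "url": "https://example.com",  # placeholder target
    "asp": "true",
    "retry": "false",
    "timeout": 60000,  # 60s minimum recommended for ASP with retry=false
}

# Client read timeout: 60s Scrapfly timeout + 5s overhead = 65s
# import requests
# response = requests.get("https://api.scrapfly.io/scrape", params=params, timeout=65)
```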

 

### Timeout Flow Visualization

 This diagram shows how Scrapfly determines timeout values based on your configuration. **Blue dashed boxes** indicate configurable timeouts, while **red boxes** indicate timeouts managed automatically by Scrapfly.



> #####  Understanding the Diagram
> 
> - **retry=true (Right Path):** Scrapfly automatically manages timeouts and retries. Your client timeout should be **155 seconds**.
> - **retry=false (Left Path):** You control the timeout explicitly. Add **+5s overhead** to your client timeout.
> - **Blue Dashed Boxes:** Timeouts you can customize with the `timeout` parameter.
> - **Red Boxes:** Fixed timeouts managed by Scrapfly (with `retry=true`).

 

### Usage Examples

 To configure a custom scrape timeout, use `retry=false` and `timeout=<milliseconds>` query parameters.

#### Example: 20 Second Timeout

Python:

```python
import requests

url = "https://api.scrapfly.io/scrape"

params = {
    "retry": False,
    "timeout": 20000,
    "key": "__API_KEY__",
    "url": "https://httpbin.dev/delay/5",
}

# Client timeout: 20s Scrapfly timeout + 5s overhead
response = requests.get(url, params=params, timeout=25)
# Raise exception for HTTP errors (4xx, 5xx)
response.raise_for_status()

data = response.json()
print(data)

# Access the scrape result
if 'result' in data:
    print(data['result'])

```

 

HTTP:

```
https://api.scrapfly.io/scrape?retry=false&timeout=20000&key=&url=https%3A%2F%2Fhttpbin.dev%2Fdelay%2F5
```

 

 

 

> **Remember:** Your HTTP client timeout should be **25 seconds** (20s + 5s overhead) for this example.

#### Client Configuration Examples

Each snippet below sets the client-side timeout for a 90s Scrapfly timeout (`retry=false&timeout=90000`). These are minimal sketches: `__API_KEY__` and `https://example.com` are placeholders, and the Node.js example assumes Node 18+ for the global `fetch`.

Python client with a 95s timeout:

```python
import requests

response = requests.get(
    "https://api.scrapfly.io/scrape",
    params={"key": "__API_KEY__", "url": "https://example.com", "retry": "false", "timeout": 90000},
    timeout=95,  # 90s Scrapfly timeout + 5s overhead
)
```

TypeScript / Node.js client with a 90s server timeout:

```javascript
const response = await fetch(
  "https://api.scrapfly.io/scrape?key=__API_KEY__&url=https%3A%2F%2Fexample.com&retry=false&timeout=90000",
  { signal: AbortSignal.timeout(95_000) } // 90s Scrapfly timeout + 5s overhead
);
```

PHP client with a 95s timeout:

```php
<?php
$ch = curl_init("https://api.scrapfly.io/scrape?key=__API_KEY__&url=https%3A%2F%2Fexample.com&retry=false&timeout=90000");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 95); // 90s Scrapfly timeout + 5s overhead
$response = curl_exec($ch);
```

### Frequently Asked Questions

##### I want to run a JavaScript scenario that requires 90s in the worst case. How should I configure it?

 

**Configuration:**

- Scrapfly: `retry=false&timeout=90000`
- Your HTTP client: **95 seconds** (90s + 5s overhead)
 
 This ensures your JavaScript scenario has the full 90 seconds to complete, and your client won't disconnect prematurely.
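
In code, that configuration might look like this (a sketch; the target URL is a placeholder, the scenario payload itself is omitted, and the request call is commented out):

```python
params = {
    "key": "__API_KEY__",
    "url": "https://example.com",  # placeholder target
    "render_js": "true",
    "retry": "false",
    "timeout": 90000,  # 90s worst-case scenario budget
}

client_timeout = 90000 // 1000 + 5  # 95s: scenario budget + 5s overhead
# import requests
# response = requests.get("https://api.scrapfly.io/scrape", params=params, timeout=client_timeout)
```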

 

 

 

##### I'm scraping a website without JavaScript and want the lowest timeout possible. What should I use?

 

**Configuration:**

- Scrapfly: `retry=false&timeout=15000`
- Your HTTP client: **20 seconds** (15s + 5s overhead)
 
 **Note:** This only works when `asp=false` and `render_js=false`. 15 seconds is the minimum allowed timeout for simple HTTP scraping.

 

 

 

##### Should I always use retry=false for custom timeouts?

 

 **Yes.** Custom timeout configuration requires `retry=false`. When `retry=true` (default), Scrapfly automatically manages timeouts and retries for optimal reliability.

 **Use retry=true when:**

- You want maximum reliability and don't mind longer wait times
- You're scraping difficult targets with anti-bot protection
- You want Scrapfly to handle retries automatically
 
 **Use retry=false when:**

- You need precise control over timeout durations
- You're implementing your own retry logic
- You need the fastest possible response (fail fast)
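
When you implement your own retry logic with `retry=false`, a minimal client-side loop might look like this (a sketch; `send` stands for whatever callable performs one Scrapfly request and raises on failure):

```python
import time

def scrape_with_retries(send, attempts=3, backoff_seconds=2.0):
    """Call `send()` up to `attempts` times, sleeping between failures.

    `send` should raise on error and return the response on success;
    the sleep grows linearly with each failed attempt.
    """
    for attempt in range(attempts):
        try:
            return send()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(backoff_seconds * (attempt + 1))
```

In production you would likely retry only on retryable Scrapfly errors rather than any `Exception`.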
 
 

 

 

##### Why do I need to add +5s overhead to my HTTP client timeout?

 

 The +5s overhead accounts for:

- **Network latency:** Time for request/response transmission
- **Processing overhead:** Time for Scrapfly to process and package the response
- **Connection establishment:** Initial connection setup time
 
 Without this overhead, your client might disconnect before receiving Scrapfly's response, even if the scrape completed successfully within the timeout.

 

 

 

##### What happens if a scrape exceeds the timeout?

 

 When a scrape exceeds the configured timeout:

1. The scrape operation is immediately stopped
2. A Scrapfly error response is returned
3. You'll receive one of the timeout-related error codes (see below)
4. No partial data is returned
 
 Check the [Related Errors](#errors) section for specific timeout error codes and their meanings.

 

 

 

 

 

 

 

## Related Errors 

 When a timeout occurs, you may encounter one of the following error codes. Click on each error for detailed information and troubleshooting steps.

- [ERR::SCRAPE::OPERATION\_TIMEOUT](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::OPERATION_TIMEOUT?iframe=1)
- [ERR::SCRAPE::SCENARIO\_TIMEOUT](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::SCENARIO_TIMEOUT?iframe=1)
- [ERR::ASP::TIMEOUT](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::TIMEOUT?iframe=1)
- [ERR::SCRAPE::DRIVER\_TIMEOUT](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DRIVER_TIMEOUT?iframe=1)
- [ERR::ASP::CAPTCHA\_TIMEOUT](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::CAPTCHA_TIMEOUT?iframe=1)
- [ERR::PROXY::TIMEOUT](https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::TIMEOUT?iframe=1)
- [ERR::EXTRACTION::TIMEOUT](https://scrapfly.io/docs/scrape-api/error/ERR::EXTRACTION::TIMEOUT?iframe=1)
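
All of the codes above contain the word `TIMEOUT`, so a client can classify them with a simple string check (a sketch; consult the Errors documentation for the exact shape of the error payload that carries the code):

```python
def is_timeout_error(error_code):
    """Return True for Scrapfly error codes that indicate a timeout,
    e.g. ERR::SCRAPE::OPERATION_TIMEOUT or ERR::ASP::CAPTCHA_TIMEOUT."""
    return "TIMEOUT" in error_code
```

A result like this could feed the retry logic discussed in the FAQ, retrying only when a timeout (rather than, say, a permanent configuration error) caused the failure.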