# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)

##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)

##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [Rust](https://scrapfly.io/docs/sdk/rust)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Crawler API Errors


 The Crawler API returns standard HTTP status codes and detailed error information to help you troubleshoot issues. This page lists error codes specific to crawler operations and inherited errors from the Web Scraping API.

> **Note:** Crawler API also inherits all error codes from the [Web Scraping API](https://scrapfly.io/docs/scrape-api/errors) since each crawled page is treated as a scrape request.

## Crawler-Specific Errors 

 The Crawler API has specific error codes that are unique to crawler operations:

####  [ ERR::CRAWLER::ALREADY\_SCHEDULED  ](https://scrapfly.io/docs/crawler-api/error/ERR::CRAWLER::ALREADY_SCHEDULED) 

The given crawler UUID is already scheduled.

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [Crawler Documentation](https://scrapfly.io/docs/crawler-api/getting-started)
    - [Crawler Troubleshooting](https://scrapfly.io/docs/crawler-api/troubleshoot)
    - [Related Error Doc](https://scrapfly.io/docs/crawler-api/error/ERR::CRAWLER::ALREADY_SCHEDULED)

####  [ ERR::CRAWLER::CONFIG\_ERROR  ](https://scrapfly.io/docs/crawler-api/error/ERR::CRAWLER::CONFIG_ERROR) 

Crawler configuration error

 

- **Retryable:** No
- **HTTP status code:** `400`
- **Documentation:**
    - [Crawler Documentation](https://scrapfly.io/docs/crawler-api/getting-started)
    - [Related Error Doc](https://scrapfly.io/docs/crawler-api/error/ERR::CRAWLER::CONFIG_ERROR)

####  [ ERR::CRAWLER::TIMEOUT  ](https://scrapfly.io/docs/crawler-api/error/ERR::CRAWLER::TIMEOUT) 

Crawler exceeded time limit

 

- **Retryable:** No
- **HTTP status code:** `408`
- **Documentation:**
    - [Crawler Documentation](https://scrapfly.io/docs/crawler-api/getting-started)
    - [Crawler Troubleshooting](https://scrapfly.io/docs/crawler-api/troubleshoot)
    - [Related Error Doc](https://scrapfly.io/docs/crawler-api/error/ERR::CRAWLER::TIMEOUT)
    - [Timeout Configuration](https://scrapfly.io/docs/crawler-api/getting-started#max_duration)

## Intelligent Error Handling 

 The Crawler automatically monitors and responds to errors during execution, protecting your crawl budget and preventing wasted API credits. Different error types trigger different automated responses.

> **Automatic Protection:** The Crawler intelligently stops, throttles, or monitors based on error patterns. You don't need to manually handle most error scenarios - the system protects you automatically.

### Fatal Errors - Immediate Stop 

 These errors immediately stop the crawler to prevent unnecessary API credit consumption. When encountered, the crawler terminates gracefully and returns results for URLs already crawled.

> **Immediate Termination:** Fatal errors stop the crawler instantly. Review and resolve these issues before restarting.

**Fatal error codes:**

- [ `ERR::SCRAPE::PROJECT_QUOTA_LIMIT_REACHED` ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::PROJECT_QUOTA_LIMIT_REACHED) - Your project has reached its API credit limit
- [ `ERR::SCRAPE::QUOTA_LIMIT_REACHED` ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::QUOTA_LIMIT_REACHED) - Your account has reached its API credit limit
- [ `ERR::THROTTLE::MAX_API_CREDIT_BUDGET_EXCEEDED` ](https://scrapfly.io/docs/scrape-api/error/ERR::THROTTLE::MAX_API_CREDIT_BUDGET_EXCEEDED) - Monthly budget exceeded
- [ `ERR::ACCOUNT::PAYMENT_REQUIRED` ](https://scrapfly.io/docs/scrape-api/error/ERR::ACCOUNT::PAYMENT_REQUIRED) - Payment required to continue service
- [ `ERR::ACCOUNT::SUSPENDED` ](https://scrapfly.io/docs/scrape-api/error/ERR::ACCOUNT::SUSPENDED) - Account suspended
 
**What happens when a fatal error occurs:**

1. Crawler stops immediately (no new URLs are crawled)
2. URLs already crawled are saved with their results
3. Crawler status transitions to `completed` or `failed`
4. Error details are included in the crawler response
 
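Client-side tooling can mirror this stop behavior with a small classifier, so that your own retry logic reacts the same way the crawler does. A minimal sketch (the error codes are the ones listed above; the function name is illustrative, not part of the API):

```python
# Fatal error codes that stop the whole crawl immediately (listed above).
FATAL_ERROR_CODES = {
    "ERR::SCRAPE::PROJECT_QUOTA_LIMIT_REACHED",
    "ERR::SCRAPE::QUOTA_LIMIT_REACHED",
    "ERR::THROTTLE::MAX_API_CREDIT_BUDGET_EXCEEDED",
    "ERR::ACCOUNT::PAYMENT_REQUIRED",
    "ERR::ACCOUNT::SUSPENDED",
}

def is_fatal(error_code: str) -> bool:
    """True if this error terminates the whole crawl, not just one URL."""
    return error_code in FATAL_ERROR_CODES
```

With such a check, a supervisor script can decide whether restarting a stopped crawl makes sense at all, or whether an account-level issue must be resolved first.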
### Throttle Errors - Automatic Pause 

 These errors trigger an automatic 5-second pause before the crawler continues. This prevents overwhelming your account limits or proxy resources while allowing the crawl to complete successfully.

> **Automatic Recovery:** The crawler pauses for 5 seconds when throttle errors occur, then resumes automatically. This is normal behavior and helps your crawl complete successfully.

**Throttle error codes:**

- [ `ERR::THROTTLE::MAX_REQUEST_RATE_EXCEEDED` ](https://scrapfly.io/docs/scrape-api/error/ERR::THROTTLE::MAX_REQUEST_RATE_EXCEEDED) - Request rate limit exceeded
- [ `ERR::THROTTLE::MAX_CONCURRENT_REQUEST_EXCEEDED` ](https://scrapfly.io/docs/scrape-api/error/ERR::THROTTLE::MAX_CONCURRENT_REQUEST_EXCEEDED) - Concurrent request limit exceeded
- [ `ERR::PROXY::RESOURCES_SATURATION` ](https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::RESOURCES_SATURATION) - Proxy pool temporarily saturated
- [ `ERR::SESSION::CONCURRENT_ACCESS` ](https://scrapfly.io/docs/scrape-api/error/ERR::SESSION::CONCURRENT_ACCESS) - Session concurrency limit reached
 
**What happens during throttling:**

1. Crawler pauses for 5 seconds
2. Failed URL is added back to the queue for retry
3. Crawler continues with next URLs after pause
4. Process repeats if throttle error occurs again
 
 ```
{
  "status": "RUNNING",
  "state": {
    "urls_visited": 47,
    "urls_to_crawl": 153
  },
  "recent_event": "Throttle pause: MAX_REQUEST_RATE_EXCEEDED - resuming in 5s"
}
```


### High Failure Rate Protection 

 For certain error types (anti-scraping protection and internal errors), the crawler monitors the failure rate and automatically stops if it becomes too high. This prevents wasting credits on a crawl that's unlikely to succeed.

> **Smart Monitoring:** The crawler tracks failure rates for ASP and internal errors. If 70% or more of the last 10 scrapes fail, the crawler stops automatically to protect your credits.

**Monitored error codes:**

- [ `ERR::ASP::SHIELD_ERROR` ](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::SHIELD_ERROR) - Anti-scraping protection error
- [ `ERR::ASP::SHIELD_PROTECTION_FAILED` ](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::SHIELD_PROTECTION_FAILED) - Failed to bypass anti-scraping protection
- [ `ERR::API::INTERNAL_ERROR` ](https://scrapfly.io/docs/scrape-api/error/ERR::API::INTERNAL_ERROR) - Internal API error
 
**Failure rate threshold:**

- **Monitoring window:** Last 10 scrape requests
- **Threshold:** 70% failure rate (7 or more failures out of 10)
- **Action:** Crawler stops immediately to prevent credit waste
- **Reason:** Indicates systematic issue (website blocking, ASP changes, API issues)
 
 ```
{
  "status": "DONE",
  "is_success": false,
  "state": {
    "urls_visited": 15,
    "urls_failed": 12,
    "stop_reason": "crawler_error"
  },
  "error": {
    "code": "ERR::CRAWLER::HIGH_FAILURE_RATE",
    "message": "Crawler stopped: High failure rate detected (8/10 requests failed)",
    "details": {
      "failure_rate": 0.80,
      "threshold": 0.70,
      "recent_errors": ["ERR::ASP::SHIELD_ERROR", "ERR::ASP::SHIELD_PROTECTION_FAILED"]
    }
  }
}
```
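The threshold logic above can be reproduced client-side, for example to pre-check your own logs before restarting a crawl. A sketch using the documented parameters (window of 10, 70% threshold; the function and argument names are illustrative):

```python
# Error codes monitored for the high-failure-rate stop (listed above).
MONITORED_CODES = {
    "ERR::ASP::SHIELD_ERROR",
    "ERR::ASP::SHIELD_PROTECTION_FAILED",
    "ERR::API::INTERNAL_ERROR",
}
WINDOW_SIZE = 10       # last 10 scrape requests
FAIL_THRESHOLD = 0.70  # stop at 70% failures (7 or more out of 10)

def should_stop(recent_outcomes):
    """recent_outcomes: per-scrape error code (or None on success), oldest first."""
    window = list(recent_outcomes)[-WINDOW_SIZE:]
    if len(window) < WINDOW_SIZE:
        return False  # not enough data to judge yet
    failures = sum(1 for code in window if code in MONITORED_CODES)
    return failures / WINDOW_SIZE >= FAIL_THRESHOLD
```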


**How to handle high failure rate stops:**

1. **Review error logs:** Check which specific errors are occurring most frequently
2. **ASP errors:** The target site may have updated their protection - contact support for assistance
3. **Adjust configuration:** Try different `asp` settings, proxy pools, or rendering options
4. **Wait and retry:** Some sites have temporary blocks that clear after a period
5. **Contact support:** If issues persist, our team can help analyze and resolve ASP challenges
 
### Error Statistics & Monitoring 

 When a crawler completes (successfully or due to errors), comprehensive error statistics are logged and available for analysis. This helps you understand what went wrong and how to improve future crawls.

**Statistics tracked:**

- Total errors encountered
- Breakdown by error code (e.g., 3x `ERR::THROTTLE::MAX_REQUEST_RATE_EXCEEDED`)
- Fatal errors that stopped the crawler
- Throttle events and pause counts
- High failure rate trigger details
 
 ```
{
  "crawler_uuid": "abc123...",
  "status": "DONE",
  "is_success": true,
  "state": {
    "urls_visited": 847,
    "urls_failed": 23
  },
  "error_summary": {
    "total_errors": 23,
    "by_code": {
      "ERR::THROTTLE::MAX_REQUEST_RATE_EXCEEDED": 15,
      "ERR::PROXY::CONNECTION_TIMEOUT": 5,
      "ERR::ASP::SHIELD_ERROR": 3
    },
    "throttle_pauses": 15,
    "fatal_stops": 0,
    "high_failure_rate_stops": 0
  }
}
```


**Accessing error details:**

1. **Crawler summary:** Use `GET /crawl/{uuid}` to view overall error statistics
2. **Failed URLs:** Use `GET /crawl/{uuid}/urls?status=failed` to retrieve specific failed URLs with error codes
3. **Logs:** Check your crawler logs for detailed error tracking information
 
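Given a crawler summary payload, the `error_summary.by_code` map can be sorted to surface the dominant failure mode. A sketch assuming a response shaped like the example above (`top_errors` is an illustrative helper, not part of the API):

```python
import json

def top_errors(summary_json: str, n: int = 3):
    """Return the n most frequent (error_code, count) pairs from a crawler summary."""
    payload = json.loads(summary_json)
    by_code = payload.get("error_summary", {}).get("by_code", {})
    # Sort error codes by descending occurrence count.
    return sorted(by_code.items(), key=lambda kv: kv[1], reverse=True)[:n]
```

Feeding it the response of `GET /crawl/{uuid}` quickly tells you whether failures are dominated by throttling (usually harmless) or by ASP errors (worth investigating).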
## Inherited Web Scraping API Errors 

 Since the Crawler API makes individual scraping requests for each page crawled, it can return **any error from the Web Scraping API**. Each page crawled follows the same error handling as a single scrape request.

> **Important:** When a page fails to crawl, the error details are stored in the crawl results. You can retrieve failed URLs and their error codes using the `/crawl/{uuid}/urls?status=failed` endpoint.

**Common inherited errors by category:**

### Scraping Errors

####  [ ERR::SCRAPE::BAD\_PROTOCOL  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::BAD_PROTOCOL) 

The protocol is not supported; only http:// and https:// are supported.

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::BAD_PROTOCOL)

####  [ ERR::SCRAPE::BAD\_UPSTREAM\_RESPONSE  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::BAD_UPSTREAM_RESPONSE) 

The targeted website responded with an unexpected status code (>400).

- **Retryable:** No
- **HTTP status code:** `200`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::BAD_UPSTREAM_RESPONSE)

####  [ ERR::SCRAPE::CONFIG\_ERROR  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::CONFIG_ERROR) 

Scrape Configuration Error

 

- **Retryable:** No
- **HTTP status code:** `400`
- **Documentation:**
    - [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::CONFIG_ERROR)

####  [ ERR::SCRAPE::COST\_BUDGET\_LIMIT  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::COST_BUDGET_LIMIT) 

The cost budget has been reached; you must increase the budget to scrape this target.

- **Retryable:** Yes
- **HTTP status code:** `422`
- **Documentation:**
    - [Checkout ASP documentation](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::COST_BUDGET_LIMIT)

####  [ ERR::SCRAPE::COUNTRY\_NOT\_AVAILABLE\_FOR\_TARGET  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::COUNTRY_NOT_AVAILABLE_FOR_TARGET) 

Country not available

 

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::COUNTRY_NOT_AVAILABLE_FOR_TARGET)

####  [ ERR::SCRAPE::DNS\_NAME\_NOT\_RESOLVED  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DNS_NAME_NOT_RESOLVED) 

The DNS of the targeted website is not resolving or not responding

 

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DNS_NAME_NOT_RESOLVED)

####  [ ERR::SCRAPE::DOMAIN\_NOT\_ALLOWED  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DOMAIN_NOT_ALLOWED) 

The targeted domain is not allowed or is restricted.

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DOMAIN_NOT_ALLOWED)

####  [ ERR::SCRAPE::DOM\_SELECTOR\_INVALID  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DOM_SELECTOR_INVALID) 

The DOM Selector is invalid

 

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [Javascript Documentation](https://scrapfly.io/docs/scrape-api/javascript-rendering)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DOM_SELECTOR_INVALID)

####  [ ERR::SCRAPE::DOM\_SELECTOR\_INVISIBLE  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DOM_SELECTOR_INVISIBLE) 

The requested DOM selector is invisible (mostly issued when an element is targeted for a screenshot).

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [Javascript Documentation](https://scrapfly.io/docs/scrape-api/javascript-rendering)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DOM_SELECTOR_INVISIBLE)

####  [ ERR::SCRAPE::DOM\_SELECTOR\_NOT\_FOUND  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DOM_SELECTOR_NOT_FOUND) 

The requested DOM selector was not found in the rendered content within 15s.

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [Javascript Documentation](https://scrapfly.io/docs/scrape-api/javascript-rendering)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DOM_SELECTOR_NOT_FOUND)

####  [ ERR::SCRAPE::DRIVER\_CRASHED  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DRIVER_CRASHED) 

The driver used to perform the scrape crashed; this can happen for many reasons.

- **Retryable:** Yes
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DRIVER_CRASHED)

####  [ ERR::SCRAPE::DRIVER\_INSUFFICIENT\_RESOURCES  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DRIVER_INSUFFICIENT_RESOURCES) 

The driver does not have enough resources to render the page correctly.

- **Retryable:** Yes
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DRIVER_INSUFFICIENT_RESOURCES)

####  [ ERR::SCRAPE::DRIVER\_TIMEOUT  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DRIVER_TIMEOUT) 

Driver timeout - No response received

 

- **Retryable:** Yes
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DRIVER_TIMEOUT)

####  [ ERR::SCRAPE::FORMAT\_CONVERSION\_ERROR  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::FORMAT_CONVERSION_ERROR) 

Response format conversion failed: unsupported input content type.

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [API Format Parameter](https://scrapfly.io/docs/scrape-api/getting-started#api_param_format)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::FORMAT_CONVERSION_ERROR)

####  [ ERR::SCRAPE::JAVASCRIPT\_EXECUTION  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::JAVASCRIPT_EXECUTION) 

The JavaScript execution failed; read the associated error message to diagnose the problem.

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [Checkout Javascript Rendering Documentation](https://scrapfly.io/docs/scrape-api/javascript-rendering)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::JAVASCRIPT_EXECUTION)

####  [ ERR::SCRAPE::NETWORK\_ERROR  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::NETWORK_ERROR) 

A network error occurred between the Scrapfly server and the remote server.

- **Retryable:** Yes
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::NETWORK_ERROR)

####  [ ERR::SCRAPE::NETWORK\_SERVER\_DISCONNECTED  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::NETWORK_SERVER_DISCONNECTED) 

The upstream website's server closed the connection unexpectedly.

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::NETWORK_SERVER_DISCONNECTED)

####  [ ERR::SCRAPE::NO\_BROWSER\_AVAILABLE  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::NO_BROWSER_AVAILABLE) 

No browser available in the pool

 

- **Retryable:** Yes
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::NO_BROWSER_AVAILABLE)

####  [ ERR::SCRAPE::OPERATION\_TIMEOUT  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::OPERATION_TIMEOUT) 

This is a generic error for when a timeout occurs: an internal operation took too much time.

- **Retryable:** Yes
- **HTTP status code:** `504`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::OPERATION_TIMEOUT)
    - [Timeout Documentation](https://scrapfly.io/docs/scrape-api/understand-timeout)

####  [ ERR::SCRAPE::PLATFORM\_NOT\_AVAILABLE\_FOR\_TARGET  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::PLATFORM_NOT_AVAILABLE_FOR_TARGET) 

Platform not available

 

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::PLATFORM_NOT_AVAILABLE_FOR_TARGET)

####  [ ERR::SCRAPE::TIMEOUT\_TOO\_LOW\_FOR\_TARGET  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::TIMEOUT_TOO_LOW_FOR_TARGET) 

Timeout too low for target

 

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::TIMEOUT_TOO_LOW_FOR_TARGET)

####  [ ERR::SCRAPE::PROJECT\_QUOTA\_LIMIT\_REACHED  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::PROJECT_QUOTA_LIMIT_REACHED) 

The limit set on the current project has been reached.

- **Retryable:** Yes
- **HTTP status code:** `429`
- **Documentation:**
    - [Project Documentation](https://scrapfly.io/docs/project)
    - [Quota Pricing](https://scrapfly.io/pricing)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::PROJECT_QUOTA_LIMIT_REACHED)

####  [ ERR::SCRAPE::QUOTA\_LIMIT\_REACHED  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::QUOTA_LIMIT_REACHED) 

You have reached your plan's scrape quota for the month. You can upgrade your plan to increase the quota.

- **Retryable:** No
- **HTTP status code:** `429`
- **Documentation:**
    - [Project Quota And Usage](https://scrapfly.io/docs/project)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::QUOTA_LIMIT_REACHED)
    - [Upgrade your subscription](https://scrapfly.io/docs/billing#change_plan)

####  [ ERR::SCRAPE::SCENARIO\_DEADLINE\_OVERFLOW  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::SCENARIO_DEADLINE_OVERFLOW) 

The submitted scenario would require more than 30s to complete.

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [Javascript Scenario Documentation](https://scrapfly.io/docs/scrape-api/javascript-scenario)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::SCENARIO_DEADLINE_OVERFLOW)
    - [Timeout Documentation](https://scrapfly.io/docs/scrape-api/understand-timeout)

####  [ ERR::SCRAPE::SCENARIO\_EXECUTION  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::SCENARIO_EXECUTION) 

Javascript Scenario Failed

 

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::SCENARIO_EXECUTION)

####  [ ERR::SCRAPE::SCENARIO\_TIMEOUT  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::SCENARIO_TIMEOUT) 

Javascript Scenario Timeout

 

- **Retryable:** Yes
- **HTTP status code:** `422`
- **Documentation:**
    - [Javascript Scenario Documentation](https://scrapfly.io/docs/scrape-api/javascript-scenario)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::SCENARIO_TIMEOUT)
    - [Timeout Documentation](https://scrapfly.io/docs/scrape-api/understand-timeout)

####  [ ERR::SCRAPE::SSL\_ERROR  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::SSL_ERROR) 

The upstream website has an SSL error.

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::SSL_ERROR)

####  [ ERR::SCRAPE::TOO\_MANY\_CONCURRENT\_REQUEST  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::TOO_MANY_CONCURRENT_REQUEST) 

You have reached the concurrent scrape request limit of your plan, or of your project if a concurrency limit is set at the project level.

- **Retryable:** Yes
- **HTTP status code:** `429`
- **Documentation:**
    - [Quota Pricing](https://scrapfly.io/pricing)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::TOO_MANY_CONCURRENT_REQUEST)

####  [ ERR::SCRAPE::UNABLE\_TO\_TAKE\_SCREENSHOT  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::UNABLE_TO_TAKE_SCREENSHOT) 

Unable to take screenshot

 

- **Retryable:** Yes
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::UNABLE_TO_TAKE_SCREENSHOT)

####  [ ERR::SCRAPE::UPSTREAM\_TIMEOUT  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::UPSTREAM_TIMEOUT) 

The targeted website took too long to respond.

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::UPSTREAM_TIMEOUT)

####  [ ERR::SCRAPE::UPSTREAM\_WEBSITE\_ERROR  ](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::UPSTREAM_WEBSITE_ERROR) 

The website you tried to scrape has a configuration issue or returned a malformed response.

- **Retryable:** Yes
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::UPSTREAM_WEBSITE_ERROR)

### Proxy Errors

####  [ ERR::PROXY::POOL\_NOT\_AVAILABLE\_FOR\_TARGET  ](https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::POOL_NOT_AVAILABLE_FOR_TARGET) 

The desired proxy pool is not available for the given domain - mostly well-known protected domains that require at least a residential network

 

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [API Usage](https://scrapfly.io/docs/scrape-api/getting-started#api_param_proxy_pool)
    - [Proxy Documentation](https://scrapfly.io/docs/scrape-api/proxy)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::POOL_NOT_AVAILABLE_FOR_TARGET)
 
 

 

 

 

####  [ ERR::PROXY::POOL\_NOT\_FOUND  ](https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::POOL_NOT_FOUND) 

The provided proxy pool name does not exist

 

- **Retryable:** No
- **HTTP status code:** `400`
- **Documentation:**
    - [API Usage](https://scrapfly.io/docs/scrape-api/getting-started#api_param_proxy_pool)
    - [Proxy Documentation](https://scrapfly.io/docs/scrape-api/proxy)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::POOL_NOT_FOUND)
 
 

 

 

 

####  [ ERR::PROXY::POOL\_UNAVAILABLE\_COUNTRY  ](https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::POOL_UNAVAILABLE_COUNTRY) 

The requested country is not available for the given proxy pool

 

- **Retryable:** No
- **HTTP status code:** `400`
- **Documentation:**
    - [API Usage](https://scrapfly.io/docs/scrape-api/getting-started#api_param_proxy_pool)
    - [Proxy Documentation](https://scrapfly.io/docs/scrape-api/proxy)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::POOL_UNAVAILABLE_COUNTRY)
 
 

 

 

 

####  [ ERR::PROXY::RESOURCES\_SATURATION  ](https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::RESOURCES_SATURATION) 

Proxies are saturated for the desired country; you can try other countries. Capacity will be restored as soon as possible

 

- **Retryable:** Yes
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::RESOURCES_SATURATION)
 
 

 

 

 

####  [ ERR::PROXY::TIMEOUT  ](https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::TIMEOUT) 

The proxy connection or the website was too slow and timed out

 

- **Retryable:** Yes
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::TIMEOUT)
    - [Timeout Documentation](https://scrapfly.io/docs/scrape-api/understand-timeout)
 
 

 

 

 

####  [ ERR::PROXY::UNAVAILABLE  ](https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::UNAVAILABLE) 

Proxy is unavailable - either the domain (mainly government websites) is restricted, or you are using the session feature and the session's proxy is unreachable at the moment

 

- **Retryable:** Yes
- **HTTP status code:** `422`
- **Documentation:**
    - [API Usage](https://scrapfly.io/docs/scrape-api/getting-started#api_param_proxy_pool)
    - [Proxy Documentation](https://scrapfly.io/docs/scrape-api/proxy)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::UNAVAILABLE)
 
 

 

 

 

### Throttle Errors

####  [ ERR::THROTTLE::MAX\_API\_CREDIT\_BUDGET\_EXCEEDED  ](https://scrapfly.io/docs/scrape-api/error/ERR::THROTTLE::MAX_API_CREDIT_BUDGET_EXCEEDED) 

Your scrape request has been throttled: the API credit budget has been reached. If this is unexpected, check your throttle configuration for the given project and environment.

 

- **Retryable:** Yes
- **HTTP status code:** `429`
- **Documentation:**
    - [API Documentation](https://scrapfly.io/docs/scrape-api/getting-started#api_param_cost_budget)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::THROTTLE::MAX_API_CREDIT_BUDGET_EXCEEDED)
 
 

 

 

 

####  [ ERR::THROTTLE::MAX\_CONCURRENT\_REQUEST\_EXCEEDED  ](https://scrapfly.io/docs/scrape-api/error/ERR::THROTTLE::MAX_CONCURRENT_REQUEST_EXCEEDED) 

Your scrape request has been throttled: too many concurrent requests to the upstream. If this is unexpected, check your throttle configuration for the given project and environment.

 

- **Retryable:** Yes
- **HTTP status code:** `429`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::THROTTLE::MAX_CONCURRENT_REQUEST_EXCEEDED)
    - [Throttler Documentation](https://scrapfly.io/docs/throttling)
 
 

 

 

 

####  [ ERR::THROTTLE::MAX\_REQUEST\_RATE\_EXCEEDED  ](https://scrapfly.io/docs/scrape-api/error/ERR::THROTTLE::MAX_REQUEST_RATE_EXCEEDED) 

Your scrape request has been throttled: too many requests during the 1-minute window. If this is unexpected, check your throttle configuration for the given project and environment.

 

- **Retryable:** Yes
- **HTTP status code:** `429`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::THROTTLE::MAX_REQUEST_RATE_EXCEEDED)
    - [Throttler Documentation](https://scrapfly.io/docs/throttling)
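All three throttle errors return `429` and are marked retryable, so a bounded exponential backoff is a reasonable client-side response. A minimal sketch, where `do_request` is a stand-in for your real call returning `(status_code, body)`:

```python
import time

def retry_with_backoff(do_request, max_attempts=5, base_delay=1.0):
    """Retry a callable returning (status_code, body) while it yields 429."""
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        status, body = do_request()
        if status != 429:
            return status, body
        if attempt == max_attempts:
            break
        time.sleep(delay)   # wait before retrying
        delay *= 2          # exponential backoff: 1s, 2s, 4s, ...
    return status, body

# Example: simulated call fails twice with 429, then succeeds.
calls = iter([(429, ""), (429, ""), (200, "ok")])
status, body = retry_with_backoff(lambda: next(calls), base_delay=0.01)
```

In production you would cap the total wait and log each throttled attempt, so that a misconfigured throttle surfaces quickly instead of silently slowing the crawl.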
 
 

 

 

 

### Anti Scraping Protection (ASP) Errors

####  [ ERR::ASP::CAPTCHA\_ERROR  ](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::CAPTCHA_ERROR) 

Something went wrong with the captcha. We will investigate and fix the problem as soon as possible

 

- **Retryable:** Yes
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::CAPTCHA_ERROR)
 
 

 

 

 

####  [ ERR::ASP::CAPTCHA\_TIMEOUT  ](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::CAPTCHA_TIMEOUT) 

The time budgeted to solve the captcha has been reached

 

- **Retryable:** Yes
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::CAPTCHA_TIMEOUT)
 
 

 

 

 

####  [ ERR::ASP::SHIELD\_ERROR  ](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::SHIELD_ERROR) 

The ASP encountered an unexpected problem. Our team has been alerted and will fix it as soon as possible

 

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [Checkout ASP documentation](https://scrapfly.io/docs/scrape-api/anti-scraping-protection#maximize_success_rate)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::SHIELD_ERROR)
 
 

 

 

 

####  [ ERR::ASP::SHIELD\_EXPIRED  ](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::SHIELD_EXPIRED) 

The previously set ASP shield has expired; you must retry.

 

- **Retryable:** Yes
- **HTTP status code:** `422`
 
 

 

 

 

####  [ ERR::ASP::SHIELD\_NOT\_ELIGIBLE  ](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::SHIELD_NOT_ELIGIBLE) 

The requested feature is not eligible when using the ASP for the given protection/target

 

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::SHIELD_NOT_ELIGIBLE)
 
 

 

 

 

####  [ ERR::ASP::SHIELD\_PROTECTION\_FAILED  ](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::SHIELD_PROTECTION_FAILED) 

The ASP shield failed to solve the challenge posed by the anti-scraping protection

 

- **Retryable:** Yes
- **HTTP status code:** `422`
- **Documentation:**
    - [Checkout ASP documentation](https://scrapfly.io/docs/scrape-api/anti-scraping-protection#maximize_success_rate)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::SHIELD_PROTECTION_FAILED)
 
 

 

 

 

####  [ ERR::ASP::TIMEOUT  ](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::TIMEOUT) 

The ASP took too long to solve the challenge or respond

 

- **Retryable:** Yes
- **HTTP status code:** `422`
- **Documentation:**
    - [Checkout ASP documentation](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::TIMEOUT)
 
 

 

 

 

####  [ ERR::ASP::UNABLE\_TO\_SOLVE\_CAPTCHA  ](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::UNABLE_TO_SOLVE_CAPTCHA) 

Despite our efforts, we were unable to solve the captcha. This can happen sporadically; please retry

 

- **Retryable:** Yes
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::UNABLE_TO_SOLVE_CAPTCHA)
 
 

 

 

 

####  [ ERR::ASP::UPSTREAM\_UNEXPECTED\_RESPONSE  ](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::UPSTREAM_UNEXPECTED_RESPONSE) 

The upstream response after challenge resolution was not what we expected. Our team has been alerted

 

- **Retryable:** No
- **HTTP status code:** `422`
- **Documentation:**
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::UPSTREAM_UNEXPECTED_RESPONSE)
 
 

 

 

 

### Webhook Errors

####  [ ERR::WEBHOOK::DISABLED  ](https://scrapfly.io/docs/scrape-api/error/ERR::WEBHOOK::DISABLED) 

The given webhook is disabled; please check your webhook configuration for the current project / env

 

- **Retryable:** No
- **HTTP status code:** `400`
- **Documentation:**
    - [Checkout Webhook Documentation](https://scrapfly.io/docs/scrape-api/webhook)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::WEBHOOK::DISABLED)
 
 

 

 

 

####  [ ERR::WEBHOOK::ENDPOINT\_UNREACHABLE  ](https://scrapfly.io/docs/scrape-api/error/ERR::WEBHOOK::ENDPOINT_UNREACHABLE) 

We were unable to reach your endpoint

 

- **Retryable:** Yes
- **HTTP status code:** `422`
- **Documentation:**
    - [Checkout Webhook Documentation](https://scrapfly.io/docs/scrape-api/webhook)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::WEBHOOK::ENDPOINT_UNREACHABLE)
 
 

 

 

 

####  [ ERR::WEBHOOK::MAX\_CONCURRENCY\_REACHED  ](https://scrapfly.io/docs/scrape-api/error/ERR::WEBHOOK::MAX_CONCURRENCY_REACHED) 

You have reached the maximum concurrency limit

 

- **Retryable:** Yes
- **HTTP status code:** `429`
- **Documentation:**
    - [Checkout Webhook Documentation](https://scrapfly.io/docs/scrape-api/webhook)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::WEBHOOK::MAX_CONCURRENCY_REACHED)
 
 

 

 

 

####  [ ERR::WEBHOOK::MAX\_RETRY  ](https://scrapfly.io/docs/scrape-api/error/ERR::WEBHOOK::MAX_RETRY) 

The maximum number of retries has been exceeded for your webhook

 

- **Retryable:** No
- **HTTP status code:** `429`
- **Documentation:**
    - [Checkout Webhook Documentation](https://scrapfly.io/docs/scrape-api/webhook)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::WEBHOOK::MAX_RETRY)
 
 

 

 

 

####  [ ERR::WEBHOOK::NOT\_FOUND  ](https://scrapfly.io/docs/scrape-api/error/ERR::WEBHOOK::NOT_FOUND) 

Unable to find the given webhook for the current project / env

 

- **Retryable:** No
- **HTTP status code:** `400`
- **Documentation:**
    - [Checkout Webhook Documentation](https://scrapfly.io/docs/scrape-api/webhook)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::WEBHOOK::NOT_FOUND)
 
 

 

 

 

####  [ ERR::WEBHOOK::QUEUE\_FULL  ](https://scrapfly.io/docs/scrape-api/error/ERR::WEBHOOK::QUEUE_FULL) 

You have reached the limit of scheduled webhooks - you must wait until pending webhooks are processed

 

- **Retryable:** Yes
- **HTTP status code:** `429`
- **Documentation:**
    - [Checkout Webhook Documentation](https://scrapfly.io/docs/scrape-api/webhook)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::WEBHOOK::QUEUE_FULL)
 
 

 

 

 

### Session Errors

####  [ ERR::SESSION::CONCURRENT\_ACCESS  ](https://scrapfly.io/docs/scrape-api/error/ERR::SESSION::CONCURRENT_ACCESS) 

Concurrent access to the session was attempted. If your spider runs on a distributed architecture, the same session name is currently being used by another scrape

 

- **Retryable:** Yes
- **HTTP status code:** `429`
- **Documentation:**
    - [Checkout Session Documentation](https://scrapfly.io/docs/scrape-api/session)
    - [Related Error Doc](https://scrapfly.io/docs/scrape-api/error/ERR::SESSION::CONCURRENT_ACCESS)
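On a distributed crawler, one way to avoid this collision is to derive a session name that is unique per worker. The helper below is a hypothetical sketch of that idea, not part of the API:

```python
import os
import uuid

def make_session_name(base: str) -> str:
    """Hypothetical helper: derive a session name unique to this worker."""
    # PID distinguishes workers on one host; a short UUID distinguishes hosts.
    return f"{base}-{os.getpid()}-{uuid.uuid4().hex[:8]}"

name = make_session_name("product-crawl")
```

The trade-off is that each worker then accumulates its own cookies and proxy stickiness; share a session name only when requests for it are serialized.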
 
 

 

 

 

 For complete details on each inherited error, see the [Web Scraping API Error Reference](https://scrapfly.io/docs/scrape-api/errors).

## HTTP Status Codes 

 | Status Code | Description |
|---|---|
| `200 OK` | Request successful |
| `201 Created` | Crawler job created successfully |
| `400 Bad Request` | Invalid parameters or configuration |
| `401 Unauthorized` | Invalid or missing API key |
| `403 Forbidden` | API key doesn't have permission for this operation |
| `404 Not Found` | Crawler job UUID not found |
| `422 Request Failed` | Request was valid but execution failed |
| `429 Too Many Requests` | Rate limit or concurrency limit exceeded |
| `500 Server Error` | Internal server error |
| `504 Timeout` | Request timed out |

## Error Response Format 

All error responses include detailed information in a consistent format:

 ```
{
  "error": {
    "code": "CRAWLER_TIMEOUT",
    "message": "Crawler exceeded maximum duration of 3600 seconds",
    "retryable": false,
    "details": {
      "max_duration": 3600,
      "elapsed_duration": 3615,
      "urls_visited": 847
    }
  }
}
```
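A client can branch on the `retryable` flag in this payload before deciding whether to retry. A minimal sketch, parsing the example body shown above:

```python
import json

# The example error body from the documentation above.
payload = json.loads("""
{
  "error": {
    "code": "CRAWLER_TIMEOUT",
    "message": "Crawler exceeded maximum duration of 3600 seconds",
    "retryable": false,
    "details": {"max_duration": 3600, "elapsed_duration": 3615, "urls_visited": 847}
  }
}
""")

err = payload["error"]
should_retry = bool(err["retryable"])  # False here: a timeout should not be retried blindly
```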

**Error response headers:**

- `X-Scrapfly-Error-Code` - Machine-readable error code
- `X-Scrapfly-Error-Message` - Human-readable error description
- `X-Scrapfly-Error-Retryable` - Whether the operation can be retried
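The same retry decision can be made from the headers alone, without parsing the body. A sketch using a hypothetical headers dict (real clients would read these from the HTTP response object):

```python
# Hypothetical error response headers, represented as a plain dict.
headers = {
    "X-Scrapfly-Error-Code": "ERR::THROTTLE::MAX_REQUEST_RATE_EXCEEDED",
    "X-Scrapfly-Error-Message": "Too many requests during the 1-minute window",
    "X-Scrapfly-Error-Retryable": "true",
}

# Header values are strings, so normalize before comparing.
retryable = headers.get("X-Scrapfly-Error-Retryable", "false").lower() == "true"
```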
 
## Related Documentation 

- [Web Scraping API Errors (Complete List)](https://scrapfly.io/docs/scrape-api/errors)
- [Crawler API Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [Contact Support](https://scrapfly.io/docs/support)