API Errors

Introduction

If you want to port these error definitions into your application, check out the Exportable Definition section to retrieve the JSON describing all errors.

Generic API Error

Example of an API error response:

{
    "status": "error",
    "http_code": 401,
    "reason": "Unauthorized",
    "error_id": "301e2d9e-b4f5-4289-85ea-e452143338df",
    "message": "Invalid API key"
}
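
A minimal sketch (in Python, using the requests library) of how such a generic error could be detected and surfaced. The endpoint and parameters shown are illustrative placeholders, not a definitive integration:

import requests

# Minimal sketch: call the API and surface a generic error payload.
# The endpoint and parameters are illustrative placeholders.
response = requests.get(
    "https://api.scrapfly.io/scrape",
    params={"key": "YOUR_API_KEY", "url": "https://example.com"},
)

if response.status_code >= 400:
    error = response.json()
    # Generic errors carry status, http_code, reason, error_id and message fields
    print(f"{error['http_code']} {error['reason']}: {error['message']} (error_id={error['error_id']})")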

HTTP 400

Bad Request: Parameters sent to the API are incorrect. Check out the related documentation to figure out the error.

HTTP 404

Not Found: API URL might have a typo or be incorrect. Check out the related documentation to figure out the error.

HTTP 422

Unprocessable Entity: Your request was correct, but for some reason, we cannot handle it. Most of the time, the entity you want to update or delete has already been processed.

HTTP 429

Too Many Requests: API endpoints that are called with high frequency are throttled internally to prevent service disruption. If this happens too many times, your account may be suspended.

HTTP 500

Internal Server Error: Scrapfly is in trouble, and we have been alerted of the issue.

HTTP 502

Bad Gateway: The web service that exposes Scrapfly to the internet is in trouble, and we have been alerted of the issue.

HTTP 503

Service Temporarily Unavailable: Scrapfly might be running in degraded mode, or maintenance was scheduled to upgrade our service.

HTTP 504

Gateway Timeout: Scrapfly is not reachable or takes too much time to respond.
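
Since 429, 502, 503 and 504 describe transient conditions, a client will typically retry them with a backoff rather than fail immediately. A minimal sketch of such a retry loop, assuming the requests library and an illustrative endpoint:

import time
import requests

# Transient conditions worth retrying with backoff; other errors are returned as-is.
RETRYABLE_STATUS = {429, 502, 503, 504}

def call_with_retry(url, params, max_attempts=3):
    """Call the API, backing off and retrying on transient HTTP errors."""
    for attempt in range(1, max_attempts + 1):
        response = requests.get(url, params=params)
        if response.status_code not in RETRYABLE_STATUS or attempt == max_attempts:
            return response
        time.sleep(2 ** attempt)  # exponential backoff before the next attempt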

Web Scraping API Errors

The error is present in the response body at response['result']['error']:

{
    "config": { ... },
    "context": { ... },
    "result": {
        [...],
        "status": "DONE",
        "success": false,
        "reason": null,
        "error": {
            "http_code": 429,
            "code": "ERR::THROTTLE::MAX_REQUEST_RATE_EXCEEDED",
            "description": "Your scrape request as been throttle. Too much request during the last minute. If it's not expected, please check your throttle configuration for the given project and env",
            "error_id": "9993a546-b899-4927-b788-04f5c4e473d5",
            "message": "Max request rate exceeded",
            "scrape_id": "7c61352c-f1a7-4ea6-a0b8-198d7ac6fe1a",
            "retryable": false,
            "doc_url": "https://scrapfly.io/docs/scrape-api/error/ERR::THROTTLE::MAX_REQUEST_RATE_EXCEEDED"
        },
        [...],
    }
}
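
A minimal sketch of how this embedded error could be read from a scrape response (reusing a requests response object, as in the earlier sketch); the retry decision shown is based on the retryable flag and is an assumption, not a documented policy:

# Minimal sketch: inspect the error embedded in the scrape result body.
result = response.json()["result"]

if not result["success"]:
    error = result["error"]
    print(f"{error['code']}: {error['message']} (scrape_id={error['scrape_id']})")
    if error["retryable"]:
        print("Transient failure, scheduling a retry")
    else:
        print("Not retryable, see", error["doc_url"])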

The error is also present in the response headers:

x-scrapfly-reject-code: ERR::SCRAPE::DNS_NAME_NOT_RESOLVED
x-scrapfly-reject-description: The DNS of upstream website is not resolving or not responding
x-scrapfly-reject-doc: https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DNS_NAME_NOT_RESOLVED
x-scrapfly-reject-http-code: 523
x-scrapfly-reject-id: 5556636c-ac89-417f-b645-02c32905e39a
x-scrapfly-reject-retryable: no
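
When you prefer not to parse the body, the same decision can be made from the headers alone; a minimal sketch, again assuming a requests response object:

# Minimal sketch: read the mirrored x-scrapfly-reject-* headers.
reject_code = response.headers.get("x-scrapfly-reject-code")
if reject_code:
    retryable = response.headers.get("x-scrapfly-reject-retryable") == "yes"
    print(reject_code, "-", response.headers.get("x-scrapfly-reject-description"))
    print("retryable:", retryable, "docs:", response.headers.get("x-scrapfly-reject-doc"))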

ERR::ACCOUNT::PAYMENT_REQUIRED

Unable to charge last invoice - Connect to your dashboard to solve the issue


ERR::ACCOUNT::SUSPENDED

Account Suspended


ERR::ASP::CAPTCHA_TIMEOUT

The time budgeted to solve the captcha has been reached


ERR::ASP::SHIELD_EXPIRED

The ASP shield previously set has expired; you must retry.

  • Retryable: Yes
  • HTTP status code: 422

ERR::ASP::SHIELD_PROTECTION_FAILED

The ASP shield failed to solve the challenge against the anti-scraping protection


ERR::ASP::TIMEOUT

The ASP took too much time to solve or respond


ERR::ASP::UNABLE_TO_SOLVE_CAPTCHA

Despite our efforts, we were unable to solve the captcha. This can happen sporadically; please retry


ERR::PROXY::NOT_REACHABLE

The proxy was not reachable; this can happen when there is a network issue or the proxy itself is in trouble


ERR::PROXY::RESOURCES_SATURATION

Proxies are saturated for the desired country; you can try other countries. They will come back as soon as possible


ERR::PROXY::TIMEOUT

The proxy did not respond in the given time or was too slow - We have removed it from the pool


ERR::PROXY::UNAVAILABLE

Proxy is unavailable - No proxy available for the target (can be restricted for some websites)


ERR::SCRAPE::DRIVER_CRASHED

The driver used to perform the scrape crashed; this can happen for many reasons


ERR::SCRAPE::DRIVER_TIMEOUT

Driver timeout - No response received


ERR::SCRAPE::NETWORK_ERROR

A network error occurred between the Scrapfly server and the remote server


ERR::SCRAPE::NO_BROWSER_AVAILABLE

No browser available in the pool


ERR::SCRAPE::OPERATION_TIMEOUT

This is a generic error for when a timeout occurs. It happens when an internal operation takes too much time


ERR::SCRAPE::PROJECT_QUOTA_LIMIT_REACHED

The limit set on the current project has been reached


ERR::SCRAPE::SCENARIO_TIMEOUT

Javascript Scenario Timeout


ERR::SCRAPE::TOO_MANY_CONCURRENT_REQUEST

Account concurrency limit reached - Maximum concurrent requests allowed by your current plan: 5


ERR::SCRAPE::UNABLE_TO_TAKE_SCREENSHOT

Unable to take a screenshot; this happens when the renderer encounters badly formatted code


ERR::SCRAPE::UPSTREAM_WEBSITE_ERROR

The upstream website you scraped had an issue


ERR::SESSION::CONCURRENT_ACCESS

Concurrent access to the session was attempted. If your spider runs on a distributed architecture, check that the correlation id is correctly configured


ERR::THROTTLE::MAX_CONCURRENT_REQUEST_EXCEEDED

Your scrape request has been throttled: too many concurrent accesses to the upstream. If this is not expected, please check your throttle configuration for the given project and env.


ERR::THROTTLE::MAX_REQUEST_RATE_EXCEEDED

Your scrape request has been throttled: too many requests during the 1-minute window. If this is not expected, please check your throttle configuration for the given project and env.


ERR::WEBHOOK::QUEUE_FULL

You reached the limit of scheduled webhooks - You must wait until pending webhooks are processed
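
The error codes follow the pattern ERR::<CATEGORY>::<NAME>, so a client can branch on the category when it does not want to handle every code individually. A hypothetical helper sketching that idea (the slow-down policy shown is an assumption, not a documented rule):

def error_category(code: str) -> str:
    # "ERR::THROTTLE::MAX_REQUEST_RATE_EXCEEDED" -> "THROTTLE"
    return code.split("::")[1]

def should_slow_down(code: str) -> bool:
    # Assumption: throttle and session errors usually mean "slow down", not "give up".
    return error_category(code) in {"THROTTLE", "SESSION"}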


Exportable Definition

If you want to handle errors in your application without copy-pasting every error definition by hand, here is a portable JSON document of the error definitions:

{
    "scraper_errors": {
        "ERR::ACCOUNT::PAYMENT_REQUIRED": {
            "Code": "ERR::ACCOUNT::PAYMENT_REQUIRED",
            "HttpCode": 402,
            "Description": "Unable to charge last invoice - Connect to your dashboard to solve the issue",
            "Retryable": true,
            "Type": "ACCOUNT",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::ACCOUNT::PAYMENT_REQUIRED"
            }
        },
        "ERR::ACCOUNT::SUSPENDED": {
            "Code": "ERR::ACCOUNT::SUSPENDED",
            "HttpCode": 429,
            "Description": "Account Suspended",
            "Retryable": true,
            "Type": "ACCOUNT",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::ACCOUNT::SUSPENDED"
            }
        },
        "ERR::API::INTERNAL_ERROR": {
            "Code": "ERR::API::INTERNAL_ERROR",
            "HttpCode": 422,
            "Description": "API Internal Error",
            "Retryable": false,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::API::INTERNAL_ERROR"
            }
        },
        "ERR::ASP::CAPTCHA_ERROR": {
            "Code": "ERR::ASP::CAPTCHA_ERROR",
            "HttpCode": 422,
            "Description": "Something wrong happened with the captcha. We will figure out to fix the problem as soon as possible",
            "Retryable": false,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::ASP::CAPTCHA_ERROR"
            }
        },
        "ERR::ASP::CAPTCHA_TIMEOUT": {
            "Code": "ERR::ASP::CAPTCHA_TIMEOUT",
            "HttpCode": 422,
            "Description": "The budgeted time to solve the captcha is reached",
            "Retryable": true,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::ASP::CAPTCHA_TIMEOUT"
            }
        },
        "ERR::ASP::PROTECTION_FAILED": {
            "Code": "ERR::ASP::PROTECTION_FAILED",
            "HttpCode": 422,
            "Description": "The attempt to solved or bypass the bot protection failed for this time - Unfortunately it happened sometimes and you should retry this error if it's sporadic. If this issue always happened - check your config and ask support",
            "Retryable": false,
            "Type": "THROTTLER",
            "Links": {
                "Checkout ASP documentation": "https://scrapfly.io/docs/scrape-api/anti-scraping-protection#maximize_success_rate",
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::ASP::PROTECTION_FAILED"
            }
        },
        "ERR::ASP::SHIELD_ERROR": {
            "Code": "ERR::ASP::SHIELD_ERROR",
            "HttpCode": 422,
            "Description": "The ASP encounter an unexpected problem. We will fix it as soon as possible. Our team has been alerted",
            "Retryable": false,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::ASP::SHIELD_ERROR"
            }
        },
        "ERR::ASP::SHIELD_EXPIRED": {
            "Code": "ERR::ASP::SHIELD_EXPIRED",
            "HttpCode": 422,
            "Description": "The ASP shield previously set is expired, you must retry.",
            "Retryable": true,
            "Type": "SCRAPER",
            "Links": null
        },
        "ERR::ASP::SHIELD_PROTECTION_FAILED": {
            "Code": "ERR::ASP::SHIELD_PROTECTION_FAILED",
            "HttpCode": 422,
            "Description": "The ASP shield failed to solve the challenge against the anti scrapping protection",
            "Retryable": true,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::ASP::SHIELD_PROTECTION_FAILED"
            }
        },
        "ERR::ASP::TIMEOUT": {
            "Code": "ERR::ASP::TIMEOUT",
            "HttpCode": 422,
            "Description": "The ASP made too much time to solve or respond",
            "Retryable": true,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::ASP::TIMEOUT"
            }
        },
        "ERR::ASP::UNABLE_TO_SOLVE_CAPTCHA": {
            "Code": "ERR::ASP::UNABLE_TO_SOLVE_CAPTCHA",
            "HttpCode": 422,
            "Description": "Despite our effort, we were unable to solve the captcha. It can happened sporadically, please retry",
            "Retryable": true,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::ASP::UNABLE_TO_SOLVE_CAPTCHA"
            }
        },
        "ERR::ASP::UPSTREAM_UNEXPECTED_RESPONSE": {
            "Code": "ERR::ASP::UPSTREAM_UNEXPECTED_RESPONSE",
            "HttpCode": 422,
            "Description": "The response given by the upstream after challenge resolution is not expected. Our team has been alerted",
            "Retryable": false,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::ASP::UPSTREAM_UNEXPECTED_RESPONSE"
            }
        },
        "ERR::PROXY::NOT_REACHABLE": {
            "Code": "ERR::PROXY::NOT_REACHABLE",
            "HttpCode": 422,
            "Description": "Proxy was not reachable, it can happened when network issue or proxy itself is in trouble",
            "Retryable": true,
            "Type": "PROXY",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::NOT_REACHABLE"
            }
        },
        "ERR::PROXY::POOL_NOT_AVAILABLE_FOR_TARGET": {
            "Code": "ERR::PROXY::POOL_NOT_AVAILABLE_FOR_TARGET",
            "HttpCode": 422,
            "Description": "The desired proxy pool is not available for the given domain - mostly well known protected domain which require at least residential networks",
            "Retryable": false,
            "Type": "PROXY",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::POOL_NOT_AVAILABLE_FOR_TARGET"
            }
        },
        "ERR::PROXY::POOL_NOT_FOUND": {
            "Code": "ERR::PROXY::POOL_NOT_FOUND",
            "HttpCode": 422,
            "Description": "Provided Proxy Pool Name do not exists",
            "Retryable": false,
            "Type": "PROXY",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::POOL_NOT_FOUND"
            }
        },
        "ERR::PROXY::POOL_UNAVAILABLE_COUNTRY": {
            "Code": "ERR::PROXY::POOL_UNAVAILABLE_COUNTRY",
            "HttpCode": 422,
            "Description": "Country not available for given proxy pool",
            "Retryable": false,
            "Type": "PROXY",
            "Links": {
                "Checkout Proxy Documentation": "https://scrapfly.io/docs/scrape-api/proxy",
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::POOL_UNAVAILABLE_COUNTRY"
            }
        },
        "ERR::PROXY::RESOURCES_SATURATION": {
            "Code": "ERR::PROXY::RESOURCES_SATURATION",
            "HttpCode": 422,
            "Description": "Proxy are saturated for the desired country, you can try on other countries. They will come back as soon as possible",
            "Retryable": true,
            "Type": "PROXY",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::RESOURCES_SATURATION"
            }
        },
        "ERR::PROXY::TIMEOUT": {
            "Code": "ERR::PROXY::TIMEOUT",
            "HttpCode": 422,
            "Description": "Proxy do not respond in the given time or was too slow - We have removed it from the pool",
            "Retryable": true,
            "Type": "PROXY",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::TIMEOUT"
            }
        },
        "ERR::PROXY::UNAVAILABLE": {
            "Code": "ERR::PROXY::UNAVAILABLE",
            "HttpCode": 422,
            "Description": "Proxy is unavailable - No proxy available for the target (can be restricted for some website)",
            "Retryable": true,
            "Type": "PROXY",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::PROXY::UNAVAILABLE"
            }
        },
        "ERR::SCHEDULE::DISABLED": {
            "Code": "ERR::SCHEDULE::DISABLED",
            "HttpCode": 422,
            "Description": "The targeted schedule has been disabled",
            "Retryable": false,
            "Type": "SCHEDULER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCHEDULE::DISABLED"
            }
        },
        "ERR::SCRAPE::BAD_PROTOCOL": {
            "Code": "ERR::SCRAPE::BAD_PROTOCOL",
            "HttpCode": 422,
            "Description": "The protocol is not supported : http:// or https://",
            "Retryable": false,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::BAD_PROTOCOL"
            }
        },
        "ERR::SCRAPE::BAD_UPSTREAM_RESPONSE": {
            "Code": "ERR::SCRAPE::BAD_UPSTREAM_RESPONSE",
            "HttpCode": 200,
            "Description": "Upstream respond with http code >= 299",
            "Retryable": false,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::BAD_UPSTREAM_RESPONSE"
            }
        },
        "ERR::SCRAPE::DNS_NAME_NOT_RESOLVED": {
            "Code": "ERR::SCRAPE::DNS_NAME_NOT_RESOLVED",
            "HttpCode": 422,
            "Description": "The DNS of upstream website is not resolving or not responding",
            "Retryable": false,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DNS_NAME_NOT_RESOLVED"
            }
        },
        "ERR::SCRAPE::DOMAIN_NOT_ALLOWED": {
            "Code": "ERR::SCRAPE::DOMAIN_NOT_ALLOWED",
            "HttpCode": 422,
            "Description": "The Domain targeted is not allowed or restricted",
            "Retryable": false,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DOMAIN_NOT_ALLOWED"
            }
        },
        "ERR::SCRAPE::DOM_SELECTOR_INVISIBLE": {
            "Code": "ERR::SCRAPE::DOM_SELECTOR_INVISIBLE",
            "HttpCode": 422,
            "Description": "The requested DOM selected is invisible (Mostly issued when element is targeted for screenshot)",
            "Retryable": false,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DOM_SELECTOR_INVISIBLE"
            }
        },
        "ERR::SCRAPE::DOM_SELECTOR_NOT_FOUND": {
            "Code": "ERR::SCRAPE::DOM_SELECTOR_NOT_FOUND",
            "HttpCode": 422,
            "Description": "The requested DOM selected was not found in rendered content within 15s",
            "Retryable": false,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DOM_SELECTOR_NOT_FOUND"
            }
        },
        "ERR::SCRAPE::DRIVER_CRASHED": {
            "Code": "ERR::SCRAPE::DRIVER_CRASHED",
            "HttpCode": 422,
            "Description": "Driver used to perform the scrape can crash for many reason",
            "Retryable": true,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DRIVER_CRASHED"
            }
        },
        "ERR::SCRAPE::DRIVER_TIMEOUT": {
            "Code": "ERR::SCRAPE::DRIVER_TIMEOUT",
            "HttpCode": 422,
            "Description": "Driver timeout - No response received",
            "Retryable": true,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::DRIVER_TIMEOUT"
            }
        },
        "ERR::SCRAPE::JAVASCRIPT_EXECUTION": {
            "Code": "ERR::SCRAPE::JAVASCRIPT_EXECUTION",
            "HttpCode": 422,
            "Description": "The javascript to execute goes wrong, please read the associated message to figure out the problem",
            "Retryable": false,
            "Type": "SCRAPER",
            "Links": {
                "Checkout Javascript Rendering Documentation": "https://scrapfly.io/docs/scrape-api/javascript-rendering",
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::JAVASCRIPT_EXECUTION"
            }
        },
        "ERR::SCRAPE::NETWORK_ERROR": {
            "Code": "ERR::SCRAPE::NETWORK_ERROR",
            "HttpCode": 422,
            "Description": "Network error happened between Scrapfly server and remote server",
            "Retryable": true,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::NETWORK_ERROR"
            }
        },
        "ERR::SCRAPE::NETWORK_SERVER_DISCONNECTED": {
            "Code": "ERR::SCRAPE::NETWORK_SERVER_DISCONNECTED",
            "HttpCode": 422,
            "Description": "Server of upstream website closed unexpectedly the connection",
            "Retryable": false,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::NETWORK_SERVER_DISCONNECTED"
            }
        },
        "ERR::SCRAPE::NO_BROWSER_AVAILABLE": {
            "Code": "ERR::SCRAPE::NO_BROWSER_AVAILABLE",
            "HttpCode": 422,
            "Description": "No browser available in the pool",
            "Retryable": true,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::NO_BROWSER_AVAILABLE"
            }
        },
        "ERR::SCRAPE::OPERATION_TIMEOUT": {
            "Code": "ERR::SCRAPE::OPERATION_TIMEOUT",
            "HttpCode": 422,
            "Description": "This is a generic error for when timeout occur. It happened when internal operation took too much time",
            "Retryable": true,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::OPERATION_TIMEOUT"
            }
        },
        "ERR::SCRAPE::PROJECT_QUOTA_LIMIT_REACHED": {
            "Code": "ERR::SCRAPE::PROJECT_QUOTA_LIMIT_REACHED",
            "HttpCode": 429,
            "Description": "The limit set to the current project has been reached",
            "Retryable": true,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::PROJECT_QUOTA_LIMIT_REACHED"
            }
        },
        "ERR::SCRAPE::QUOTA_LIMIT_REACHED": {
            "Code": "ERR::SCRAPE::QUOTA_LIMIT_REACHED",
            "HttpCode": 429,
            "Description": "You reach your scrape quota plan for the month. You can upgrade your plan if you want increase the quota",
            "Retryable": false,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::QUOTA_LIMIT_REACHED"
            }
        },
        "ERR::SCRAPE::SCENARIO_DEADLINE_OVERFLOW": {
            "Code": "ERR::SCRAPE::SCENARIO_DEADLINE_OVERFLOW",
            "HttpCode": 422,
            "Description": "Submitted scenario would require more than 30s to complete",
            "Retryable": false,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::SCENARIO_DEADLINE_OVERFLOW"
            }
        },
        "ERR::SCRAPE::SCENARIO_EXECUTION": {
            "Code": "ERR::SCRAPE::SCENARIO_EXECUTION",
            "HttpCode": 422,
            "Description": "Javascript Scenario Failed",
            "Retryable": false,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::SCENARIO_EXECUTION"
            }
        },
        "ERR::SCRAPE::SCENARIO_TIMEOUT": {
            "Code": "ERR::SCRAPE::SCENARIO_TIMEOUT",
            "HttpCode": 422,
            "Description": "Javascript Scenario Timeout",
            "Retryable": true,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::SCENARIO_EXECUTION"
            }
        },
        "ERR::SCRAPE::SSL_ERROR": {
            "Code": "ERR::SCRAPE::SSL_ERROR",
            "HttpCode": 422,
            "Description": "ScrapeEngineError due to SSL, mostly due to handshake error",
            "Retryable": false,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::SSL_ERROR"
            }
        },
        "ERR::SCRAPE::TOO_MANY_CONCURRENT_REQUEST": {
            "Code": "ERR::SCRAPE::TOO_MANY_CONCURRENT_REQUEST",
            "HttpCode": 429,
            "Description": "Account concurrency limit reached - Max concurrency request allowed by your current plan: 5",
            "Retryable": true,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::TOO_MANY_CONCURRENT_REQUEST"
            }
        },
        "ERR::SCRAPE::UNABLE_TO_TAKE_SCREENSHOT": {
            "Code": "ERR::SCRAPE::UNABLE_TO_TAKE_SCREENSHOT",
            "HttpCode": 422,
            "Description": "Unable to take screenshot, happened when renderer encounter bad formatted code",
            "Retryable": true,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::UNABLE_TO_TAKE_SCREENSHOT"
            }
        },
        "ERR::SCRAPE::UPSTREAM_TIMEOUT": {
            "Code": "ERR::SCRAPE::UPSTREAM_TIMEOUT",
            "HttpCode": 422,
            "Description": "Upstream website made too much time to response",
            "Retryable": false,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::UPSTREAM_TIMEOUT"
            }
        },
        "ERR::SCRAPE::UPSTREAM_WEBSITE_ERROR": {
            "Code": "ERR::SCRAPE::UPSTREAM_WEBSITE_ERROR",
            "HttpCode": 422,
            "Description": "Upstream website you scrape had issue",
            "Retryable": true,
            "Type": "SCRAPER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SCRAPE::UPSTREAM_WEBSITE_ERROR"
            }
        },
        "ERR::SESSION::CONCURRENT_ACCESS": {
            "Code": "ERR::SESSION::CONCURRENT_ACCESS",
            "HttpCode": 429,
            "Description": "Concurrent access to the session has been tried. If your spider run on distributed architecture, check if the correlation id is correctly configured",
            "Retryable": true,
            "Type": "SCRAPER",
            "Links": {
                "Checkout Session Documentation": "https://scrapfly.io/docs/scrape-api/session",
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::SESSION::CONCURRENT_ACCESS"
            }
        },
        "ERR::THROTTLE::MAX_CONCURRENT_REQUEST_EXCEEDED": {
            "Code": "ERR::THROTTLE::MAX_CONCURRENT_REQUEST_EXCEEDED",
            "HttpCode": 429,
            "Description": "Your scrape request has been throttled. Too many concurrent access to the upstream. If it's not expected, please check your throttle configuration for the given project and env.",
            "Retryable": true,
            "Type": "THROTTLER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::THROTTLE::MAX_CONCURRENT_REQUEST_EXCEEDED"
            }
        },
        "ERR::THROTTLE::MAX_REQUEST_RATE_EXCEEDED": {
            "Code": "ERR::THROTTLE::MAX_REQUEST_RATE_EXCEEDED",
            "HttpCode": 429,
            "Description": "Your scrape request as been throttle. Too much request during the 1m window. If it's not expected, please check your throttle configuration for the given project and env",
            "Retryable": true,
            "Type": "THROTTLER",
            "Links": {
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::THROTTLE::MAX_REQUEST_RATE_EXCEEDED"
            }
        },
        "ERR::WEBHOOK::DISABLED": {
            "Code": "ERR::WEBHOOK::DISABLED",
            "HttpCode": 422,
            "Description": "Given webhook is disabled, please check out your webhook configuration for the current project / env",
            "Retryable": false,
            "Type": "WEBHOOK",
            "Links": {
                "Checkout Webhook Documentation": "https://scrapfly.io/docs/scrape-api/webhook",
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::WEBHOOK::DISABLED"
            }
        },
        "ERR::WEBHOOK::MAX_RETRY": {
            "Code": "ERR::WEBHOOK::MAX_RETRY",
            "HttpCode": 429,
            "Description": "Maximum retry exceeded on your webhook",
            "Retryable": false,
            "Type": "WEBHOOK",
            "Links": {
                "Checkout Webhook Documentation": "https://scrapfly.io/docs/scrape-api/webhook",
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::WEBHOOK::MAX_RETRY"
            }
        },
        "ERR::WEBHOOK::NOT_FOUND": {
            "Code": "ERR::WEBHOOK::NOT_FOUND",
            "HttpCode": 422,
            "Description": "Unable to find the given webhook for the current project / env",
            "Retryable": false,
            "Type": "WEBHOOK",
            "Links": {
                "Checkout Webhook Documentation": "https://scrapfly.io/docs/scrape-api/webhook",
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::WEBHOOK::NOT_FOUND"
            }
        },
        "ERR::WEBHOOK::QUEUE_FULL": {
            "Code": "ERR::WEBHOOK::QUEUE_FULL",
            "HttpCode": 429,
            "Description": "You reach the limit of scheduled webhook - You must wait pending webhook are processed",
            "Retryable": true,
            "Type": "WEBHOOK",
            "Links": {
                "Checkout Webhook Documentation": "https://scrapfly.io/docs/scrape-api/webhook",
                "Related Error Doc": "https://scrapfly.io/docs/scrape-api/error/ERR::WEBHOOK::QUEUE_FULL"
            }
        }
    }
}
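
A minimal sketch of how this definition could be consumed: save it locally (the filename scrapfly_errors.json below is an assumption) and look errors up by code to decide whether a failed request is worth retrying.

import json

# Minimal sketch: load the exportable definition and query it by error code.
with open("scrapfly_errors.json") as fp:
    definitions = json.load(fp)["scraper_errors"]

def is_retryable(code: str) -> bool:
    entry = definitions.get(code)
    return bool(entry and entry["Retryable"])

print(is_retryable("ERR::PROXY::TIMEOUT"))        # True
print(is_retryable("ERR::SCRAPE::BAD_PROTOCOL"))  # False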