# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)

##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)

##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Anti Scraping Protection (ASP)


**Service Level Agreement (SLA) and Service Interruption**

**Service interruptions may occasionally occur** independent of Scrapfly's control. As anti-scraping protection technology is constantly evolving, Scrapfly engineers work hard to keep up with the latest changes. A reliable, production-grade remedy may take hours, days, or even weeks to implement. It is essential to bear this in mind and design your software accordingly when using this feature.

Please note that:

- We can't provide an ETA for service restoration due to the R&D involved. However, given the volume we handle and the number of corporate accounts we serve, most incidents on well-known anti-bots are resolved within 1 business day, and on average within 3 to 7 business days.
- The API credit cost may fluctuate if a website introduces new protection(s) or migrates to another anti-bot vendor: the underlying resources required to handle it can change (residential network, browser usage, custom solution).
- SLA plans are available from a minimum commitment of **$50k/month**.

  Scrapfly's Anti-Scraping Protection is designed to unblock protected websites that are inaccessible to bots. We accomplish this by incorporating various concepts that help maintain a coherent fingerprint, making it as close to that of a real user as possible when scraping a website.

To use ASP, just enable the [asp parameter](https://scrapfly.io/docs/scrape-api/getting-started?language=node_js#api_param_asp) in your API call.

 Scrapfly is capable of identifying and resolving obstacles posed by commonly used anti-scraping measures. Our platform also provides support for custom anti-scraping measures implemented by popular websites. Scrapfly ASP bypass does not require any extra input from you, and **you will receive successful responses automatically**.

---

 If you are interested in understanding the technical aspects of how we achieve this undetectability, we have published a series of articles on the subject available in the [learning resources section](#learning_resources) below.

**Usage and Abuse Limitation**

To summarize our [TOS](https://scrapfly.io/terms-of-service), the following uses are prohibited:

- Automated Online Payment
- Account Creation
- Spam Posts
- Vote Falsification
- Credit Card Testing
- Login Brute Force
- Referral / Gifting Systems
- Ads Fraud
- Banks
- Ticketing (Automated Buying Systems)
- Betting, Casino, Gambling
The use of ASP can be authorized for cybersecurity firms (red teams) after obtaining approval from the relevant parties for the specific domains they wish to test.

## Usage

When **ASP** is enabled, anti-bot solution vendors are automatically detected and everything needed to bypass them is managed for you.

**NodeJS**

```js
const params = new URLSearchParams({
  "asp": true,
  "key": "__API_KEY__",
  "url": "https://httpbin.dev/anything",
});

const url = "https://api.scrapfly.io/scrape?" + params.toString();

const options = {
  method: "GET",
};

try {
  const response = await fetch(url, options);

  if (!response.ok) {
    const errorData = await response.json();
    const errorMsg = errorData.message || errorData.description || 'Request failed';
    throw new Error(`HTTP error ${response.status}: ${errorMsg}`);
  }

  const data = await response.json();
  console.log(data);

  // Access the scrape result
  if (data.result) {
    console.log(data.result);
  }
} catch (error) {
  console.error("Error:", error.message);
  throw error;
}
```

 

**HTTP**

```
https://api.scrapfly.io/scrape?asp=true&key=__API_KEY__&url=https%3A%2F%2Fhttpbin.dev%2Fanything
```

**ASP will fine-tune some parameters regardless of user configuration. Some examples are listed below:**

These adjustments can increase the request credit price; see the [pricing section](#pricing) for more details.

- **[Proxy Pool:](https://scrapfly.io/docs/scrape-api/proxy?language=node_js)** ASP can access exclusive private proxy pools specific to scraped targets or upgrade to a better general proxy pool.
- **[Browser Usage:](https://scrapfly.io/docs/scrape-api/javascript-rendering?language=node_js)** ASP might enable browser rendering to bypass pages that require JavaScript.
- **[Headers:](https://scrapfly.io/docs/scrape-api/custom?language=node_js)** Some browser headers set by you might be ignored or modified. Headers based on resource type (image, file, html, etc.) and referer can be fine-tuned as well. We can also add custom headers if the target or challenge method requires them (see the sketch after this list).
    - `referer`: auto-generated if not present; you can pass `none` as the header value to send no `referer` header to the target website
    - `cookie`: ASP automatically handles session usage and reuses challenge cookies for faster results
    - `accept`: can be changed depending on the type of resource (images, script, json, xhr, etc.)
    - `content-type`: set based on the request body and the target website's format
    - `user-agent`: make sure to set a custom user-agent only when required by the target website, as the user agent is already managed by ASP for optimal bypass
        - Chrome-based user agents are ignored and replaced by the one provided for the fingerprint
        - Non-Chrome user agents are left untouched
- **[Country:](https://scrapfly.io/docs/scrape-api/proxy?language=node_js)** Based on the target website's location and usual traffic, ASP might fine-tune the proxy country. If you set [country](https://scrapfly.io/docs/scrape-api/getting-started?language=node_js#api_param_country) explicitly, ASP will respect it.
- **[OS:](https://scrapfly.io/docs/scrape-api/custom?language=node_js)** To align the fingerprint for optimal bypass, we may change the OS and related headers based on the exit proxy hardware.
- **Body:** JSON bodies are re-encoded to produce the same serialized output as a real web browser.
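
For instance, a minimal sketch (building on the fetch pattern from the Usage section above) of overriding headers while ASP manages the rest; the extra header and target URL are hypothetical, and the `headers[...]` query syntax is the one shown in the API usage notes later on this page:

```js
// Sketch: suppress the auto-generated referer and pass a custom header.
// ASP still manages the user-agent, cookies and other fingerprint headers.
const params = new URLSearchParams({
  key: "__API_KEY__",
  asp: true,
  url: "https://httpbin.dev/anything",
  "headers[referer]": "none", // send no referer header at all
  "headers[x-requested-with]": "XMLHttpRequest", // hypothetical extra header
});

const response = await fetch("https://api.scrapfly.io/scrape?" + params.toString());
const data = await response.json();
console.log(data.result);
```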
 
## ASP Limitations

While popular anti-bot vendors can be bypassed without any additional effort, some areas still require manual configuration of your calls.

For best results, it's important to understand how the target websites work and replicate their behavior in your scraping calls. ASP handles bot detection; it's up to you to configure the last-mile settings to avoid identification through usage patterns.

### How to Avoid Anti-Bot Detection on POST Requests

 Avoiding anti-bot detection on a POST request can be tricky, but there are some key areas to focus on:

1. Mimic a real user's behavior: Anti-bot systems often check for unusual behavior that may indicate a bot, such as a high number of requests from the same IP address or at the same time. You can mimic a real user's behavior by visiting a few pages first to retrieve navigation cookies and referer URLs.
2. Handling CSRF: Cross-Site Request Forgery (CSRF) is a common anti-bot measure used by websites.
    
     For more, see these tutorials and resources:
    
    
    - [CSRF header tutorial](https://scrapfly.io/scrapeground/headers/csrf) on Scrapfly's Scrapeground.
    - [introduction to headers in scraper blocking](https://scrapfly.io/blog/posts/how-to-avoid-web-scraping-blocking-headers/) blog post.
3. Use realistic headers: Anti-bot systems can detect bots by looking at the headers of the requests. You should replicate the headers of a real user's request as closely as possible, including the `Accept`, `Content-Type`, `Referer` and `Origin` headers. Make sure to correctly configure the `Accept` and `Content-Type` values for the content you expect (JSON, HTML).
    
     For more, see these tutorials and resources:
    
    
    - [Referer](https://scrapfly.io/scrapeground/headers/referer) header tutorial on Scrapfly's Scrapeground.
    - [introduction to headers in scraper blocking](https://scrapfly.io/blog/posts/how-to-avoid-web-scraping-blocking-headers/) blog post.
4. Authentication: If the website requires authentication, make sure you include the correct credentials in your request. This might involve logging in to the website first, then including the session cookie or token in your POST request. If the API/website requires this, ASP cannot manage it for you; you must handle it on your side.
    
     For more, see these tutorials and resources:
    
    
    - [Cookies authentication](https://scrapfly.io/scrapeground/cookies) tutorial on Scrapfly's Scrapeground.
 
Overall, the key to bypassing anti-bot measures on a POST request is to replicate the headers, cookies, and authentication of a regular browser request as closely as possible, as in the sketch below. This requires careful inspection of the website's code and network traffic to identify the required elements.
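
Putting these points together, here is a minimal sketch of a POST call through the Scrape API with browser-like headers and a session. It assumes the API forwards the method and body of your call to the target (see the [Customize Request](https://scrapfly.io/docs/scrape-api/custom?language=node_js) documentation); the target URL, payload, and session name are hypothetical placeholders:

```js
// Sketch: POST with realistic headers, reusing cookies from prior navigation.
const params = new URLSearchParams({
  key: "__API_KEY__",
  asp: true,
  session: "checkout-flow", // reuse cookies gathered on earlier GET calls
  url: "https://example.com/api/cart", // hypothetical target endpoint
  "headers[accept]": "application/json",
  "headers[content-type]": "application/json",
  "headers[origin]": "https://example.com",
});

const response = await fetch("https://api.scrapfly.io/scrape?" + params.toString(), {
  method: "POST", // method and body are forwarded to the target
  body: JSON.stringify({ productId: 123, quantity: 1 }), // hypothetical payload
});
console.log((await response.json()).result);
```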

### Website with Private/Hidden API

 Scraping a private API can be a bit more challenging than scraping public APIs. Here are some recommendations to follow:

1. Make sure you have permission: Before scraping any private API, make sure you have the necessary permission from the website owner or API provider. Scraping a private API without permission can result in legal consequences.
2. Mimic a real user: When scraping a private API, it's important to mimic a real user as closely as possible. This means sending the same headers and parameters that a real user would send when accessing the API.
3. Use authentication: Most private APIs require some form of authentication, such as a token or API key. Make sure you obtain the necessary credentials and use them in your requests.
4. Monitor for changes: Private APIs can change over time, so it's important to monitor for any changes in the API's structure or authentication requirements. If you notice any changes, update your scraping code accordingly.
 
 Overall, scraping private APIs requires more attention to detail and careful configuration of requests. Following these recommendations can help ensure a successful and ethical scraping process.
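
As an illustration, here is a hedged sketch of calling a hypothetical private API endpoint with a legitimately obtained token passed as a header; the endpoint and token are placeholders, and it assumes the scraped body is exposed as `result.content` in the API response:

```js
// Sketch: scraping a hypothetical private API endpoint with its auth token.
const params = new URLSearchParams({
  key: "__API_KEY__",
  asp: true,
  url: "https://example.com/internal/api/v2/listings", // hypothetical endpoint
  "headers[accept]": "application/json",
  "headers[authorization]": "Bearer <your-token>", // placeholder credential
});

const response = await fetch("https://api.scrapfly.io/scrape?" + params.toString());
const { result } = await response.json();
console.log(JSON.parse(result.content)); // assumes the body lands in result.content
```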

## Maximize Your Success Rate

#### Network Quality

In many cases, datacenter IPs are sufficient. However, anti-bot vendors protecting websites may check the origin of an IP to determine whether the traffic comes from a datacenter or a regular connection. In such cases, residential networks can provide a better IP reputation, as their addresses are registered under regular consumer ASNs.

- [Introduction To Proxies in Web Scraping](https://scrapfly.io/blog/posts/introduction-to-proxies-in-web-scraping/)
- [How to Avoid Web Scraping Blocking: IP Address Guide](https://scrapfly.io/blog/posts/how-to-avoid-web-scraping-blocking-ip-addresses/)
- [Learn how to change the network type](https://scrapfly.io/docs/scrape-api/proxy?language=node_js#api)
 
> **API Usage:** `proxy_pool=public_residential_pool`, [check out the related documentation](https://scrapfly.io/docs/scrape-api/getting-started?language=node_js#api_param_proxy_pool "Select Residential Proxy via HTTP API")
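
A minimal sketch of selecting the residential pool with the fetch pattern used earlier (the target URL is a placeholder):

```js
// Sketch: upgrade to the residential proxy pool for a tougher target.
const params = new URLSearchParams({
  key: "__API_KEY__",
  asp: true,
  proxy_pool: "public_residential_pool", // better IP reputation than datacenter
  url: "https://example.com/", // hypothetical protected page
});

const response = await fetch("https://api.scrapfly.io/scrape?" + params.toString());
console.log((await response.json()).result);
```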

#### Use a Browser

Most anti-bots check the browser fingerprint and JavaScript engine to generate detection metrics.

> **API Usage:** `render_js=true`, [check out the related documentation](https://scrapfly.io/docs/scrape-api/getting-started?language=node_js#api_param_render_js "Enable browser rendering via HTTP API")

#### Verify Cookies and Headers

Observe the headers and cookies of successful regular calls; from these you can figure out whether you need to add extra headers or retrieve specific cookies to authenticate. You can use [the dev tools and inspect the network activity](https://developer.chrome.com/docs/devtools/network/).

- [What are Chrome Devtools?](https://scrapfly.io/blog/answers/browser-developer-tools-in-web-scraping/)
- [How to Avoid Web Scraping Blocking: Headers Guide](https://scrapfly.io/blog/posts/how-to-avoid-web-scraping-blocking-headers/)
 
> **API Usage:** `headers[referer]=https%3A%2F%2Fweb-scraping.dev` (value is [URL encoded](https://scrapfly.io/web-scraping-tools/urlencode "URL encode")), [check out the related documentation](https://scrapfly.io/docs/scrape-api/getting-started?language=node_js#api_param_headers "Customize API headers")

#### Navigation Coherence

To ensure navigation coherence when scraping unofficial APIs, you may need to obtain cookies from your navigation. One way to do this is to enable a session and JavaScript rendering during the initial scrape to retrieve cookies. Once the cookies are stored in your session, you can continue scraping without rendering JavaScript while still applying the previously obtained cookies for consistency. Take a look at the following Scrapfly features to achieve this:

- [Using Session (sticky proxy - keeps the same IP, cookie memory)](https://scrapfly.io/docs/scrape-api/session?language=node_js)
- [Javascript Rendering - Headless Browser](https://scrapfly.io/docs/scrape-api/javascript-rendering?language=node_js)
 
> **API Usage:** `session=my-unique-session-name`, [check out the related documentation](https://scrapfly.io/docs/scrape-api/getting-started?language=node_js#api_param_session "Session Usage")

> **API Usage:** `render_js=true`, [check out the related documentation](https://scrapfly.io/docs/scrape-api/getting-started?language=node_js#api_param_render_js "Javascript Rendering - Headless browser")
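
Putting both features together, a minimal sketch (the session name and target URLs are illustrative placeholders): the first call renders JavaScript to collect cookies, and subsequent calls reuse the session without the browser:

```js
// Sketch: acquire cookies with a full browser once, then keep scraping without it.
async function scrape(extra) {
  const params = new URLSearchParams({
    key: "__API_KEY__",
    asp: true,
    session: "my-unique-session-name", // sticky proxy + cookie memory
    ...extra,
  });
  const response = await fetch("https://api.scrapfly.io/scrape?" + params.toString());
  return (await response.json()).result;
}

// 1. Initial page load with JavaScript rendering to collect cookies.
await scrape({ render_js: true, url: "https://example.com/" });

// 2. Follow-up calls reuse the stored cookies without the headless browser.
const data = await scrape({ url: "https://example.com/api/items" });
console.log(data);
```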

#### Geo Blocking

When browsing certain websites, users may encounter blocks based on their IP location. Scrapfly bypasses this by default, as it selects a random country from its pool. However, specifying the country to match the website's location can help avoid geo-blocking.

> **API Usage:** `country=us`, [check out the related documentation](https://scrapfly.io/docs/scrape-api/getting-started?language=node_js#api_param_country "Select country via HTTP API")
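
A quick sketch of pinning the proxy country (the country code and URL are illustrative):

```js
// Sketch: force US exit IPs for a website that only serves US traffic.
const params = new URLSearchParams({
  key: "__API_KEY__",
  asp: true,
  country: "us", // explicit country; ASP will respect it
  url: "https://example.com/us-only-page", // hypothetical geo-blocked page
});

const response = await fetch("https://api.scrapfly.io/scrape?" + params.toString());
console.log((await response.json()).result);
```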

## Pricing

Our Anti-Scraping Protection (ASP) is a sophisticated tool designed to adapt to the various anti-scraping measures implemented on different websites. To achieve this, ASP dynamically fine-tunes your configuration parameters based on the target and the anti-scraping solution in place, which can have an impact on pricing.

The main factors affecting the API cost are:

- Browser Usage
- Proxy Pool
- Target/Shield
 
The [pricing grid](https://scrapfly.io/docs/scrape-api/billing?language=node_js#billing) covers browser usage and proxy network type. Fees for specific targets/shields are not publicly documented; only a few very specific targets carry extra fees, otherwise there is no additional cost (those fees are displayed in the cost section of your log). For the full cost breakdown, see the [dedicated troubleshooting section](https://scrapfly.io/docs/scrape-api/troubleshoot?language=node_js#cost).

To ensure predictability and control of your spending, we recommend creating an account and gradually monitoring usage costs as you increase your volume. You can also set an [API budget](https://scrapfly.io/docs/scrape-api/getting-started?language=node_js#api_param_cost_budget) on each scrape call with `cost_budget=25`. Once you have determined the actual cost, check our [set of tools](https://scrapfly.io/docs/scrape-api/getting-started?language=node_js#billing) to make spending more predictable and stay within budget.
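
For example, a minimal sketch of capping the credit spend of a single call with the `cost_budget` parameter (the budget value and target URL are illustrative):

```js
// Sketch: cap the API credits a single scrape call may consume.
const params = new URLSearchParams({
  key: "__API_KEY__",
  asp: true,
  cost_budget: 25, // maximum credits allowed for this call
  url: "https://httpbin.dev/anything",
});

const response = await fetch("https://api.scrapfly.io/scrape?" + params.toString());
console.log((await response.json()).result);
```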

> **It's totally free on non-blocked scrapes.** If you scrape various websites and don't know which are protected, just keep ASP enabled; no extra cost is applied to non-protected traffic.
>
> Furthermore, when ASP is enabled, a lot of things are handled automatically through [the fine-tuning of parameters](#usage) to prevent detection, which results in savings.

## Integration

- [ASP example with Python SDK](https://scrapfly.io/docs/onboarding#asp)
 
## Related Errors

All related errors are listed below. You can see the full description and an example error response in the [Errors section](https://scrapfly.io/docs/scrape-api/errors#proxy). You can also check the [troubleshooting section](https://scrapfly.io/docs/scrape-api/troubleshoot?language=node_js) if you have timeout issues with ASP.

- [ERR::ASP::CAPTCHA\_ERROR](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::CAPTCHA_ERROR "Something went wrong with the captcha. We will work to fix the problem as soon as possible") - Something went wrong with the captcha. We will work to fix the problem as soon as possible
- [ERR::ASP::CAPTCHA\_TIMEOUT](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::CAPTCHA_TIMEOUT "The time budgeted to solve the captcha was reached") - The time budgeted to solve the captcha was reached
- [ERR::ASP::SHIELD\_ERROR](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::SHIELD_ERROR "The ASP encountered an unexpected problem. We will fix it as soon as possible. Our team has been alerted") - The ASP encountered an unexpected problem. We will fix it as soon as possible. Our team has been alerted
- [ERR::ASP::SHIELD\_EXPIRED](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::SHIELD_EXPIRED "The ASP shield previously set has expired; you must retry") - The ASP shield previously set has expired; you must retry
- [ERR::ASP::SHIELD\_NOT\_ELIGIBLE](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::SHIELD_NOT_ELIGIBLE "The requested feature is not eligible while using the ASP for the given protection/target") - The requested feature is not eligible while using the ASP for the given protection/target
- [ERR::ASP::SHIELD\_PROTECTION\_FAILED](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::SHIELD_PROTECTION_FAILED "The ASP shield failed to solve the challenge against the anti-scraping protection") - The ASP shield failed to solve the challenge against the anti-scraping protection
- [ERR::ASP::TIMEOUT](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::TIMEOUT "The ASP took too much time to solve or respond") - The ASP took too much time to solve or respond
- [ERR::ASP::UNABLE\_TO\_SOLVE\_CAPTCHA](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::UNABLE_TO_SOLVE_CAPTCHA "Despite our efforts, we were unable to solve the captcha. This can happen sporadically; please retry") - Despite our efforts, we were unable to solve the captcha. This can happen sporadically; please retry
- [ERR::ASP::UPSTREAM\_UNEXPECTED\_RESPONSE](https://scrapfly.io/docs/scrape-api/error/ERR::ASP::UPSTREAM_UNEXPECTED_RESPONSE "The response given by the upstream after challenge resolution was not expected. Our team has been alerted") - The response given by the upstream after challenge resolution was not expected. Our team has been alerted
 
## Learning Resources

- [How to Scrape Without Getting Blocked? In-Depth Tutorial](https://scrapfly.io/blog/posts/how-to-scrape-without-getting-blocked-tutorial/)
- [How to Avoid Web Scraper IP Blocking?](https://scrapfly.io/blog/posts/how-to-avoid-web-scraping-blocking-ip-addresses/)
- [How TLS Fingerprint is Used to Block Web Scrapers?](https://scrapfly.io/blog/posts/how-to-avoid-web-scraping-blocking-tls/)
- [How Javascript is Used to Block Web Scrapers? In-Depth Guide](https://scrapfly.io/blog/posts/how-to-avoid-web-scraping-blocking-javascript/)
- [How to Scrape Dynamic Websites Using Headless Web Browsers](https://scrapfly.io/blog/posts/scraping-using-browsers/)