Anti Scraping Protection (ASP)

Service disruptions can occur on this feature regardless of our efforts. Protections evolve; when they do, we need to find and adapt our solution, which can take days or even weeks to reach production grade. If you use this feature, keep this in mind and design your software accordingly.

ASP is a technology we developed to bypass anti-scraping protections.

When ASP is triggered, it takes control of the request to resolve the protection: it decides whether to enable or disable JS rendering, lets the session solve the captcha, and so on.

Once the protection's challenge is resolved, ASP is triggered each time you revisit the site and injects the correct mechanisms to avoid facing a new challenge. The first request, which solves the challenge, can therefore take from 30 to 120 seconds depending on the challenge type. Once that first scrape is done, subsequent requests are as fast as usual.
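Since the first, challenge-solving request can take much longer than later ones, it is worth choosing the client-side timeout accordingly. A minimal sketch, assuming a hypothetical helper you would wire into your own client (the 120-second ceiling comes from the challenge-solving range quoted above; the 30-second figure for warm sessions is an assumption, not an official value):

```javascript
// Pick a per-request timeout based on whether the ASP challenge for this
// site has already been solved in the current session.
function chooseTimeoutMs(challengeAlreadySolved) {
  // First scrape: allow up to the 120s worst case quoted for challenge solving.
  // Subsequent scrapes: a much shorter timeout is enough (assumed value).
  return challengeAlreadySolved ? 30 * 1000 : 120 * 1000;
}
```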

You have nothing to do; our services manage ASP automatically, starting it as soon as a captcha or anti-bot solution is detected on the website. We won't play cat-and-mouse games with anti-bot vendors, and we will not explicitly enumerate the services we can handle. We bypass many solutions, from simple captchas to the most advanced anti-bot products on the market, and we also develop specific solutions for dedicated popular websites. If you want to know more, you can ask us via the chat at the bottom-left of the screen.

Each time ASP detects and resolves a challenge (captcha or anti-bot solution), a session is created, even if you don't have sessions enabled. This ensures all cookies are applied correctly without you having to manage them; it is invisible from your point of view.


Scrapfly ASP resolves many captcha systems automatically.

The following captcha systems are currently supported:

  • Google Recaptcha
  • Hcaptcha
  • Geetest
We continuously add more captcha providers and update our solutions over time.

Anti Bot

Scrapfly detects and resolves challenges from the well-known anti-scraping solutions on the market, and also supports custom solutions for popular websites. Anti-bot bypasses are transparent: there is no extra work on your side, and you directly retrieve the successful response.

Keep in mind that anti-bot solutions evolve and we may need to adapt our bypass techniques; this is why you should handle ASP errors correctly when relying on ASP.
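Handling ASP errors correctly mostly means deciding which failures are worth retrying. A minimal sketch under assumptions: the error payload shape here (an `error.code` string such as `"ERR::ASP::SHIELD_PROTECTION_FAILED"`) is illustrative only; check the Errors section for the actual codes your responses contain.

```javascript
// Decide whether a failed scrape should be retried.
// ASP failures are often transient (the bypass is being adapted on our side),
// so retrying with backoff is reasonable; most other errors are not transient.
function shouldRetry(error) {
  if (!error) return false; // success: nothing to retry
  return typeof error.code === 'string' && error.code.indexOf('ERR::ASP::') === 0;
}
```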

If you plan to target a protected website, you should try several configurations: with or without a browser, and with residential proxies. If you send a POST request with a body, you must configure the headers and content type to mimic a real call.

If you are still blocked despite all your attempts, you can contact us through the chat so we can investigate.


When ASP is enabled, the captcha or anti-bot vendor is automatically detected. Each kind of protection has a different implementation of challenge.context and challenge.result.

var request = require('request');

var options = {
  'method': 'GET',
  'url': '', // Scrape API endpoint (left blank in this example)
  'qs': {
    'key': '', // your API key
    'url': '', // target URL to scrape
    'asp': 'true'
  }
};

request(options, function (error, response) {
  if (error) throw new Error(error);
  console.log(response.body);
});

Example of response

    "context": {
        "asp": {
            "identity": "baea954b95731c68ae6e45bd1e252eb4560cdc45",
            "session_identity": "3375a5d49895472125d73bd5c89032afd0a24909",
            "shield_name": "captcha",
            "success": true,
            "error": null,
            "challenge": {
                "done": true,
                "success": true,
                "state": "solved",
                "context": {
                    "site_key": "6Le-wvkSAAAAAPBMRTvw0Q4Muexq9bi0DJwx_mJ-",
                    "token": "03AGdBq25At22qXblbpHjyOPzhoaRbyAMbpfDk17DJcCegDDf4it8zP8X2_6AsHDebS3yAAXN9AtwmDfikBDbPZlFdHA1d1O08X6sLp3yN7a6-nnjQ1XxHerksQb-xJ41p8dfTnO1CE8xr6GsHL9Y0uTUmv_9xFcgnpi1zlkRYCYQlUDg7JJAcSJxFHCPnm0J_aKk3LAQOyP8Lgw_zeYRrY6bzjaYGh9_5Yi8F73Z7-qyQWijIWExJansQsArdHCR8e5HaMVe3Lfe07evFXRDw6_7NaIMtj8hyctRMD-GKvCZlwCC6vs-rJtQtuRnxJKahZmhHkXZvpvqo8tzcKIOSCDLrHUwdm8j5m11G0UbEIARqyeZukX19B9l8MnAhP05Qu3STrgD3R8Mkqn0RCLdPdzpKGTqyYm5GLuzuV9LLSsWVuS_aYcssGKSUbrZHOikfP_dC4erg6N6FsJ1Jt7d4UbiGIXgbAKzRQsFDvmYaEgwl1lVd3WhiMIEh6NCUabS3qnDOdPBy8ewaceb3Opw9brEngV0fan3A3Bn3K3vbperKk2wxmWqKfbk90ua-ErIt2ygopD8f5z8mdib6aJvp6SDStJrcmu2AnhdA3eL0NubUi4nsTGpjlmXotm1MXVHMzWKNHoh5W0XLrSc3nYIKbmmNS4XrHT_wan2Kudz9icCew2v7EZHTmjFlJmVg_RdeXDWkiUEf5KuQtrsrpv195OyyUv_ucmi36Bg04dF45e5-cgp8Svu-sU5q9LHxkcu4wzRM7bqOXqmPLaDx2feERrjPwx6zyYi-O0xAP5xzCwul-VMCR3es-pWfr6ovrd0YiXpZ2L-9KpXLlJD0Hq3y9kezRLSq_xhoRsU8IvptN5jI612G8LMuoorkZLnKsiZNEzmuUoNXPbvhaSlloDRAHPchuBnfzSUtEjTO3WBZ9Qto5xvkR-kYQS_V6KKeYdPUNV6Dvemg-XZX81M2gzt8O4pt2MlG2DyKzv_DEZlf65bzT9G2_sBGwLtUvIGC7cDWolBiQJxoEfOJXI68oXpLyZLx-HZX5BGSbvc29ShuccRLrzAx55N4x6-vjDSOybkz4ZsJURHPqdge6jDGSV-TIlDVcCnxDLdFBs3F52vnWl9WdpHhVGAwJQh590LmBJj4C2kny-1XRFOKkLflNLVZLUmiBH0rqXQUnT1ybNxBDHWXj2wlDp1UD4HVHDbnpXIvAG9JquXqhtelQmAysORyItGrJGy5HYOY_VM6ALWjn0behuy6yRD-KzWEVV_WE_mP3vJs_kaNyW9EswUKY0hirdbB2aZ8sQMGy2bSFekb5aKpcqOuvYWN6v-7BKN_6St_MsO0A-CMZW4-hpvK-AVvaydF3ljKLqMsd_hAyL2yhytpsRgVJVx6HfgZPvwkUQwz2FljeUhWCYLxzIJ9_Jvd2MEOWnj2neg69HG-gDkCJyKztOo16mo5Tew",
                    "url": "",
                    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36"
                "type": "recaptcha",
                "result": {
                    "solution": {
                        "token": "03AGdBq241V6gvHLko7LO8UdhqTPMADuN0-QqrD8fycF4F6h8ioJ9PMF4D2LTcNSeyukdLJ-qrBIrotjnwl3h9dDpp0GyzXpom_b2VQQYrBs-sjv9uUttqQ_wIG0P0yOygrmOc7iZRhnfdF4Nr1Rd_UXXuXFBBHzLxlcid4PsZu5rWF9R1LDsD1Fhyhy6F6gQKk49gpDa-5DOtZ_yNsJaLTjvR8sNPwCJCf_71OHKe6iir3QHAb256_EOE5QjWK36gpzjpMeLNNQz6eZ6sZ4y2wkyff3dGQZ7MuWP65OIoLQJmsdGSKNerGXNZe8z7YqWi_CT4h2x6nOpPbylCH11lyKjniCo6PSR3Ytwhqc2lLtQ5MdQcbdRMXk0SejTvihIloQr96YP7QVWyIRXvierh8Faxctw7j_OW9AblrBY3KxKsvpeK_n4zfIA0zp0HbDwYoKByUxjIt-qfNT0xmMXC4i4NWdziXm6fm36tuBgi-N35CrwwhbmKgXpB5mT4XyEtjG2wMkSS16Wg4nmCUFPo_F6obB3DfRoIjB7jQ-yswj-McwDIYuHpZgIBU1hYaIWyUTAjBn8PN9b8ZIx27ZVEI0L2e_EGvH174PNjAE5Lyn4OI6qaJLZJyXW0nyLdOxklkSkDgjJcT9stuRqGKlmhNohnTwEYoZ-gk3ijklyG1Jf6QK4CjyiXAirxLOtooDIQ0VpBHbMhbasNsej637UYXYYL9mpYeSNho9GNGPuanhqAfk3wAud5pmArfc0t8_qxkMcSyLc5ZICtEZWtTJGnEoKsniALvkNvIl9N-K2UOZg6JU7sPFyaypCxRLO2ybWPNoibXxJTuYdPH0KU8eGbBdyQUUk105XK2dfVBg4KEfvYJTm6NID-c-flixXGL697kyZOV_9Rn3RbGcGO1_AyhlOsmYJOg5r5FIvxn1hofmYJG99-rUYzFvj3p-3h09fNyEnTI2PGjwpXH9FezvZMCXMbvnEZ45zbxvJ7X8cMudooYADG7vViDQAF72WkGEIWR8lFdmAfgYzmRgzXwoBsRVeeiiVYczd9ImLTh4a7BCWg0TeLfO7ptCyXRokaSA8PhS1WhH_OR-ofgHng3zK3sh5MMvWIznm466mcYuyQxqbfzQHYxeyWlhjw3w7jFK_cpWxnZmXIon5waCCQwJWXDDFcqZDNspY5aJI1iANA7bgI2T1WUNfu2MaoICVgPO1Krn8j7cANz-f4na3S0pEYN02wo70wdY9Yof7P4fEz04OdvJsSq2ZtCs9LSCcm9gRNq8mVIIkdijHQyuL4AUAa4gq_ig_9tHu2BJ0J4EmAcJUy1jTWnwrZycSqDi9w3YEz8yLzIod3FuWCpcpU8aQ9eGGCQfWJXDMHwBjXv_CF-sWJ-R1WfzfcOo9j5XbHHH9sEYv80LFEBTfEtkklLlaymlmB2kBCvSWW3LQU1ZlUy0D-13mfvkWPdoc"
                    "error": null,
                    "duration": 102.1

All related errors are listed below. You can find the full description and an example error response in the Errors section.

Maximize Your Success Rate

Network Quality

Most of the time, datacenter IPs are good enough, but websites protected by anti-bot vendors check whether the traffic comes from a datacenter or a regular connection. With a residential network, you get an IP with a better reputation, registered under a regular ASN (which is used to check the origin of the IP). Learn how to change the network type.

Verify Cookies and Headers

Observe the headers and cookies of regular calls that succeed; from them you can figure out whether you need to add extra headers or retrieve specific cookies to be authenticated. You can use the browser's dev tools to inspect the network activity.

Navigation Coherence

You might need to retrieve cookies from normal navigation before calling an unofficial API. The easiest way to achieve that is to scrape once with a session enabled and JS rendering to retrieve the cookies; you can then scrape without rendering JS, as the cookies are now stored in your session and applied back.
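The two-phase pattern above can be sketched as the query parameters sent on each call. This is a sketch under assumptions: the parameter names (`render_js`, `session`, `asp`) follow common Scrape API query-string conventions, but verify the exact names against the API reference before use.

```javascript
// Phase 1: render JS once with a named session so navigation sets the cookies.
function warmupParams(targetUrl, sessionName) {
  return { url: targetUrl, session: sessionName, render_js: 'true', asp: 'true' };
}

// Phase 2: cookies now live in the session, so skip the browser entirely.
function apiCallParams(targetUrl, sessionName) {
  return { url: targetUrl, session: sessionName, render_js: 'false', asp: 'true' };
}
```

Both calls share the same session name, which is what lets the second, browser-less call reuse the cookies collected by the first.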

Geo Blocking

Some websites block navigation based on the IP's location. By default, Scrapfly selects a random country from the pool; specifying a country matching the website's location can help. Learn how to do it.


Pricing is not easy to predict with Anti Scraping Protection. Everything is designed and optimized to reduce cost and maximize the reuse of authenticated sessions (which is free even when protection is activated). Be aware that scraping protected sites at scale has a real cost; the best way to budget your volume is to create an account and monitor the cost while gradually increasing the volume, to avoid surprises. We try to be as transparent as we can on this subject because we know you need to predict and budget the cost of the solution; if you have any questions or feedback, please contact us.

Pricing Grid

Scenario                               API Call Cost
ASP not triggered                      0
ASP + Residential Proxies              25
ASP + Residential Proxies + Browser    25 + 5 = 30
ASP + Datacenter Proxies               1
ASP + Datacenter Proxies + Browser     1 + 5 = 6
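To estimate a budget before running at scale, the grid above can be expressed as a small lookup. A sketch mirroring the table's figures (it does not cover the domain-specific pricing mentioned below):

```javascript
// Credit cost of one API call under ASP, per the pricing grid:
// residential proxies cost 25, datacenter 1, browser rendering adds 5,
// and an untriggered ASP costs nothing extra.
function aspCallCost(triggered, residential, browser) {
  if (!triggered) return 0;            // ASP not triggered
  var base = residential ? 25 : 1;     // proxy network component
  var browserExtra = browser ? 5 : 0;  // headless browser component
  return base + browserExtra;
}
```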

ASP is only billed on successful responses. However, to prevent abuse, the status codes 404, 410, 401, 405, 406, 407, 409, 411, 412, 413, 414, 415, 416, 417, 418, 422, 424, 426, 428 and 456 are billed, since these errors are triggered by bad user configuration.

Therefore, if you are not sure whether a website is protected, you can enable ASP; if nothing is blocking, no extra credits are counted.

You can try to target the desired website through our API player by creating a free account.
The API response contains the header X-Scrapfly-Api-Cost, which indicates the billed amount, and X-Scrapfly-Remaining-Scrape, which indicates the remaining number of Scrape API calls.
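Reading those two headers after each call is an easy way to monitor spend while ramping up volume. A minimal sketch, assuming the headers arrive as a plain map with lowercased keys (as Node.js HTTP clients deliver them):

```javascript
// Extract the billing information from a Scrape API response's headers.
function readBilling(headers) {
  return {
    billed: parseInt(headers['x-scrapfly-api-cost'], 10),
    remaining: parseInt(headers['x-scrapfly-remaining-scrape'], 10)
  };
}
```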

Some specific protected domains have a special price due to heavy protection; you can ask through support, or test via the player to see how many credits are billed.