8 anti-bot vendors bypassed
98% peak success rate on Cloudflare
1 parameter to enable: asp=True
0 charges on failed bypass attempts
Eight Vendors. One Parameter.
Select a vendor to see detailed bypass coverage, detection layers handled, and per-target configuration options.
One Flag, Every Language
Set asp=True once. Every official SDK passes it through identically.
One parameter enables the full ASP stack — TLS, HTTP/2, JS challenges, CAPTCHAs, behavioral biometrics handled server-side.
from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse

client = ScrapflyClient(key="API KEY")
api_response: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://httpbin.dev/html',
        # bypass anti-scraping protection
        asp=True
    )
)
print(api_response.result)
import {
    ScrapflyClient, ScrapeConfig
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });
let api_result = await client.scrape(
    new ScrapeConfig({
        url: 'https://httpbin.dev/html',
        // bypass anti-scraping protection
        asp: true,
    })
);
console.log(api_result.result);
http https://api.scrapfly.io/scrape \
    key==$SCRAPFLY_KEY \
    url==https://httpbin.dev/html \
    asp==true
package main
import (
	"fmt"
	"log"

	"github.com/scrapfly/go-scrapfly"
)

func main() {
	// create the client; handle errors instead of discarding them
	client, err := scrapfly.New("API KEY")
	if err != nil {
		log.Fatal(err)
	}
	result, err := client.Scrape(&scrapfly.ScrapeConfig{
		URL: "https://httpbin.dev/html",
		// bypass anti-scraping protection
		ASP: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result.Result.Content)
}
Two Engines, Automatic Routing
Scrapfly selects the right engine per target. Scrapium handles JavaScript-rendered challenges such as Cloudflare Turnstile, Kasada proof-of-work, and PerimeterX Human Challenge. Curlium handles HTTP-layer detection via JA4, HTTP/2 SETTINGS frames, and client hints. Both engines share the same Chrome identity across the session so detection systems never see a fingerprint change.
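The routing decision described above can be sketched as a simple classifier. This is an illustrative sketch only: Scrapfly's real routing is server-side and proprietary, and the signal names below are hypothetical labels, not actual API values.

```python
# Hypothetical mapping of detection signals to the engine that handles them.
JS_CHALLENGE_SIGNALS = {"cloudflare_turnstile", "kasada_pow", "perimeterx_human"}
HTTP_LAYER_SIGNALS = {"ja4_check", "http2_settings", "client_hints"}

def select_engine(detected_signals: set[str]) -> str:
    """Route to Scrapium when any JS-rendered challenge is present;
    otherwise Curlium handles detection at the HTTP layer."""
    if detected_signals & JS_CHALLENGE_SIGNALS:
        return "scrapium"   # stealth Chromium, solves rendered challenges
    return "curlium"        # HTTP client with matching Chrome fingerprints

print(select_engine({"ja4_check"}))                           # curlium
print(select_engine({"ja4_check", "cloudflare_turnstile"}))   # scrapium
```

Either way the request is served, both engines present the same Chrome identity, which is the property that keeps the session coherent.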
What ASP Handles Automatically
TLS fingerprinting, JS challenges, CAPTCHAs, behavioral biometrics - all solved server-side.
One Chrome Identity. Eight Anti-Bot Vendors. Zero Configuration.
ASP is not a library of per-vendor tricks. It is one patched Chrome stack - Curlium at the HTTP layer, Scrapium at the browser layer - serving a single coherent identity to every detection vendor. Bot Score, _abck, datadome, _px3, kas.js, reese84, TS cookies, aws-waf-token - every vendor sees the same authentic Chrome.
TLS and Network Fingerprints
JA3/JA4 signatures, HTTP/2 SETTINGS, cipher suites, and HTTP/3 QUIC transport parameters matched to real Chrome builds updated within 3 days of stable releases.
Challenge Solving
Turnstile, reCAPTCHA, slider, press-and-hold, proof-of-work. All handled automatically with no solver keys or third-party accounts required.
Smart Proxy Geolocation
Residential and datacenter pools across 100+ countries. Proxy location, Accept-Language, client hints, and timezone stay coherent so anti-bot systems never flag locale mismatches.
Session Intelligence
Unblocked sessions reused across requests. Cookies, tokens, and JA4 stay stable so target servers see a single coherent visitor throughout the scrape run.
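Session reuse can be shown against the HTTP API directly. The `key`, `url`, and `asp` parameters appear in the examples above; the `session` parameter name is taken from Scrapfly's API, but treat the exact values here as illustrative.

```python
from urllib.parse import urlencode

def scrape_url(api_key: str, url: str, session: str) -> str:
    """Build a Scrapfly API request URL that joins an existing session."""
    params = {
        "key": api_key,
        "url": url,
        "asp": "true",       # enable Anti-Scraping Protection
        "session": session,  # reuse the unblocked session across requests
    }
    return "https://api.scrapfly.io/scrape?" + urlencode(params)

# Both requests present the same coherent visitor to the target:
# cookies, tokens, and the JA4 fingerprint persist across calls.
first = scrape_url("API KEY", "https://example.com/page/1", "product-crawl")
second = scrape_url("API KEY", "https://example.com/page/2", "product-crawl")
print("session=product-crawl" in first and "session=product-crawl" in second)  # True
```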
Why Open-Source Bypass Libraries Fail
Every major anti-bot vendor pushes edge updates weekly to monthly. Hand-rolled bypass stacks lag, leak fingerprints, and break silently. Scrapfly tracks every upstream deploy.
| Tool | Result |
| --- | --- |
| curl-impersonate | lags Chrome by months, no JS challenge support |
| cloudscraper | no Turnstile, no modern challenge types |
| undetected-chromedriver | JS-patched, toString() leaks expose the override |
| Playwright + stealth | Canvas/Audio/WebGL mismatches at C++ level |
| Scrapfly ASP | 94-98% success, tracked daily, 8 vendors, no lag |
What Fingerprint Drift Costs You
A single mismatched signal mid-session collapses the whole run. This is the failure mode Scrapfly engineers against every day.
Docs, Tools, and Ready-Made Scrapers
Everything you need to go from zero to bypassing anti-bot protection in production.
ASP Documentation
Full reference for asp=True, per-vendor options, retry behavior, and credit costs.
Developer Docs
Academy
Interactive courses on web scraping, anti-bot systems, and fingerprint detection.
Start learning
Open-Source Scrapers
40+ production scrapers using asp=True on GitHub. Copy, paste, and customize.
Explore repo
Fingerprint Tools
JA3/JA4 checker, Canvas fingerprint tester, WebGL inspector, HTTP/2 analyzer.
Browse tools
Seamlessly integrate with frameworks & platforms
Plug Scrapfly into your favorite tools, or build custom workflows with our first-class SDKs.
LLM & RAG frameworks
Frequently Asked Questions
Can I bypass Cloudflare with asp=True?
Yes. Cloudflare Bot Management, Turnstile CAPTCHA, and JavaScript challenges are all handled automatically. The current success rate is 98%. Set asp=True and Scrapfly selects Scrapium (stealth Chromium) for JS challenges or Curlium (HTTP-layer) depending on what Cloudflare deploys on the target page. See the Cloudflare bypass page for detailed coverage.
How does asp=True work?
Setting asp=True activates Scrapfly's Anti-Scraping Protection layer. The system detects which anti-bot vendor is active on the target URL, then routes the request through the appropriate engine. Scrapium handles JavaScript-rendered challenges (Turnstile, proof-of-work, Human Challenge). Curlium handles HTTP-layer detection via JA4 fingerprints, HTTP/2 SETTINGS, and client hints. Both engines share the same Chrome identity so the session looks coherent to any detection system.
What happens if the bypass fails?
You are not charged. Failed bypass attempts do not consume API credits. The API returns a structured error response so you can log, alert, or retry. Transient failures on heavily protected targets are automatically retried within the same request before returning an error.
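For failures that survive the built-in retries, a client-side retry loop with backoff is the usual pattern. The sketch below is a generic helper, not Scrapfly's SDK: the error shape is illustrative and the scrape function is a stub you would replace with a real API call.

```python
import time

def scrape_with_retry(scrape_fn, max_attempts: int = 3, backoff: float = 1.0):
    """Call scrape_fn until it reports success or attempts run out.
    scrape_fn returns a dict like {"success": bool, "error": str | None}
    (an assumed shape for illustration, not Scrapfly's exact schema)."""
    for attempt in range(1, max_attempts + 1):
        result = scrape_fn()
        if result.get("success"):
            return result
        if attempt < max_attempts:
            time.sleep(backoff * attempt)  # linear backoff between retries
    return result  # final structured error: log or alert from here

# Stub that fails twice, then succeeds -- failed attempts cost no credits.
attempts = {"n": 0}
def fake_scrape():
    attempts["n"] += 1
    ok = attempts["n"] >= 3
    return {"success": ok, "error": None if ok else "challenge_failed"}

print(scrape_with_retry(fake_scrape, backoff=0)["success"], attempts["n"])  # True 3
```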
Which anti-bot vendors does ASP cover?
Cloudflare (98%), Akamai (97%), DataDome (96%), Imperva/Incapsula (96%), AWS WAF (96%), PerimeterX (95%), F5 BIG-IP (95%), and Kasada (94%). Each vendor has a dedicated page with technical details on which detection signals are handled. Use the coverage grid above to navigate to the relevant page.
Why don't residential proxies alone solve anti-bot protection?
Modern anti-bot systems inspect TLS fingerprints (JA3/JA4), HTTP/2 SETTINGS frames, Canvas and WebGL outputs, Navigator properties, and behavioral timing signals. Rotating to a residential IP fixes none of that. Scrapfly aligns all of these layers to a real Chrome build updated within 3 days of stable releases. The proxy pool is also coherent - Accept-Language, timezone, and client hints match the exit IP country so locale mismatches do not flag the request.
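The locale-coherence check described above can be illustrated with a small sketch. The country-to-locale table is a hand-made sample, not Scrapfly's internal data: the point is only that exit-IP country, Accept-Language, and timezone must agree.

```python
# Sample entries for illustration only.
COUNTRY_LOCALE = {
    "us": {"accept_language": "en-US,en;q=0.9", "timezone": "America/New_York"},
    "fr": {"accept_language": "fr-FR,fr;q=0.9", "timezone": "Europe/Paris"},
    "jp": {"accept_language": "ja-JP,ja;q=0.9", "timezone": "Asia/Tokyo"},
}

def is_coherent(proxy_country: str, accept_language: str, timezone: str) -> bool:
    """Flag the mismatch an anti-bot system would flag: e.g. a French
    exit IP sending US-English headers and a New York timezone."""
    expected = COUNTRY_LOCALE.get(proxy_country)
    if expected is None:
        return False
    return (expected["accept_language"] == accept_language
            and expected["timezone"] == timezone)

print(is_coherent("fr", "fr-FR,fr;q=0.9", "Europe/Paris"))      # True
print(is_coherent("fr", "en-US,en;q=0.9", "America/New_York"))  # False
```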
How much does anti-bot bypass cost?
ASP requests start at 30 credits per call. The exact cost scales with the target's complexity - JavaScript rendering, CAPTCHA solving, and session persistence each add credits. You only pay for successful responses. See pricing for the full credit table.
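The cost model can be sketched as base plus increments. Only the 30-credit base comes from the text above; the per-feature increments below are made-up placeholders, not Scrapfly's actual price list, so check the pricing page for real numbers.

```python
BASE_ASP_COST = 30  # credits per ASP call (stated above)
FEATURE_COST = {    # placeholder increments, for illustration only
    "render_js": 5,
    "captcha_solve": 10,
    "session": 1,
}

def estimate_credits(features: set[str]) -> int:
    """Estimate credits for a successful call; failures are billed 0."""
    return BASE_ASP_COST + sum(FEATURE_COST.get(f, 0) for f in features)

print(estimate_credits(set()))                           # 30
print(estimate_credits({"render_js", "captcha_solve"}))  # 45
```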
Can I test on my specific protected targets before committing?
Yes. The free plan includes 1,000 API credits with no credit card required. Set asp=True and test your exact targets. Failed requests are not charged, so the free credits measure real success rates on your actual workload with no risk.
Need the full scraping stack?
Anti-bot bypass is one layer of Scrapfly. The full Web Scraping API adds residential proxies, cloud browsers, AI extraction, and async webhooks on the same endpoint. Or go deeper: Scrapium for direct stealth Chromium control, Curlium for raw HTTP with perfect fingerprints, Extraction API for structured output from any HTML.