// PRODUCT

Web Unblocker

One parameter. Every anti-bot. Zero configuration. Add asp=true and Scrapfly handles Cloudflare, DataDome, Akamai, PerimeterX, and more - automatically.

One parameter. Every protection.
Pay only for successful requests.

  • Zero configuration. Set asp=true - the right TLS fingerprint, proxy pool, and challenge strategy are chosen automatically for every target.
  • Moving from another unblocker? We publish migration guides for Bright Data Unblocker and ZenRows, and most teams port their code in a morning.
1,000 free credits. No credit card required.

1

parameter to enable full bypass: asp=true

8+

enterprise anti-bot systems bypassed

98%

success rate on Cloudflare-protected targets

0

credits charged for failed requests


CAPABILITIES

Everything Solved Server-Side

No proxy management, no fingerprint tuning, no challenge integrations. One parameter does it all.

The Bypass Pipeline

Every request with asp=true flows through an automatic sequence. Vendor detection is pattern-based and happens before any retry. Fingerprint coherence spans TLS (JA3/JA4), HTTP/2 SETTINGS, HTTP/3 QUIC params, browser runtime, and behavioral signals. Failed bypass retries do not bill.

Your Request: any SDK or raw HTTP - add asp=true, nothing else changes
Vendor Detection: response patterns identify Cloudflare, DataDome, Akamai, PerimeterX, Kasada, Imperva, F5, AWS WAF automatically
Fingerprint Build: TLS (JA3/JA4), HTTP/2 SETTINGS, HTTP/3 QUIC, browser runtime, behavioral signals - all coherent per profile
Challenge Solve: JS challenges, Turnstile, puzzle captchas, FunCaptcha, GeeTest - server-side, free retries on failure
Replay: original request replayed with the solved session and proxy auto-upgraded to residential if needed
Result: real page HTML, headers, cookies - plus log_url for replay and HAR inspection
JA3/JA4: TLS fingerprint
HTTP/2: SETTINGS frame
HTTP/3: QUIC transport
Behavioral: mouse + scroll
Retries: free on fail
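The sequence above can be sketched in Python. Every step function below is a stand-in for server-side logic that this page only describes; the sketch exists purely to make the retry-and-billing control flow concrete:

```python
# Self-contained sketch of the asp=true control flow. Each step
# function is a placeholder for Scrapfly's server-side logic; only
# the retry/billing shape is illustrated.

def detect_vendor(request):
    # stand-in: real detection matches response patterns
    return "Cloudflare"

def build_fingerprint(vendor):
    # stand-in: real build emits a coherent TLS/HTTP2/QUIC/runtime profile
    return {"vendor": vendor, "profile": "chrome-like"}

def solve_and_replay(request, profile, attempt):
    # stand-in: challenge solve + replay; succeeds on the 2nd try here
    return {"ok": attempt >= 2, "html": "<html>...</html>"}

def bypass(request, max_attempts=3):
    profile = build_fingerprint(detect_vendor(request))
    for attempt in range(1, max_attempts + 1):
        resp = solve_and_replay(request, profile, attempt)
        if resp["ok"]:
            return {"billed": True, "attempts": attempt, **resp}
        # failed attempt: retried for free, no credits consumed
    return {"billed": False, "attempts": max_attempts}

result = bypass({"url": "https://example.com"})
print(result["attempts"])  # 2: the failed first attempt was not billed
```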

View ASP docs →

Anti-Bot Coverage

Vendor detection is automatic - you do not configure which system is in play. The right bypass strategy is selected from response patterns.

8+ systems covered
auto vendor detection
1 parameter to enable
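As an illustration of what pattern-based detection looks like, here is a toy matcher over a few publicly known vendor markers. The markers are assumptions drawn from common knowledge, not Scrapfly's actual rule set, which runs server-side and is far more extensive:

```python
# Toy pattern-based vendor detection. The markers are well-known
# public signals; the real server-side rules are more extensive.

def detect_vendor(headers: dict, cookies: dict, body: str = ""):
    headers = {k.lower(): v for k, v in headers.items()}
    if "cf-ray" in headers or "cf_clearance" in cookies:
        return "Cloudflare"
    if "datadome" in cookies:
        return "DataDome"
    if "_abck" in cookies or "ak_bmsc" in cookies:
        return "Akamai"
    if any(name.startswith("_px") for name in cookies):
        return "PerimeterX"
    if "reese84" in body:
        return "Imperva"
    return None  # unknown: fall back to a generic strategy

print(detect_vendor({"CF-RAY": "8c52-EWR"}, {}))  # Cloudflare
print(detect_vendor({}, {"_abck": "..."}))        # Akamai
```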

Browse all bypass guides →

Pay Only For Success

Credits are consumed only when a target returns a usable response. Timeouts, upstream errors, and challenges that could not be solved cost nothing. ASP may auto-upgrade the proxy pool to residential when required - the final cost reflects the pool actually used.

0 cost on failure
auto retry logic
fair billing
shared credit pool
1 plan, all products
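The rule reduces to a simple function. The credit amounts below are invented placeholders, not Scrapfly's actual price list; only the zero-on-failure shape is the point:

```python
# Illustrative billing rule: failures cost nothing, and the final
# cost reflects the proxy pool actually used. The credit amounts
# are invented placeholders, not real Scrapfly pricing.

POOL_CREDITS = {"datacenter": 1, "residential": 25}  # hypothetical

def credits_charged(success: bool, pool_used: str) -> int:
    if not success:  # block page, timeout, or unsolved challenge
        return 0
    return POOL_CREDITS[pool_used]

print(credits_charged(False, "residential"))  # 0: failures never bill
print(credits_charged(True, "datacenter"))    # 1
```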

Fingerprint Coherence

A single layer mismatch is enough to trigger a block. The bypass stack patches every signal that modern detection systems cross-check.

TLS: JA3/JA4 cipher suites, extensions, elliptic curves match reference Chrome
HTTP/2: SETTINGS frame order, window sizes, HPACK header ordering
HTTP/3: QUIC transport params, stream priorities match Chrome QUIC
Browser Runtime: navigator, WebGL, canvas, audio - coherent per profile
Behavioral Signals: cursor pathing, scroll velocity, timing patterns
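The cross-check idea can be shown with a toy coherence test; the profile layout and field names below are hypothetical:

```python
# Toy fingerprint-coherence check: detection systems cross-check
# layers, so every layer must describe the same browser. The
# profile layout and field names here are hypothetical.

def coherent(profile: dict) -> bool:
    claims = {layer["claims"] for layer in profile.values()}
    return len(claims) == 1  # one mismatched layer is enough to block

chrome = {
    "tls":     {"claims": "chrome-120", "detail": "JA3/JA4"},
    "http2":   {"claims": "chrome-120", "detail": "SETTINGS order"},
    "runtime": {"claims": "chrome-120", "detail": "navigator, WebGL"},
}
mixed = dict(chrome, tls={"claims": "curl", "detail": "JA3/JA4"})

print(coherent(chrome))  # True
print(coherent(mixed))   # False: the TLS layer betrays the profile
```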

Challenge Types Handled

Challenges are solved server-side without any extra integration. All challenge solving is part of the asp=true flow at no additional cost.

JS challenges
Turnstile
Puzzle captchas
FunCaptcha
Slider captchas
GeeTest
Server-side: no client integration
Free retries: failed solves don't bill
No extra cost: included in asp=true

Full Observability, Out Of The Box

Every request returns a log_url. Inspect the full request and response: headers, cookies, rendered HTML, screenshots, HAR waterfall. Replay with one click. Debug blocked requests without guesswork.

HAR waterfall
1-click replay
Screenshot per request
Live monitoring

190+ Country Proxies

Residential and datacenter proxy pools with auto-rotation, session stickiness, and geo-targeting. Country, region, or city - one parameter per level. The ASP stack upgrades to residential automatically when a target requires it.

190+ countries
auto pool upgrade

Migrating From Another Unblocker

Already using another tool? Step-by-step migration guides with side-by-side code samples. Most teams port over in a morning.

Persistent Sessions

Maintain cookie jars and auth state across requests. Combine with asp=true for logged-in scraping sessions that stay alive as long as you need.

cookie jar persistence
auth state across calls

Not Sure Which Anti-Bot Protects Your Target?

Run the Antibot Detector first. It identifies the active protection vendor from a URL so you can pick the right strategy before writing any code.

Run the detector →

CODE

One Line to Bypass Anti-Bot

The simplest entry point. Enable asp=true in any language.

Cloudflare, DataDome, Akamai, PerimeterX, and more. See the full bypass catalog.

from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse
client = ScrapflyClient(key="API KEY")

api_response: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://httpbin.dev/html',
        # bypass anti-scraping protection
        asp=True
    )
)
print(api_response.result)
import { 
    ScrapflyClient, ScrapeConfig 
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });
let api_result = await client.scrape(
    new ScrapeConfig({
        url: 'https://httpbin.dev/html',
        // bypass anti-scraping protection
        asp: true,
    })
);
console.log(api_result.result);
http https://api.scrapfly.io/scrape \
key==$SCRAPFLY_KEY \
url==https://httpbin.dev/html \
asp==true
package main

import (
	"fmt"
	"log"

	"github.com/scrapfly/go-scrapfly"
)

func main() {
	client, err := scrapfly.New("API KEY")
	if err != nil {
		log.Fatal(err)
	}
	result, err := client.Scrape(&scrapfly.ScrapeConfig{
		URL: "https://httpbin.dev/html",
		// bypass anti-scraping protection
		ASP: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result.Result.Content)
}

Add render_js=true for JavaScript-heavy targets.

from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse
client = ScrapflyClient(key="API KEY")

api_response: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://web-scraping.dev/reviews',
        # enable the use of cloud browsers
        render_js=True,
        # wait for specific element to appear
        wait_for_selector=".review",
        # or wait set amount of time
        rendering_wait=3_000,  # 3 seconds
    )
)


print(api_response.result)
import { 
    ScrapflyClient, ScrapeConfig 
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });

let api_result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/reviews',
        // enable the use of cloud browsers
        render_js: true,
        // wait for specific element to appear
        wait_for_selector: ".review",
        // or wait set amount of time
        rendering_wait: 3_000,  // 3 seconds
    })
);

console.log(JSON.stringify(api_result.result));
http https://api.scrapfly.io/scrape \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/reviews \
render_js==true \
wait_for_selector==.review \
rendering_wait==3000

Pick residential or datacenter, select country, stay in-region.

from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse
client = ScrapflyClient(key="API KEY")

api_response: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://httpbin.dev/html',
        # choose proxy countries
        country="US,CA",
        # residential or datacenter proxies
        proxy_pool="public_residential_pool"
    )
)
print(api_response.result)
import { 
    ScrapflyClient, ScrapeConfig 
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });
let api_result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/product/1',
        // choose proxy countries
        country: "US,CA",
        // residential or datacenter proxies
        proxy_pool: "public_residential_pool"
    })
);
console.log(api_result.result);
http https://api.scrapfly.io/scrape \
key==$SCRAPFLY_KEY \
url==https://httpbin.dev/html \
country=="US,CA" \
proxy_pool=="public_residential_pool"

Keep cookies and auth state across requests.

from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse
client = ScrapflyClient(key="API KEY")

api_response: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://web-scraping.dev/product/1',
        # add unique identifier to start a session
        session="mysession123",
    )
)

# resume session
api_response2: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://web-scraping.dev/product/1',
        session="mysession123",
        # sessions can be shared between browser and http requests
        # render_js = True,   # enable browser for this session
    )
)
print(api_response2.result)
import { 
    ScrapflyClient, ScrapeConfig 
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });

let api_result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/product/1',
        // add unique identifier to start a session
        session: "mysession123",
    })
);

// resume session
let api_result2 = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/product/1',
        session: "mysession123",
        // sessions can be shared between browser and http requests
        // render_js: true,   // enable browser for this session
    })
);
console.log(JSON.stringify(api_result2.result));
# start session
http https://api.scrapfly.io/scrape \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/product/1 \
session==mysession123

# resume session
http https://api.scrapfly.io/scrape \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/product/1 \
session==mysession123

LEARN

Docs, Bypass Guides, And Tools

Everything you need to go from zero to production anti-bot bypass.

ASP Docs

Full reference for asp=true: parameters, retry behavior, supported systems.

Developer Docs →

Bypass Guides

Per-system deep-dives: Cloudflare, DataDome, Akamai, PerimeterX, and more.

Browse guides →

Academy

Interactive courses on anti-bot, web scraping fundamentals, and proxy management.

Start learning →

Fingerprint Tools

JA3 checker, TLS fingerprint tester, canvas fingerprint, HTTP/2 inspector.

Browse tools →

// INTEGRATIONS

Seamlessly integrate with frameworks & platforms

Plug Scrapfly into your favorite tools, or build custom workflows with our first-class SDKs.


FAQ

Frequently Asked Questions

What is Scrapfly Unblocker?

Scrapfly Unblocker is the asp=true parameter exposed as a focused product. Set it on any scrape request and the API automatically selects the right TLS fingerprint, proxy pool, browser profile, and challenge-handling strategy for the target. Cloudflare, DataDome, Akamai, PerimeterX, Kasada, Imperva, F5, and AWS WAF are all handled server-side. Your code stays unchanged.

How is it different from using a proxy?

A proxy forwards traffic through a different IP. Unblocker does that and far more: it rotates TLS fingerprints (JA3/JA4), injects correct browser headers, solves JavaScript challenges, handles CAPTCHAs, and validates that the response is the actual page and not a block page. A bare proxy fails as soon as a site checks fingerprints or issues a challenge. Unblocker keeps going.

Which anti-bot systems are supported?

Cloudflare (JS Challenge, Turnstile, 5-second shield), DataDome (device check, slider), Akamai (Bot Manager, sensor data), PerimeterX/HUMAN (press-and-hold), Kasada (proof-of-work), Imperva/Incapsula (reese84), F5 BIG-IP (ASM, Shape Security), and AWS WAF (Bot Control). See the bypass guides for per-system detail.

Does it solve CAPTCHAs automatically?

Yes. reCAPTCHA v2/v3, Turnstile, puzzle-click captchas, FunCaptcha, and GeeTest are solved server-side with no extra integration and no extra cost. CAPTCHA solving is part of the asp=true flow.

What do I get charged for?

Only successful responses. If the target returns a block page, a timeout, or an upstream error, the request costs zero credits. Retries are free. You pay for data received, not attempts made.

Can I migrate from Bright Data Unblocker or ZenRows?

Yes. The API shape is different, but the concepts (URL, key, anti-bot flag, proxy selection) are the same. We publish dedicated migration guides with side-by-side code samples, mapping each parameter you use today to its Scrapfly equivalent. See the Bright Data migration guide and the ZenRows migration guide for the walkthrough.
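As a rough sketch of what such a mapping can look like in code: the ZenRows parameter names below are assumptions for illustration only; the official migration guides hold the authoritative table.

```python
# Illustrative parameter translation for a ZenRows-style request.
# The ZenRows parameter names are assumptions for illustration;
# see the official migration guides for the authoritative mapping.

def to_scrapfly(params: dict) -> dict:
    out = {"url": params["url"]}
    if params.get("antibot"):        # anti-bot flag -> asp
        out["asp"] = True
    if params.get("js_render"):      # browser rendering -> render_js
        out["render_js"] = True
    if params.get("premium_proxy"):  # residential pool selection
        out["proxy_pool"] = "public_residential_pool"
    return out

print(to_scrapfly({"url": "https://example.com", "antibot": True}))
```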

Is web scraping legal?

Scraping publicly accessible data is generally legal in most jurisdictions - rulings in Meta v. Bright Data and hiQ v. LinkedIn support this, though outcomes remain fact-specific. You are responsible for respecting robots.txt, rate limits, and target terms of service. See our legal overview for details.


// PRICING

Transparent, usage-based pricing

One plan covers the full Scrapfly platform. Pick a monthly credit budget; every API shares the same credit pool. No per-product lock-in, no surprise line items.

Free tier

1,000 free credits on signup. No credit card required.

Pay on success

You only pay for successful requests. Failed calls are free.

No lock-in

Upgrade, downgrade, or cancel anytime. No contract.

Need more than asp=true? The full stack is one API.

Unblocker is the simplest entry point. Every layer is available on the same plan: Web Scraping API for the batteries-included experience, Browser API for hosted Chrome with Playwright / Puppeteer, Extraction API for structured output from any HTML, Crawler API for multi-page crawls at scale, Scrapium for stealth Chromium direct access, Curlium for byte-perfect HTTP. Not sure what the target uses? Detect the antibot first, then pick the right tool from our bypass catalog.

Get Free API Key
1,000 free credits. No card.