// PRODUCT

# The Best Web Unblocker

Bypass any anti-bot system with Scrapfly's web unblocker. Add `asp=true` to any request and Scrapfly handles Cloudflare, DataDome, Akamai, PerimeterX, and more automatically. One parameter, zero configuration.

## One parameter. Every protection.

Pay only for successful requests.

- **Zero configuration.** Set `asp=true` - the right TLS fingerprint, proxy pool, and challenge strategy are chosen automatically for every target.
- **Moving from another unblocker?** We publish migration guides for [Bright Data Unblocker](https://scrapfly.io/compare/brightdata-alternative) and [ZenRows](https://scrapfly.io/compare/zenrows-alternative), and most teams port their code in a morning.
 
 [ Get Free API Key ](https://scrapfly.io/register) [ Developer Docs ](https://scrapfly.io/docs/scrape-api/anti-scraping-protection) 

 1,000 free credits. No credit card required. 

 






 

 

---

- **1** parameter to enable full bypass: `asp=true`
- **8+** enterprise anti-bot systems bypassed
- **98%** success rate on Cloudflare-protected targets
- **0** credits charged for failed requests

---

CAPABILITIES

## Everything Solved Server-Side

No proxy management, no fingerprint tuning, no challenge integrations. One parameter does it all.

 

 ### The Bypass Pipeline

Every request with `asp=true` flows through an automatic sequence. Vendor detection is pattern-based and happens before any retry. Fingerprint coherence spans TLS (JA3/JA4), HTTP/2 SETTINGS, HTTP/3 QUIC params, browser runtime, and behavioral signals. Failed bypass retries do not bill.

1. **Your Request** - any SDK or raw HTTP; add `asp=true`, nothing else changes
2. **Vendor Detection** - response patterns identify Cloudflare, DataDome, Akamai, PerimeterX, Kasada, Imperva, F5, and AWS WAF automatically
3. **Fingerprint Build** - TLS (JA3/JA4), HTTP/2 SETTINGS, HTTP/3 QUIC, browser runtime, behavioral signals, all coherent per profile
4. **Challenge Solve** - JS challenges, Turnstile, puzzle captchas, FunCaptcha, GeeTest, solved server-side with free retries on failure
5. **Replay** - the original request is replayed with the solved session; the proxy is auto-upgraded to residential if needed
6. **Result** - real page HTML, headers, cookies, plus a `log_url` for replay and HAR inspection

 

 

  **[JA3/JA4](https://scrapfly.io/web-scraping-tools/ja3-fingerprint)** TLS fingerprint 

  **[HTTP/2](https://scrapfly.io/web-scraping-tools/http2-fingerprint)** SETTINGS frame 

  **HTTP/3** QUIC transport 

  **Behavioral** mouse + scroll 

  **Retries** free on fail 
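From the caller's side, the whole sequence collapses to one flag. A minimal Python sketch, mirroring the SDK examples further down this page (the URL is just a stand-in):

```
from scrapfly import ScrapeConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

# The only client-side change is asp=True - vendor detection,
# fingerprint build, challenge solving, and replay all run
# server-side before this call returns.
api_response = client.scrape(ScrapeConfig(
    url="https://web-scraping.dev/product/1",
    asp=True,
))
print(api_response.result)  # real page HTML, headers, cookies
```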

 

[View ASP docs →](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)

 



 

 

 ### Anti-Bot Coverage

Vendor detection is automatic - you do not configure which system is in play. The right bypass strategy is selected from response patterns.

 [Cloudflare](https://scrapfly.io/bypass/cloudflare) 

 [DataDome](https://scrapfly.io/bypass/datadome) 

 [Akamai](https://scrapfly.io/bypass/akamai) 

 [PerimeterX](https://scrapfly.io/bypass/perimeterx) 

 [Kasada](https://scrapfly.io/bypass/kasada) 

 [Imperva](https://scrapfly.io/bypass/incapsula) 

 [F5](https://scrapfly.io/bypass/f5) 

 [AWS WAF](https://scrapfly.io/bypass/aws-waf) 

 

**8+** systems covered

**auto** vendor detection

**1** parameter to enable

 

[Browse all bypass guides →](https://scrapfly.io/bypass)

 



 

 ### Pay Only For Success

Credits are consumed only when a target returns a usable response. Timeouts, upstream errors, and challenges that could not be solved cost nothing. ASP may auto-upgrade the proxy pool to residential when required - the final cost reflects the pool actually used.

**0** cost on failure

**auto** retry logic

**fair** billing

**shared** credit pool

**1 plan** all products
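In client code that policy looks roughly like this - a sketch that assumes, as the SDK examples on this page suggest, that an unrecoverable failure surfaces as a raised exception (a generic catch is used here; see the SDK docs for the exact error classes):

```
from scrapfly import ScrapeConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

try:
    api_response = client.scrape(ScrapeConfig(
        url="https://web-scraping.dev/product/1",
        asp=True,
    ))
    print(api_response.result)  # success - this request is billed
except Exception as err:
    # Block pages, timeouts, and unsolved challenges land here after
    # ASP's free retries are exhausted - zero credits are charged.
    print("scrape failed, nothing billed:", err)
```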

 

 



 

 

 ### Fingerprint Coherence

A single layer mismatch is enough to trigger a block. The bypass stack patches every signal that modern detection systems cross-check.

- **TLS - JA3/JA4** - cipher suites, extensions, elliptic curves match reference Chrome
- **HTTP/2 SETTINGS** - frame order, window sizes, HPACK header ordering
- **HTTP/3 QUIC** - transport params, stream priorities match Chrome QUIC
- **Browser Runtime** - navigator, WebGL, canvas, audio, coherent per profile
- **Behavioral Signals** - cursor pathing, scroll velocity, timing patterns

 

 

 



 

 ### Challenge Types Handled

Challenges are solved server-side without any extra integration. All challenge solving is part of the `asp=true` flow at no additional cost.

- JS challenges
- Turnstile
- Puzzle captchas
- FunCaptcha
- Slider captchas
- GeeTest

 

 

  **Server-side** no client integration 

  **Free retries** failed solves don't bill 

  **No extra cost** included in `asp=true` 

 

 



 

 

 ### Full Observability, Out Of The Box

Every request returns a `log_url`. Inspect the full request and response: headers, cookies, rendered HTML, screenshots, HAR waterfall. Replay with one click. Debug blocked requests without guesswork.

  **HAR** waterfall 

  **1-click** replay 

  **Screenshot** per request 

  **Live** monitoring 
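A sketch of pulling the log URL out of a response for debugging - the field path shown here (`result['result']['log_url']`) is an assumption about the raw API payload; check the response reference for the exact accessor:

```
from scrapfly import ScrapeConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.scrape(ScrapeConfig(
    url="https://web-scraping.dev/product/1",
    asp=True,
))

# Assumed field path: the monitoring page for this exact request,
# with headers, cookies, rendered HTML, screenshot, HAR waterfall,
# and one-click replay.
log_url = api_response.result["result"]["log_url"]
print("inspect this request at:", log_url)
```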

 

 



 

 ### 190+ Country Proxies

Residential and datacenter proxy pools with auto-rotation, session stickiness, and geo-targeting. Country, region, or city - one parameter per level. The ASP stack upgrades to residential automatically when a target requires it.

**190+** countries

**auto** pool upgrade

 

 



 

 

 ### Migrating From Another Unblocker

Already using another tool? Step-by-step migration guides with side-by-side code samples. Most teams port over in a morning.

- [Bright Data Unblocker →](https://scrapfly.io/compare/brightdata-alternative)
- [ZenRows →](https://scrapfly.io/compare/zenrows-alternative)
- [ScraperAPI →](https://scrapfly.io/compare/scraperapi-alternative)
- [Compare all →](https://scrapfly.io/compare)
 
 



 

 ### Persistent Sessions

Maintain cookie jars and auth state across requests. Combine with `asp=true` for logged-in scraping sessions that stay alive as long as you need.

**cookie** jar persistence

**auth** state across calls
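Combining the two is just setting both parameters - a sketch using the same `session` and `asp` options shown in the code section below (the login URL is a stand-in):

```
from scrapfly import ScrapeConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

# First request names the session and solves any challenge; the
# resulting cookies and auth state persist server-side.
client.scrape(ScrapeConfig(
    url="https://web-scraping.dev/login",
    session="my-logged-in-session",
    asp=True,
))

# Later requests reuse the session - the solved state and cookie
# jar carry over, so the target sees one continuous visitor.
api_response = client.scrape(ScrapeConfig(
    url="https://web-scraping.dev/product/1",
    session="my-logged-in-session",
    asp=True,
))
print(api_response.result)
```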

 

 



 

 ### Not Sure Which Anti-Bot Protects Your Target?

Run the Antibot Detector first. It identifies the active protection vendor from a URL so you can pick the right strategy before writing any code.

 [Run the detector →](https://scrapfly.io/products/antibot-detector) 



 

 

// RELATED PRODUCTS

 

 [ // WEB SCRAPING API **All-in-one scraping API** Anti-bot bypass plus JS rendering, AI extraction, proxy rotation, and crawling - composable on one endpoint. View docs → ](https://scrapfly.io/products/web-scraping-api) 

 [ // ANTIBOT DETECTOR **Identify the protection first** Enter a URL and get the active anti-bot vendor, challenge type, and recommended bypass approach back instantly. View docs → ](https://scrapfly.io/products/antibot-detector) 

 [ // SCRAPIUM **Browser-level stealth internals** Chromium patched at the C++ engine level. 4,000+ fingerprint signals rewritten across WebGL, Canvas, Navigator, and more. View docs → ](https://scrapfly.io/scrapium) 

 [ // CURLIUM **Low-level HTTP stealth** Patched curl fork with BoringSSL, nghttp3, and ngtcp2. JA4, HTTP/2 SETTINGS, QUIC transport params match reference Chrome exactly. View docs → ](https://scrapfly.io/curlium) 

 

 

---

CODE

## One Line to Bypass Anti-Bot

The simplest entry point. Enable `asp=true` in any language.

 

 [ Anti-Bot Bypass ](#ub-strat-asp) [ With JS Rendering ](#ub-strat-browsers) [ Geo Targeting ](#ub-strat-proxies) [ Persistent Sessions ](#ub-strat-session) 

Cloudflare, DataDome, Akamai, PerimeterX, and more. See the full [bypass catalog](https://scrapfly.io/bypass).

     Python TypeScript HTTP / cURL Go  

     

 ```
from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse
client = ScrapflyClient(key="API KEY")

api_response: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://httpbin.dev/html',
        # bypass anti-scraping protection
        asp=True
    )
)
print(api_response.result)
```

 ```
import { 
    ScrapflyClient, ScrapeConfig 
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });
let api_result = await client.scrape(
    new ScrapeConfig({
        url: 'https://httpbin.dev/html',
        // bypass anti-scraping protection
        asp: true,
    })
);
console.log(api_result.result);
```

 ```
http https://api.scrapfly.io/scrape \
key==$SCRAPFLY_KEY \
url==https://httpbin.dev/html \
asp==true
```

 ```
package main

import (
	"fmt"
	"github.com/scrapfly/go-scrapfly"
)

func main() {
	client, _ := scrapfly.New("API KEY")
	result, _ := client.Scrape(&scrapfly.ScrapeConfig{
		URL: "https://httpbin.dev/html",
		// bypass anti-scraping protection
		ASP: true,
	})
	fmt.Println(result.Result.Content)
}
```

 

 

 [ Python SDK docs → ](https://scrapfly.io/docs/sdk/python) [ TypeScript SDK docs → ](https://scrapfly.io/docs/sdk/typescript) [ HTTP API docs → ](https://scrapfly.io/docs) [ Go SDK docs → ](https://scrapfly.io/docs/sdk/golang) 

 

Add `render_js=true` for JavaScript-heavy targets.

     Python TypeScript HTTP / cURL  

    

 ```
from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse
client = ScrapflyClient(key="API KEY")

api_response: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://web-scraping.dev/reviews',
        # enable the use of cloud browsers
        render_js=True,
        # wait for specific element to appear
        wait_for_selector=".review",
        # or wait set amount of time
        rendering_wait=3_000,  # 3 seconds
    )
)


print(api_response.result)
```

 ```
import { 
    ScrapflyClient, ScrapeConfig 
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });

let api_result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/reviews',
        // enable the use of cloud browsers
        render_js: true,
        // wait for specific element to appear
        wait_for_selector: ".review",
        // or wait set amount of time
        rendering_wait: 3_000,  // 3 seconds
    })
);

console.log(JSON.stringify(api_result.result));
```

 ```
http https://api.scrapfly.io/scrape \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/reviews \
render_js==true \
wait_for_selector==.review \
rendering_wait==3000
```

 

 

 [ Python SDK docs → ](https://scrapfly.io/docs/sdk/python) [ TypeScript SDK docs → ](https://scrapfly.io/docs/sdk/typescript) [ HTTP API docs → ](https://scrapfly.io/docs) 

 

Pick residential or datacenter, select country, stay in-region.

     Python TypeScript HTTP / cURL  

    

 ```
from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse
client = ScrapflyClient(key="API KEY")

api_response: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://httpbin.dev/html',
        # choose proxy countries
        country="US,CA",
        # residential or datacenter proxies
        proxy_pool="public_residential_pool"
    )
)
print(api_response.result)
```

 ```
import { 
    ScrapflyClient, ScrapeConfig 
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });
let api_result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/product/1',
        // choose proxy countries
        country: "US,CA",
        // residential or datacenter proxies
        proxy_pool: "public_residential_pool"
    })
);
console.log(api_result.result);
```

 ```
http https://api.scrapfly.io/scrape \
key==$SCRAPFLY_KEY \
url==https://httpbin.dev/html \
country=="US,CA" \
proxy_pool=="public_residential_pool"
```

 

 

 [ Python SDK docs → ](https://scrapfly.io/docs/sdk/python) [ TypeScript SDK docs → ](https://scrapfly.io/docs/sdk/typescript) [ HTTP API docs → ](https://scrapfly.io/docs) 

 

Keep cookies and auth state across requests.

     Python TypeScript HTTP / cURL  

    

 ```
from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse
client = ScrapflyClient(key="API KEY")

api_response: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://web-scraping.dev/product/1',
        # add unique identifier to start a session
        session="mysession123",
    )
)

# resume session
api_response2: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://web-scraping.dev/product/1',
        session="mysession123",
        # sessions can be shared between browser and http requests
        # render_js = True,   # enable browser for this session
    )
)
print(api_response2.result)
```

 ```
import { 
    ScrapflyClient, ScrapeConfig 
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });

let api_result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/product/1',
        // add unique identifier to start a session
        session: "mysession123",
    })
);

// resume session
let api_result2 = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/product/1',
        session: "mysession123",
        // sessions can be shared between browser and http requests
        // render_js: true,   // enable browser for this session
    })
);
console.log(JSON.stringify(api_result2.result));
```

 ```
# start session
http https://api.scrapfly.io/scrape \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/product/1 \
session==mysession123

# resume session
http https://api.scrapfly.io/scrape \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/product/1 \
session==mysession123
```

 

 

 [ Python SDK docs → ](https://scrapfly.io/docs/sdk/python) [ TypeScript SDK docs → ](https://scrapfly.io/docs/sdk/typescript) [ HTTP API docs → ](https://scrapfly.io/docs) 

 

 

 

---

LEARN

## Docs, Bypass Guides, And Tools

Everything you need to go from zero to production anti-bot bypass.

 

 ### ASP Docs

Full reference for `asp=true`: parameters, retry behavior, supported systems.

 [ Developer Docs → ](https://scrapfly.io/docs/scrape-api/anti-scraping-protection) 



 

 ### Bypass Guides

Per-system deep-dives: Cloudflare, DataDome, Akamai, PerimeterX, and more.

 [ Browse guides → ](https://scrapfly.io/bypass) 



 

 ### Academy

Interactive courses on anti-bot, web scraping fundamentals, and proxy management.

 [ Start learning → ](https://scrapfly.io/academy) 



 

 ### Fingerprint Tools

JA3 checker, TLS fingerprint tester, canvas fingerprint, HTTP/2 inspector.

 [ Browse tools → ](https://scrapfly.io/web-scraping-tools) 



 

 

 

---

// INTEGRATIONS

## Seamlessly integrate with frameworks & platforms

Plug Scrapfly into your favorite tools, or build custom workflows with our first-class SDKs.

 ### No-code automation

 [  Zapier ](https://scrapfly.io/integration/zapier) [  Make ](https://scrapfly.io/integration/make) [  n8n ](https://scrapfly.io/integration/n8n) 

 

### LLM & RAG frameworks

 [  LlamaIndex ](https://scrapfly.io/integration/llamaindex) [  LangChain ](https://scrapfly.io/integration/langchain) [  CrewAI ](https://scrapfly.io/integration/crewai) 

 

### First-class SDKs

 [  Python pip install scrapfly-sdk ](https://scrapfly.io/docs/sdk/python) [  TypeScript Node, Deno, Bun ](https://scrapfly.io/docs/sdk/typescript) [  Go go get scrapfly-sdk ](https://scrapfly.io/docs/sdk/golang) [  Rust cargo add scrapfly-sdk ](https://scrapfly.io/docs/sdk/rust) [  Scrapy Full-feature extension ](https://scrapfly.io/docs/sdk/scrapy) 

 

 

 [ See all integrations  ](https://scrapfly.io/integration) 

 

---

FAQ

## Frequently Asked Questions

 

  ### What is Scrapfly Unblocker?

 Scrapfly Unblocker is the `asp=true` parameter exposed as a focused product. Set it on any scrape request and the API automatically selects the right TLS fingerprint, proxy pool, browser profile, and challenge-handling strategy for the target. Cloudflare, DataDome, Akamai, PerimeterX, Kasada, Imperva, F5, and AWS WAF are all handled server-side. Your code stays unchanged.

 

   ### How is it different from using a proxy?

 A proxy forwards traffic through a different IP. Unblocker does that and far more: it rotates TLS fingerprints (JA3/JA4), injects correct browser headers, solves JavaScript challenges, handles CAPTCHAs, and validates that the response is the actual page and not a block page. A bare proxy fails as soon as a site checks fingerprints or issues a challenge. Unblocker keeps going.

 

   ### Which anti-bot systems are supported?

Cloudflare (JS Challenge, Turnstile, 5-second shield), DataDome (device check, slider), Akamai (Bot Manager, sensor data), PerimeterX/HUMAN (press-and-hold), Kasada (proof-of-work), Imperva/Incapsula (reese84), F5 BIG-IP (ASM, Shape Security), and AWS WAF (Bot Control). See the [bypass guides](https://scrapfly.io/bypass) for per-system detail.

 

   ### Does it solve CAPTCHAs automatically?

 Yes. reCAPTCHA v2/v3, Turnstile, puzzle-click captchas, FunCaptcha, and GeeTest are solved server-side with no extra integration and no extra cost. CAPTCHA solving is part of the `asp=true` flow.

 

   ### What do I get charged for?

 Only successful responses. If the target returns a block page, a timeout, or an upstream error, the request costs zero credits. Retries are free. You pay for data received, not attempts made.

 

   ### Can I migrate from Bright Data Unblocker or ZenRows?

Yes. The API shape is different, but the concepts (URL, key, anti-bot flag, proxy selection) are the same. We publish dedicated migration guides with side-by-side code samples, mapping each parameter you use today to its Scrapfly equivalent. See the [Bright Data migration guide](https://scrapfly.io/compare/brightdata-alternative) and the [ZenRows migration guide](https://scrapfly.io/compare/zenrows-alternative) for the walkthrough.

 

   ### Is web scraping legal?

Scraping publicly accessible data is legal in most jurisdictions - Meta v. Bright Data and hiQ v. LinkedIn have established strong precedent. You are responsible for respecting robots.txt, rate limits, and target terms of service. See [our legal overview](https://scrapfly.io/is-web-scraping-legal) for details.

 

  

 

  ---

// PRICING

## Transparent, usage-based pricing

One plan covers the full Scrapfly platform. Pick a monthly credit budget; every API shares the same credit pool. No per-product lock-in, no surprise line items.

 

**Free tier** - 1,000 free credits on signup. No credit card required.

**Pay on success** - You only pay for successful requests. Failed calls are free.

**No lock-in** - Upgrade, downgrade, or cancel anytime. No contract.

 

 

 

 [ See pricing  ](https://scrapfly.io/pricing) [ Start free ](https://scrapfly.io/register) 

 

 

### Need more than `asp=true`? The full stack is one API.

 Unblocker is the simplest entry point. Every layer is available on the same plan: [Web Scraping API](https://scrapfly.io/products/web-scraping-api) for the batteries-included experience, [Browser API](https://scrapfly.io/products/cloud-browser-api) for hosted Chrome with Playwright / Puppeteer, [Extraction API](https://scrapfly.io/products/extraction-api) for structured output from any HTML, [Crawler API](https://scrapfly.io/products/crawler-api) for multi-page crawls at scale, [Scrapium](https://scrapfly.io/scrapium) for stealth Chromium direct access, [Curlium](https://scrapfly.io/curlium) for byte-perfect HTTP. Not sure what the target uses? [Detect the antibot](https://scrapfly.io/products/antibot-detector) first, then pick the right tool from our [bypass catalog](https://scrapfly.io/bypass).
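Because every layer rides the same endpoint, composing them is just stacking parameters - a sketch combining the bypass, rendering, and geo options that appear individually in the code section above:

```
from scrapfly import ScrapeConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

# One request, several layers: anti-bot bypass, a cloud browser to
# render JavaScript, and a residential US/CA exit.
api_response = client.scrape(ScrapeConfig(
    url="https://web-scraping.dev/reviews",
    asp=True,
    render_js=True,
    wait_for_selector=".review",
    country="US,CA",
    proxy_pool="public_residential_pool",
))
print(api_response.result)
```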

 

[Get Free API Key](https://scrapfly.io/register) - 1,000 free credits. No card.