// PRODUCT

# The Best Antibot Detector

 Detect which antibot system protects any website with Scrapfly's free antibot detector. Point it at any URL and instantly identify Cloudflare, DataDome, Akamai, and 20+ more. Chrome extension or REST API, no sign-up required.

## Detects Cloudflare, DataDome, Akamai, and 20+ more

Free forever. Open source.

- **Instant signal analysis.** Reads response headers, cookie names, JS challenge patterns, and TLS fingerprints to identify the exact antibot vendor protecting a site.
- **Two ways to use it.** Install the Chrome extension for passive browser-based detection, or call the REST endpoint from any script or CI pipeline.
 
 [ Get Free API Key ](https://scrapfly.io/register) [  Install Extension ](https://chromewebstore.google.com/detail/scrapfly/pdpakdgmjhkfimgaihlgaaiijlbilkca) 

 1,000 free credits. No credit card required. 

 






 

 

---

## 26+

antibot vendors detected automatically

 



 

## 4

signal layers - headers, cookies, JS, TLS

 



 

## 100%

local detection - no data leaves your browser

 



 

## Free

open source, NPOSL-3.0, always free

 



 

 

 

---

// DETECTION

## Every Major Antibot, Identified on Sight

Headers, cookies, JS challenges, and TLS fingerprints - analyzed together for a confident match.

 

 ### Detection Pipeline

Point the detector at any URL. It fetches the target with a real browser profile, collects evidence across four signal layers, and matches the fingerprint against known vendor signatures. Result: vendor name, confidence score, matched signals, and a recommended bypass path - all in one response.

  **URL input** any target, no sign-up 

  **Probe fetch** headers, cookies, body, TLS 

  **Signal match** 26+ vendor signatures 

  **Confidence score** per matched evidence layer 

 

  **URL Input** target URL submitted via Chrome extension toolbar or REST endpoint 

 

  **Fetch + Probes** HTTP response headers, Set-Cookie names, response body scan, TLS handshake metadata 

 

  **Fingerprint Signals** `cf-ray`, `_abck`, `datadome` cookie, challenge JS src, `x-kasada-pow` header, sensor script paths 

 

  **Vendor Match** signals scored against 26+ vendor signature sets; highest confidence wins 

 

  **Confidence + Recommendation** vendor name, confidence level, matched signals, link to bypass guide for that vendor 
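The probe stage above boils down to reducing a raw HTTP response to named signals per evidence layer. Here is a minimal sketch in Python — `collect_evidence` is a hypothetical helper written for this page, not the extension's actual code, and the body-scan marker strings are illustrative:

```python
def collect_evidence(headers, body):
    """Reduce raw response data to the signal layers the matcher consumes."""
    evidence = {"headers": [], "cookies": [], "js": []}
    for name, value in headers:
        lname = name.lower()
        if lname == "set-cookie":
            # The cookie *name* is the signal (e.g. cf_clearance, datadome).
            evidence["cookies"].append(value.split("=", 1)[0].strip())
        else:
            evidence["headers"].append(lname)
    # Crude body scan for challenge-script markers (illustrative strings).
    for marker in ("cdn-cgi/challenge-platform", "captcha-delivery.com"):
        if marker in body:
            evidence["js"].append(marker)
    return evidence

headers = [
    ("Server", "cloudflare"),
    ("cf-ray", "8f2a0c1d2e3f-AMS"),
    ("Set-Cookie", "cf_clearance=abc123; Path=/; Secure"),
]
print(collect_evidence(headers, "<html></html>"))
# {'headers': ['server', 'cf-ray'], 'cookies': ['cf_clearance'], 'js': []}
```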

 

 

 [Cloudflare](https://scrapfly.io/bypass/cloudflare) 

 [DataDome](https://scrapfly.io/bypass/datadome) 

 [Akamai](https://scrapfly.io/bypass/akamai) 

 [PerimeterX](https://scrapfly.io/bypass/perimeterx) 

 [Kasada](https://scrapfly.io/bypass/kasada) 

 [Imperva](https://scrapfly.io/bypass/incapsula) 

 [F5](https://scrapfly.io/bypass/f5) 

 [AWS WAF](https://scrapfly.io/bypass/aws-waf) 

 Arkose Labs 

 Shape Security 

 Queue-it 

 Reblaze 

 

[View full bypass catalog →](https://scrapfly.io/bypass)

 



 

 

 ### Multi-Layer Signal Analysis

Detection evaluates four independent evidence layers. A single cookie name may indicate a vendor; header plus cookie plus JS challenge pattern is a confident match. Confidence is scored per layer and combined.

  **Headers** `cf-ray`, `x-kasada-pow`, `server` 

  **Cookies** `cf_clearance`, `_abck`, `datadome` 

  **JS body** challenge scripts, sensor iframe src 

  **TLS** JA3/JA4 fingerprint patterns 
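The layered scoring described above can be sketched as a small matcher. The signature table below is hypothetical and intentionally tiny — the real rule set lives in the open-source extension — but the scoring idea is the one this section describes: more independent layers agreeing on the same vendor means higher confidence.

```python
# Hypothetical, abbreviated signature table; the signal names come from
# the examples on this page, not from the extension's real rule set.
SIGNATURES = {
    "cloudflare": {"headers": ["cf-ray"], "cookies": ["cf_clearance"]},
    "datadome":   {"cookies": ["datadome"]},
    "kasada":     {"headers": ["x-kasada-pow"]},
}

def match_vendor(evidence):
    """Score vendors by how many independent evidence layers matched."""
    scores = {}
    for vendor, sig in SIGNATURES.items():
        layers_hit = sum(
            1 for layer, needles in sig.items()
            if any(n in evidence.get(layer, []) for n in needles)
        )
        if layers_hit:
            scores[vendor] = layers_hit
    if not scores:
        return {"detected": "none", "confidence": "n/a"}
    best = max(scores, key=scores.get)
    return {
        "detected": best,
        # Two or more layers agreeing is a confident match.
        "confidence": "high" if scores[best] >= 2 else "medium",
    }

print(match_vendor({"headers": ["cf-ray"], "cookies": ["cf_clearance"]}))
# {'detected': 'cloudflare', 'confidence': 'high'}
```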

 

[JA3 checker](https://scrapfly.io/web-scraping-tools/ja3-fingerprint)

[HTTP/2 inspector](https://scrapfly.io/web-scraping-tools/http2-fingerprint)

[Lies detector](https://scrapfly.io/web-scraping-tools/lies-detector)

[All tools](https://scrapfly.io/web-scraping-tools)

 

 



 

 ### Chrome Extension

Install once. Detection runs passively on every page load and shows a badge on the toolbar icon. Click for a full breakdown with confidence scores and the exact signals that triggered each match.

  **Passive** runs on every load 

  **12h cache** LRU, in-browser only 

 

confidence scores

signal breakdown

per-domain history

export results

 

 [  Install Extension ](https://chromewebstore.google.com/detail/scrapfly/pdpakdgmjhkfimgaihlgaaiijlbilkca)

 



 

 

 ### REST API Endpoint

Call the same detection engine from a script, CI job, or scraper bootstrapper. `GET /api/antibot-detector?url=...` returns a JSON object with vendor name, confidence level, and the signals that produced the verdict.

  **Single call** one HTTP request 

  **JSON output** vendor, confidence, signals 

 

vendor field

confidence level

signal list

bypass hint

 

[View API docs →](https://scrapfly.io/docs/tools/antibot-detector)

 



 

 ### Browser Fingerprinting Exposure

Beyond vendor detection, the extension exposes which browser fingerprinting surfaces a target site reads - Canvas, WebGL, AudioContext, WebRTC, and more. Useful for privacy research, browser-hardening audits, and understanding what stealth a scraper needs.

 [Canvas](https://scrapfly.io/web-scraping-tools/canvas-fingerprint) 

 [WebGL](https://scrapfly.io/web-scraping-tools/webgl-fingerprint) 

 [Audio](https://scrapfly.io/web-scraping-tools/audio-fingerprint) 

 [WebRTC](https://scrapfly.io/web-scraping-tools/webrtc-leak) 

 [Fonts](https://scrapfly.io/web-scraping-tools/fonts) 

 [Navigator](https://scrapfly.io/web-scraping-tools/lies-detector) 

 [Codecs](https://scrapfly.io/web-scraping-tools/media-codecs) 

 [DRM](https://scrapfly.io/web-scraping-tools/drm-capabilities) 

 

 



 

 

 ### Open Source

Full source on GitHub under NPOSL-3.0. Submit PRs to add new vendor signatures, improve accuracy, or fix false positives. No black box.

**NPOSL-3.0** license

**GitHub** open PRs welcome

 

[View on GitHub →](https://github.com/scrapfly/Antibot-Detector)

 



 

 ### 100% Local, Zero Telemetry

All analysis runs in your browser. No browsing history, no URL list, no results are sent anywhere. Verifiable via the public source code.

**In-browser** no server calls

**Verifiable** open source

 

 



 

 ### Detection History

All detections are stored locally in the extension. Browse past results by domain, filter by vendor, and export the dataset for offline analysis or reporting.

**Local store** stays in browser

**Export** CSV / JSON

 

 



 

 

 ### Built for Engineers Diagnosing Blocks

The detector is step zero in any scraping workflow that targets a protected site. Know the vendor, then pick the right tool.

 SCRAPERS **Diagnose before you code** Identify the antibot before building your scraper so you choose the right HTTP client, proxy pool, and fingerprint strategy from the start. 

 

 SECURITY RESEARCH **Vendor coverage audits** Map which antibot stacks appear across a domain portfolio. Export the detection log for reporting or peer review. 

 

 CI PIPELINES **Pre-scrape gating** Call the REST endpoint at pipeline start. If the target changed vendors, route to the correct bypass strategy before any credits are spent. 
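The pre-scrape gating pattern above fits in a short stdlib-only script. Note this is a sketch: the endpoint path mirrors the examples on this page, and the `EXPECTED_VENDOR` environment variable is a convention invented here for illustration.

```python
import json
import os
import sys
import urllib.parse
import urllib.request

def fetch_verdict(url, api_key):
    """Ask the detector which antibot protects `url`."""
    qs = urllib.parse.urlencode({"url": url, "key": api_key})
    endpoint = "https://api.scrapfly.io/tools/antibot-detector?" + qs
    with urllib.request.urlopen(endpoint, timeout=30) as resp:
        return json.load(resp)

def vendor_changed(verdict, expected):
    """True when the detected vendor differs from what the pipeline expects."""
    return verdict.get("detected", "none") != expected

if __name__ == "__main__":
    key = os.environ.get("SCRAPFLY_KEY")
    if key:  # only call the API when a key is configured
        verdict = fetch_verdict("https://example.com", key)
        if vendor_changed(verdict, os.environ.get("EXPECTED_VENDOR", "none")):
            print("antibot vendor changed:", verdict.get("detected"))
            sys.exit(1)  # stop the pipeline before any credits are spent
```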

 

 

 



 

 

 ### Detect. Then Bypass.

The detector points you to the right tool. Each product below targets the full bypass workflow for a different engineering profile.

 [ // UNBLOCKER **One-flag bypass** Add `asp=true`. Scrapfly detects the vendor automatically and routes through the matching bypass stack. ](https://scrapfly.io/products/unblocker) 

 [ // WEB SCRAPING API **Full pipeline, one endpoint** Proxies, browser rendering, JS execution, extraction, and bypass - all composable on `/scrape`. ](https://scrapfly.io/products/web-scraping-api) 

 [ // SCRAPIUM **Anti-detect Chromium** 4,000+ fingerprint signals patched at C++ level. Drive with Playwright, Puppeteer, or Selenium against the most protected targets. ](https://scrapfly.io/scrapium) 

 [ // CURLIUM **Byte-perfect HTTP client** Patched curl with BoringSSL and nghttp3. JA4, HTTP/2 SETTINGS, and QUIC transport params match reference Chrome exactly. ](https://scrapfly.io/curlium) 

 

 



 

 

 ### Vendor Coverage

The signature database covers the most widely deployed antibot platforms in production use today.

**26+** vendors detected

**12** major platforms

 

 



 

 ### Signal Types

Four independent evidence channels are evaluated and combined. More layers matching the same vendor raises confidence.

**4** signal layers

**Headers** Cookies, JS, TLS

 

 



 

 ### Detection Speed

The extension uses a 12-hour domain cache and LRU pattern matching. Repeated page loads cost near-zero compute inside the browser.

**12h** domain cache TTL

**Async** non-blocking

 

 



 

 

 

---

// CODE

## Detect Antibots in One Request

Call the same detection engine from any language. No SDK needed.

 

Pass a URL, get back the antibot vendor + confidence score. Then see the [bypass catalog](https://scrapfly.io/bypass) for the right tool.

Python · TypeScript · HTTP / cURL

 ```
import os

import requests

result = requests.get(
    'https://api.scrapfly.io/tools/antibot-detector',
    params={
        'url': 'https://example.com',
        'key': os.getenv('SCRAPFLY_KEY'),
    },
).json()
# e.g. {"detected": "cloudflare", "confidence": "high"}
print(result['detected'])
```

 ```
// Detect antibot system used by a target URL
const response = await fetch(
    'https://api.scrapfly.io/tools/antibot-detector?' +
    new URLSearchParams({
        url: 'https://example.com',
        key: Deno.env.get('SCRAPFLY_KEY') ?? '',
    })
);
const result = await response.json();
console.log(result.detected); // e.g. "cloudflare" or "none"
```

 ```
curl -s -G "https://api.scrapfly.io/tools/antibot-detector" \
  --data-urlencode "url=https://example.com" \
  --data-urlencode "key=$SCRAPFLY_KEY" | jq .detected
```

 

 

 [ Python SDK docs → ](https://scrapfly.io/docs/sdk/python) [ TypeScript SDK docs → ](https://scrapfly.io/docs/sdk/typescript) [ HTTP API docs → ](https://scrapfly.io/docs) 

 

 

 

---

// LEARN

## Understand Every Antibot You Encounter

Detection is step one. Our docs and guides take you the rest of the way.

 

 ### API Reference

Full docs for the antibot detector REST endpoint, response schema, and signal definitions.

 [ Developer Docs → ](https://scrapfly.io/docs/tools/antibot-detector) 



 

 ### Academy

Interactive courses on anti-bot systems, how they work, and how scrapers bypass them.

 [ Start learning → ](https://scrapfly.io/academy) 



 

 ### Extension Source

Read the detection rules, contribute new vendor signatures, or fork and customize for your workflow.

 [ View on GitHub → ](https://github.com/scrapfly/Antibot-Detector) 



 

 ### Developer Tools

JA3/JA4 checker, TLS fingerprint inspector, HTTP/2 analyzer, and more - all free in the toolbox.

 [ Browse tools → ](https://scrapfly.io/web-scraping-tools) 



 

 

 

---

// INTEGRATIONS

## Seamlessly integrate with frameworks & platforms

Plug Scrapfly into your favorite tools, or build custom workflows with our first-class SDKs.

 ### No-code automation

 [  Zapier ](https://scrapfly.io/integration/zapier) [  Make ](https://scrapfly.io/integration/make) [  n8n ](https://scrapfly.io/integration/n8n) 

 

### LLM & RAG frameworks

 [  LlamaIndex ](https://scrapfly.io/integration/llamaindex) [  LangChain ](https://scrapfly.io/integration/langchain) [  CrewAI ](https://scrapfly.io/integration/crewai) 

 

### First-class SDKs

 [  Python pip install scrapfly-sdk ](https://scrapfly.io/docs/sdk/python) [  TypeScript Node, Deno, Bun ](https://scrapfly.io/docs/sdk/typescript) [  Go go get scrapfly-sdk ](https://scrapfly.io/docs/sdk/golang) [  Rust cargo add scrapfly-sdk ](https://scrapfly.io/docs/sdk/rust) [  Scrapy Full-feature extension ](https://scrapfly.io/docs/sdk/scrapy) 

 

 

 [ See all integrations  ](https://scrapfly.io/integration) 

 

---

// FAQ

## Frequently Asked Questions

 

  ### What is Antibot Detector?

 Antibot Detector is a free Chrome extension and REST API tool that identifies which antibot system protects a given website. It analyzes response headers, cookie names, JavaScript challenge patterns, and TLS fingerprints to name the exact vendor - Cloudflare, DataDome, Akamai, PerimeterX, Kasada, Imperva, F5, AWS WAF, and 18+ more.

 

   ### Does it collect or send any of my browsing data?

 No. All detection runs locally inside your browser. The extension does not send browsing history, URLs, or analysis results to any external server. You can verify this claim by reading the open-source code on [GitHub](https://github.com/scrapfly/Antibot-Detector).

 

   ### Is it really free?

 Yes. The Chrome extension is free for personal and non-profit use under the NPOSL-3.0 license. The REST API endpoint is available to Scrapfly users at no extra cost. Commercial redistribution of the extension requires a separate license.

 

   ### How does it detect Cloudflare vs DataDome vs others?

 Each antibot vendor leaves characteristic traces. Cloudflare sets a `cf-ray` header and a `cf_clearance` cookie. DataDome injects a `datadome` cookie and a specific interstitial redirect. Akamai's sensor data collection script has a distinctive URL pattern. Kasada sends an `x-kasada-pow` challenge header. The detector checks all known signals in parallel and returns a confidence-scored match.

 

   ### Can I use the detector in a CI pipeline or automated script?

 Yes. The REST endpoint `GET /api/antibot-detector?url=<target>&key=<your-key>` returns a JSON object with the detected vendor, confidence, and matched signals. You can call it from Python, shell, TypeScript, or any HTTP client before building a scraper for a new target.

 

   ### What should I do after I detect an antibot?

 Use Scrapfly's [Web Scraping API](https://scrapfly.io/products/web-scraping-api) with `asp=true`. The same engine that powers the detector also tells Scrapfly's ASP which bypass strategy to apply - the correct TLS fingerprint, proxy pool, and challenge-handling path for that specific vendor.

 

   ### Does it slow down my browser?

 No noticeable impact. The extension uses a 12-hour domain cache and an LRU pattern cache that reduces redundant work by 60-80%. Detection runs asynchronously after page load, never blocking rendering.
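The caching scheme this answer describes (12-hour TTL plus LRU eviction) can be modelled in a few lines. This is an illustrative sketch, not the extension's actual implementation:

```python
import time
from collections import OrderedDict

class TTLCache:
    """LRU cache with per-entry expiry, modelling the 12h domain cache."""

    def __init__(self, max_entries=256, ttl_seconds=12 * 3600):
        self.max_entries = max_entries
        self.ttl = ttl_seconds
        self._store = OrderedDict()  # domain -> (expires_at, verdict)

    def get(self, domain):
        entry = self._store.get(domain)
        if entry is None:
            return None
        expires_at, verdict = entry
        if time.time() >= expires_at:
            del self._store[domain]      # expired: drop and report a miss
            return None
        self._store.move_to_end(domain)  # mark as most recently used
        return verdict

    def put(self, domain, verdict):
        self._store[domain] = (time.time() + self.ttl, verdict)
        self._store.move_to_end(domain)
        if len(self._store) > self.max_entries:
            self._store.popitem(last=False)  # evict least recently used

cache = TTLCache(max_entries=2)
cache.put("example.com", {"detected": "cloudflare"})
print(cache.get("example.com"))  # {'detected': 'cloudflare'}
```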

 

  

 

  ---

### Free tool. Real bypass power one step away.

 Antibot Detector identifies what you are up against - always free. When you are ready to bypass it, Scrapfly's [Web Scraping API](https://scrapfly.io/products/web-scraping-api) handles Cloudflare, DataDome, Akamai, PerimeterX, and more with a single `asp=true` flag. Start with 1,000 free credits, no credit card required.

 

 [Get Free API Key](https://scrapfly.io/register) · 1,000 free credits. No card.

 

 

 

### Once you know the antibot, bypass it.

 Antibot Detector tells you what you are up against. The Scrapfly stack handles the rest: [Web Scraping API](https://scrapfly.io/products/web-scraping-api) for one-call bypass, [Unblocker](https://scrapfly.io/products/unblocker) when you just need `asp=true`, [Browser API](https://scrapfly.io/products/cloud-browser-api) for JS-heavy targets, [Scrapium](https://scrapfly.io/scrapium) for stealth Chromium you drive directly, [Curlium](https://scrapfly.io/curlium) for byte-perfect HTTP, [Extraction API](https://scrapfly.io/products/extraction-api) for structured output. See the full [bypass catalog](https://scrapfly.io/bypass) with per-vendor guides for [Cloudflare](https://scrapfly.io/bypass/cloudflare), [DataDome](https://scrapfly.io/bypass/datadome), [Akamai](https://scrapfly.io/bypass/akamai), and more.

 

 [ Explore the full stack ](https://scrapfly.io/products/web-scraping-api)