Bypass DataDome
96% success on DataDome-protected targets. One API parameter.
ML score below threshold. datadome cookie valid. Page cleared.
- Real browser fingerprints. TLS, HTTP/2, Canvas, and WebGL all match real Chrome on every request.
- Adaptive challenge solving. JavaScript execution, CAPTCHAs, proof-of-work — handled without solver keys.
- Session intelligence. Unblocked browser sessions reused across requests so DataDome sees a single coherent visitor.
One API parameter. Add asp=True. See the ASP docs.
Every DataDome Signal, Matched
Bit-for-bit fingerprint parity with real Chrome. Not monkey-patching, not surface-level — engineered at the protocol and C++ layers.
DataDome ML Score — Always Clean.
DataDome scores every request in real time with a machine-learning model fed by TLS fingerprint, browser fingerprint, behavioral signals, and historical IP reputation. A single anomaly tips the score, triggering the slider CAPTCHA or flipping the datadome cookie from valid to blocked. Scrapfly aligns every signal so the score stays below the challenge threshold.
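For intuition only, a weighted anomaly score works roughly like this. The signal names, weights, and threshold below are invented for illustration; DataDome's actual model is proprietary and retrains continuously.

```python
# Toy illustration of a multi-signal anomaly score.
# Weights and the 0.5 threshold are invented, not DataDome's real model.
SIGNAL_WEIGHTS = {
    "tls_ja4_mismatch": 0.45,      # non-Chrome ClientHello
    "headless_fingerprint": 0.35,  # Canvas/WebGL anomalies
    "bad_ip_reputation": 0.15,
    "robotic_timing": 0.10,
}
CHALLENGE_THRESHOLD = 0.5

def score(signals: dict) -> float:
    """Sum the weights of every signal flagged as anomalous."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

# One TLS mismatch plus headless hints already crosses the threshold,
# while a request with no anomalies stays clean.
print(score({"tls_ja4_mismatch": True, "headless_fingerprint": True}))  # 0.8
print(score({}))  # 0.0
```

The point of the sketch: because the score is cumulative, fixing one signal (say, TLS) while leaking another (say, Canvas) still escalates, which is why every layer has to match at once.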
TLS + HTTP/2 — First Signal Seen
DataDome scores the TLS ClientHello before any payload is read. A non-Chrome JA4 hash alone is enough to escalate to slider CAPTCHA. Curlium produces byte-perfect Chrome TLS + HTTP/2 SETTINGS.
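To see why the ClientHello alone is identifying, here is a simplified sketch of how the human-readable JA4 prefix is assembled from handshake facts. This builds only the `JA4_a` portion; the real fingerprint also appends truncated SHA-256 hashes of the sorted cipher and extension lists.

```python
def ja4_prefix(tls_version: str, sni: bool, n_ciphers: int,
               n_extensions: int, alpn: str) -> str:
    """Build a JA4_a-style prefix: transport, TLS version, SNI flag,
    cipher/extension counts, and first+last ALPN characters.
    Simplified sketch; real JA4 also hashes cipher/extension lists."""
    version = {"1.3": "13", "1.2": "12"}[tls_version]
    return (
        "t"                                   # TCP (QUIC would be "q")
        + version
        + ("d" if sni else "i")               # domain SNI vs bare IP
        + f"{n_ciphers:02d}{n_extensions:02d}"
        + alpn[0] + alpn[-1]
    )

# A Chrome-like ClientHello vs a stock HTTP library: different prefixes,
# so the server can separate them before reading any HTTP payload.
print(ja4_prefix("1.3", True, 15, 16, "h2"))        # t13d1516h2
print(ja4_prefix("1.3", True, 48, 12, "http/1.1"))  # t13d4812h1
```

Because cipher count, extension count, and ALPN ordering are baked into the library that opened the socket, they cannot be patched with request headers; matching them requires controlling the TLS stack itself.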
Why DIY Bypasses Fail
DataDome's ML model retrains continuously. Static bypass libraries have no chance.
| Approach | Result on DataDome |
| --- | --- |
| curl-impersonate | blocked at TLS |
| datadome-bypass (OSS) | deprecated |
| undetected-chromedriver | ML score spikes |
| Playwright + stealth | fingerprint leaks |
| Scrapfly | 96%, tracked daily |
Per-Request Scoring
DataDome re-evaluates every request. Session reuse + a stable JA4 keep each request under the threshold.
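Session reuse is just another request parameter; here is a minimal sketch of the Scrape API query string with a pinned session. The session name is illustrative, and the exact parameter set should be confirmed against the ASP docs.

```python
from urllib.parse import urlencode

# Assemble the Scrape API query string. "session" pins subsequent
# requests to the same unblocked browser session so DataDome sees
# one coherent visitor. The session name here is invented.
params = {
    "key": "API KEY",
    "url": "https://httpbin.dev/html",
    "asp": "true",
    "session": "datadome-visitor-1",
}
endpoint = "https://api.scrapfly.io/scrape?" + urlencode(params)
print(endpoint)
```

Reusing the same session name across calls is what keeps the per-request score stable: the datadome cookie, fingerprint, and IP stay consistent from one request to the next.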
E-commerce Heavy
DataDome is the go-to protection for e-commerce, ticketing, and travel. Protects pages AND APIs.
- Page route
- API route
- Mobile SDK
DataDome Detection Stack — Every Layer Matched
DataDome chains TLS → collector JS → fingerprint payload → ML inference → datadome cookie. Every layer scored, every session tracked.
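The verdict at the end of that chain lands in the datadome cookie. A toy sketch of pulling it out of a Set-Cookie header with the standard library (the cookie value and domain below are invented):

```python
from http.cookies import SimpleCookie

# Parse the Set-Cookie header that carries DataDome's verdict.
# The value "AbCdEf123" and the domain are invented for illustration.
header = ("datadome=AbCdEf123; Max-Age=31536000; "
          "Domain=.example.com; Path=/; Secure; SameSite=Lax")

cookie = SimpleCookie()
cookie.load(header)
morsel = cookie["datadome"]
print(morsel.value)        # the verdict token to replay on later requests
print(morsel["max-age"])   # 31536000
```

A valid token here is what lets later requests skip the challenge path, which is why session reuse matters: discard the cookie and every request starts the chain from scratch.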
Slider CAPTCHA — Handled Inline
When the ML score escalates, DataDome serves a slider puzzle. Scrapium solves it inline without a third-party solver.
- Slider puzzle solved inline
- Device-check interstitial cleared
- API-layer block path handled
- No solver keys, no vendor accounts
One Parameter. DataDome Cleared.
Add asp=True. Scrapfly detects DataDome and routes through the correct engine — Curlium for HTTP-layer targets, Scrapium for JS-heavy ones.
from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse

client = ScrapflyClient(key="API KEY")
api_response: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://httpbin.dev/html',
        # bypass anti-scraping protection
        asp=True
    )
)
print(api_response.result)
import {
    ScrapflyClient, ScrapeConfig
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });
let api_result = await client.scrape(
    new ScrapeConfig({
        url: 'https://httpbin.dev/html',
        // bypass anti-scraping protection
        asp: true,
    })
);
console.log(api_result.result);
package main

import (
    "fmt"

    "github.com/scrapfly/go-scrapfly"
)

func main() {
    client, _ := scrapfly.New("API KEY")
    result, _ := client.Scrape(&scrapfly.ScrapeConfig{
        URL: "https://httpbin.dev/html",
        // bypass anti-scraping protection
        ASP: true,
    })
    fmt.Println(result.Result.Content)
}
use scrapfly_sdk::{Client, ScrapeConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder().api_key("API KEY").build()?;
    let cfg = ScrapeConfig::builder("https://httpbin.dev/html")
        // bypass anti-scraping protection
        .asp(true)
        .build()?;
    let result = client.scrape(&cfg).await?;
    println!("{}", result.result.content);
    Ok(())
}
http https://api.scrapfly.io/scrape \
    key==$SCRAPFLY_KEY \
    url==https://httpbin.dev/html \
    asp==true
{
  "result": {
    "status_code": 200,
    "success": true,
    "url": "https://httpbin.dev/html",
    "content": "<!DOCTYPE html><html>...</html>",
    "content_format": "raw",
    "content_type": "text/html; charset=utf-8",
    "response_headers": {
      "content-type": "text/html; charset=utf-8",
      "server": "cloudflare"
    },
    "cookies": [],
    "duration": 1842,
    "log_url": "https://scrapfly.io/dashboard/monitoring/log/01J...",
    "asp_cost": 30
  },
  "context": {
    "asp": true,
    "proxy": {
      "country": "us",
      "type": "datacenter"
    }
  }
}
Frequently Asked Questions
Can I test on my specific DataDome targets?
Yes. The free plan includes 1,000 API credits with no credit card required. Enable asp=True and test your exact targets before committing. Scrapfly achieves 96% success on DataDome-protected sites; failed requests are not charged.
How much does ASP cost?
ASP starts at 30 credits per request and scales with target complexity. You pay for what a specific target needs, not a flat premium. See pricing.
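Because the per-request cost is reported back in the response, spend can be tracked programmatically. A sketch against the sample response shown above (field names taken from that sample):

```python
import json

# The sample Scrape API response from above, trimmed to the
# fields relevant for cost tracking.
payload = json.loads("""
{"result": {"status_code": 200, "success": true, "asp_cost": 30},
 "context": {"asp": true}}
""")

# asp_cost is the number of credits this specific request consumed;
# failed requests are not charged.
if payload["result"]["success"]:
    print(f"ASP credits spent: {payload['result']['asp_cost']}")
```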
What makes DataDome hard to bypass?
DataDome scores requests in real time with a machine-learning model fed by browser fingerprint, behavioral signals, and historical IP reputation. A single anomaly tips the score, triggering the slider CAPTCHA or a hard block. Scrapfly aligns every signal — TLS, Canvas, WebGL, Audio, timing — so the score stays clean. For the DIY route, see our guide on bypassing DataDome.
Where is DataDome deployed?
DataDome is common on e-commerce, ticketing, and travel sites. It protects both pages and APIs, so naive HTTP scraping hits the block even when the JS challenge is avoided. Scrapfly handles both code paths.
Bypass every other major anti-bot vendor too.
ASP handles every major anti-bot stack with the same flag. Switch targets, keep the parameter.