# How to Bypass Datadome Anti Scraping in 2026

 by [Bernardas Alisauskas](https://scrapfly.io/blog/author/bernardas) Apr 18, 2026 20 min read [\#blocking](https://scrapfly.io/blog/tag/blocking) 


 

 

   

Datadome is one of the most advanced anti-bot services protecting European websites like [Leboncoin](https://scrapfly.io/blog/posts/how-to-scrape-leboncoin-marketplace-real-estate), Vinted, and Deezer. Unlike simpler firewalls, Datadome uses per-customer ML models trained on each website's unique traffic patterns. That means every Datadome-protected website is a different challenge for web scrapers.

In this guide, we'll cover how Datadome detects web scrapers through TLS fingerprinting, IP analysis, JavaScript challenges, and behavioral ML models. We'll also walk through practical bypass tools for 2026, including Nodriver, SeleniumBase UC Mode, and Camoufox. Let's get started.

[How to Bypass Anti-Bot Protection When Web Scraping](https://scrapfly.io/blog/posts/how-to-bypass-anti-bot-protection-when-web-scraping) - Learn how anti-bot systems detect scrapers and 5 universal bypass techniques including proxy rotation, fingerprinting, and fortified headless browsers.

## Key Takeaways

Bypassing Datadome in 2026 requires a combination of fingerprint management, behavioral simulation, and proxy rotation. Datadome now runs over 85,000 customer-specific ML models, making each protected website a unique challenge.

- Use headless browsers like Nodriver, SeleniumBase UC Mode, or Camoufox to handle JavaScript challenges and fingerprinting
- Rotate high-quality residential or mobile proxies to avoid IP-based blocking and rate limiting
- Match TLS and HTTP fingerprints to real browsers using tools resistant to JA3 fingerprinting
- Simulate natural browsing behavior with warm-up navigation, realistic timing, and mouse movements
- Combine multiple techniques since Datadome's per-customer ML models learn from each site's unique traffic patterns
- Use Scrapfly's anti-scraping protection for automated Datadome bypass at scale






## What is Datadome?

Datadome is a paid WAF service that protects websites from automated requests. In the context of security, Datadome blocks malicious bots and scripts that cause issues like DDoS attacks and fraud.

In the context of web scraping, Datadome protects the public data on websites. Datadome is particularly popular with European websites. When Datadome blocks a connection, the user sees a specific error page.

## Datadome Block Page Examples

Most Datadome bot blocks result in HTTP status codes in the 400-500 range, with 403 being the most common. The error message can appear in different forms, but Datadome usually requests the visitor to turn on JavaScript or solve a [CAPTCHA](https://scrapfly.io/blog/posts/how-to-bypass-captcha-while-web-scraping).



*Datadome block page on the Leboncoin website*

Datadome errors usually appear on the first request to the website. However, Datadome also uses AI behavior analysis, which can trigger blocks after a few successful requests. Understanding how Datadome detects scrapers is the first step toward building a reliable bypass.

## How does Datadome Detect Web Scrapers?

To identify web scrapers, Datadome employs various techniques to estimate whether the connecting client is a bot or a real user.



Datadome considers all connection metrics, such as the TLS encryption type, the HTTP protocol used, and the JavaScript engine, to calculate a trust score.

Based on the final trust score, Datadome either lets the user in, blocks the connection, or requests a CAPTCHA challenge.



The trust score evaluation runs in real-time, making web scraping difficult as many factors can influence the final score. However, by understanding each step of the detection process, you can build a Datadome bypass with a high success rate. Let's take a look at each step.

### TLS Fingerprinting

TLS (or SSL) is the first step in the HTTP connection. When using encrypted connections, like HTTPS instead of HTTP, both the server and client have to negotiate the encryption methods. With the availability of various encryption methods and ciphers, the negotiation process can reveal information about the client.

Fingerprinting this negotiation process is generally called **JA3 fingerprinting**. Different operating systems, web browsers, and programming libraries perform the TLS handshake in subtly different ways, which results in different JA3 fingerprints.

So, using a web scraping tool that's resistant to JA3 fingerprinting is important for avoiding Datadome CAPTCHA. You can use Scrapfly's [JA3 fingerprint web tool](https://scrapfly.io/web-scraping-tools/ja3-fingerprint) to validate the request's JA3 fingerprint.
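To illustrate the concept, a JA3 fingerprint is simply an MD5 hash of five comma-joined TLS handshake fields: TLS version, cipher suites, extensions, elliptic curves, and curve point formats. Here's a minimal sketch with made-up handshake values (not from a real capture):

```python
import hashlib

def ja3_hash(version, ciphers, extensions, curves, curve_formats):
    """Build a JA3 fingerprint: MD5 of comma-joined, dash-separated handshake fields."""
    fields = [
        str(version),
        "-".join(str(c) for c in ciphers),
        "-".join(str(e) for e in extensions),
        "-".join(str(c) for c in curves),
        "-".join(str(f) for f in curve_formats),
    ]
    ja3_string = ",".join(fields)  # e.g. "771,4865-4866,0-23,29-23,0"
    return hashlib.md5(ja3_string.encode()).hexdigest()

# hypothetical handshake values for illustration only
fingerprint = ja3_hash(771, [4865, 4866, 4867], [0, 23, 65281], [29, 23, 24], [0])
print(fingerprint)  # a 32-character hex digest
```

Because the hash is deterministic, any client that always negotiates the same ciphers and extensions produces the same JA3 value, which is exactly what makes non-browser HTTP libraries easy to spot.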

For further details on TLS fingerprinting, refer to our dedicated guide.

### IP Address Fingerprinting

The next step of Datadome's trust score calculation is the IP address analysis. Datadome has access to many different IP databases, which are used to look up the connecting client's IP address. This IP address lookup identifies the client's location, ISP, reputation, and other related information.

The most important metric used here is the IP address type, as there are three different types of IP addresses:

- **Residential** are home addresses assigned by internet providers to home networks. Residential IP addresses provide a **positive trust score** because real users primarily browse from residential connections and these addresses are expensive to acquire in bulk.
- **Mobile** addresses are assigned by mobile phone towers to mobile users. Mobile IPs also provide a **positive trust score** because mobile towers share and recycle IP addresses across many users, making individual tracking much harder.
- **Datacenter** addresses are assigned to various data centers and server platforms like Amazon's AWS, Google Cloud, etc. Datacenter IPs provide a **negative trust score** because very few real users browse the web from datacenter networks.

Using IP analysis, Datadome can roughly estimate how likely the connecting client is a human or a bot. Datadome can also block a client if the requesting rate is too high in a short time window.

So, **rotate high-quality residential or mobile IP addresses** to [hide your IP address](https://scrapfly.io/blog/posts/how-to-hide-your-ip-address-while-scraping) and bypass Datadome while scraping.
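A minimal rotation sketch of this advice (the proxy URLs are placeholders for your residential provider's endpoints; each request is paired with a randomized delay to stay under rate limits):

```python
import itertools
import random

# placeholder residential proxy endpoints - replace with your provider's URLs
PROXIES = [
    "http://user:pass@res-proxy-1.example.com:8000",
    "http://user:pass@res-proxy-2.example.com:8000",
    "http://user:pass@res-proxy-3.example.com:8000",
]
proxy_pool = itertools.cycle(PROXIES)

def next_request_config():
    """Pick the next proxy and a randomized delay to keep the request rate low."""
    return {
        "proxy": next(proxy_pool),
        "delay": random.uniform(2.0, 6.0),  # seconds to wait before the request
    }

config = next_request_config()
print(config["proxy"], round(config["delay"], 1))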

[How to Avoid Web Scraper IP Blocking?](https://scrapfly.io/blog/posts/how-to-avoid-web-scraping-blocking-ip-addresses) - How IP addresses are used in web scraping blocking. Understanding IP metadata and fingerprinting techniques to avoid web scraper blocks.

### HTTP Details

The next area an anti-bot system like Datadome inspects is the HTTP details. The HTTP protocol is becoming increasingly complex, making it easier to identify connections from web scrapers.

Most of the web operates over HTTP2 or HTTP3, while most web scraping libraries still use HTTP1.1. So, if a connecting client uses HTTP1.1, Datadome flags that connection as likely bot traffic. That being said, many modern clients like Python's [httpx](https://scrapfly.io/blog/posts/web-scraping-with-python-httpx) and [cURL](https://scrapfly.io/blog/posts/how-to-use-curl-for-web-scraping) support HTTP2, though HTTP2 is often not enabled by default.

HTTP2 is also susceptible to HTTP2 fingerprinting, which Datadome uses to identify web scrapers. See Scrapfly's [HTTP2 fingerprint test page](https://scrapfly.io/web-scraping-tools/http2-fingerprint) for more info.

Request headers and header order also play an important role in identifying web scrapers. Since most web browsers follow strict header value and ordering rules, any mismatch, like a missing `Origin` or [User-Agent](https://scrapfly.io/blog/posts/user-agent-header-in-web-scraping) header, can reveal that the request sender is a bot.

Also, the default HTTP details of HTTP clients and browser automation tools can leak their usage, such as the default User-Agent of each client. Overriding these values using web scraping tools that hide their traces, such as [Undetected ChromeDriver](https://scrapfly.io/blog/posts/web-scraping-without-blocking-using-undetected-chromedriver) and [Curl Impersonate](https://scrapfly.io/blog/posts/curl-impersonate-scrape-chrome-firefox-tls-http2-fingerprint), can help mimic the HTTP details of human users.

So, **make sure to use HTTP2 and match header values and the order of a real web browser** to increase the chances of bypassing a Datadome protected website.
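As a sketch, a browser-consistent header set might look like the following. The values approximate a recent Chrome on Windows and are illustrative, not an exact capture; collect real headers from the browser you intend to impersonate:

```python
# Chrome-like header names and order; Python dicts preserve insertion order (3.7+),
# though whether the order survives on the wire depends on the HTTP client used
CHROME_HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,"
              "image/avif,image/webp,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
    "Accept-Encoding": "gzip, deflate, br",
    "Sec-Fetch-Dest": "document",
    "Sec-Fetch-Mode": "navigate",
    "Sec-Fetch-Site": "none",
    "Upgrade-Insecure-Requests": "1",
}

# usable with an HTTP2-capable client, e.g. httpx.Client(http2=True, headers=CHROME_HEADERS)
print(list(CHROME_HEADERS))
```

Note that httpx's HTTP2 support requires the optional `h2` dependency (`pip install "httpx[http2]"`) and the explicit `http2=True` flag.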

For further details, refer to our dedicated guide on web scraping headers.

### JavaScript Fingerprinting

The most complex and challenging step to address is JavaScript fingerprinting. Datadome uses the client's JavaScript engine to fingerprint the client machine for details like:

- JavaScript runtime information
- Hardware and operating system details
- Web browser information and capabilities

The full fingerprint data set feeds into the trust score calculation process. Fortunately for scrapers, JavaScript fingerprinting takes time to execute and is prone to false positives. In other words, JavaScript fingerprinting is not as heavily weighted as the other processes.

#### How to Bypass JS Fingerprinting

There are two ways to bypass DataDome CAPTCHA from JavaScript fingerprinting.

The first approach is to inspect and reverse engineer the JavaScript code Datadome uses to fingerprint the client. Reverse engineering is very time-consuming and requires deep expertise, and since Datadome constantly updates its fingerprinting logic, this approach also means constant maintenance.

A more practical approach is to use an automated browser for web scraping. Several libraries are available for browser automation, such as [Selenium](https://scrapfly.io/blog/posts/web-scraping-with-selenium-and-python#what-is-selenium), [Puppeteer](https://scrapfly.io/blog/posts/web-scraping-with-puppeteer-and-nodejs#puppeteer-overview), and [Playwright](https://scrapfly.io/blog/posts/web-scraping-with-playwright-and-python#what-is-playwright).

So, **introducing browser automation using tools like Selenium, Puppeteer or Playwright is the best way to bypass Datadome JavaScript fingerprinting.**

Many advanced scraping tools combine browser and HTTP scraping capabilities for the best performance. These tools use resource-heavy browsers to establish a trust score, then continue scraping using fast HTTP clients like Python's [httpx](https://scrapfly.io/blog/posts/web-scraping-with-python-httpx). Scrapfly's [session feature](https://scrapfly.io/docs/scrape-api/session) also supports this approach.

### Behavior Analysis

Datadome uses machine learning algorithms to analyze connection patterns and user profiles. So, even after passing all the above fingerprint checks, Datadome can still block the client if the system detects suspicious behavior.

Behavioral analysis means the trust score is not a static number but is constantly being adjusted based on the client's actions. Making the scraper mimic human behavior can lead to a higher trust score.

So, **distribute web scraper traffic through multiple different agents** using proxies and different fingerprinting configurations to bypass Datadome. For example, when scraping using browser automation tools, use different browser profiles like screen size, operating systems, and rendering capabilities.

### Per-Customer ML Models

Beyond generic behavior analysis, Datadome's most powerful detection layer is its per-customer ML model system. Datadome operates over **85,000 customer-specific and use-case-specific models** trained on each website's unique traffic patterns.

These per-customer models mean that a bypass technique working on one Datadome-protected website may fail completely on another. Each model learns the normal behavior patterns for its specific website, including typical navigation flows, session durations, and interaction patterns. The models process over 5 trillion signals per day and respond in under 2 milliseconds.

#### Intent-Based and LLM Detection

In 2025, Datadome introduced **intent-based detection**. The system doesn't ask "is this a bot?" but instead analyzes what the visitor is trying to accomplish. Even a scraper with a perfect browser fingerprint can be flagged if the navigation pattern suggests automated data collection rather than genuine browsing.

Datadome also added **LLM crawler detection** in 2025 to categorize AI agent traffic. LLM crawler traffic quadrupled across Datadome's customer base during 2025, rising from 2.6% of verified bot traffic in January to over 10% by August.

#### What This Means for Scrapers

The shift from static fingerprinting to behavioral ML has a few key implications for scrapers:

- Browser fingerprint spoofing alone is not enough. Behavioral signals carry as much weight as technical fingerprints
- No universal bypass exists. Each protected site is effectively a different challenge
- Session behavior matters more than session setup. How you browse (timing, navigation patterns, mouse movements) is as important as your TLS or HTTP fingerprint

With the detection techniques covered, let's look at the tools and strategies available for bypassing Datadome in 2026.

## How to Bypass Datadome Anti Bot?

Now that we've covered all methods Datadome uses to detect web scrapers, let's look at existing tools we can use to bypass Datadome protection.

While bypassing Datadome at scale requires a lot of technical effort, several open-source tools can provide a fair amount of success. The best approach combines multiple techniques from the list below.

### Start with Headless Browsers

Datadome uses JavaScript fingerprinting and challenges to detect web scrapers. Reverse engineering these challenges is tough and requires a lot of time and knowledge. Headless browsers can help bypass these challenges automatically.

[Scraping using headless browsers](https://scrapfly.io/blog/posts/scraping-using-browsers) is a common web scraping technique that uses tools like Selenium, Puppeteer, or Playwright to automate a real browser without GUI elements.

Headless browsers execute JavaScript challenges and Datadome's fingerprinting code natively, which can bypass the anti-bot system. Using a headless browser saves a lot of time compared to manually reverse-engineering Datadome's JavaScript.

### Use High Quality Residential Proxies

Datadome uses IP address analysis to determine the trust score. Using high-quality residential or mobile proxies can help bypass the IP address fingerprinting.

Residential proxies are real IP addresses assigned by internet providers to individuals, making the connections look like real users.

[The Complete Guide To Using Proxies For Web Scraping](https://scrapfly.io/blog/posts/introduction-to-proxies-in-web-scraping) - Introduction to proxy usage in web scraping. What types of proxies are there? How to evaluate proxy providers and avoid common issues.

Web scraping APIs like Scrapfly already use high quality proxies by default as that's often the best way to bypass anti-scraping protection at scale.

### Try Nodriver

[Nodriver](https://github.com/ultrafunkamsterdam/nodriver) is a Python browser automation library created by the same developer behind `undetected-chromedriver`. Nodriver is the recommended successor to `undetected-chromedriver` and takes a fundamentally different approach to anti-bot bypass.

Instead of patching Selenium's WebDriver to hide automation signals, Nodriver eliminates WebDriver entirely. Nodriver communicates directly with Chrome using the Chrome DevTools Protocol (CDP). Since there is no chromedriver binary or Selenium involved, the browser doesn't expose `navigator.webdriver`, chromedriver ports, or other WebDriver-related detection vectors.

Nodriver is fully asynchronous (built on Python's `asyncio`), which provides performance benefits and better control over concurrent browser operations.

#### Nodriver Example

First, install Nodriver:

```bash
pip install nodriver
```



Then run the following script to visit Scrapfly's browser fingerprint tool, which shows how the browser appears to anti-bot systems like Datadome:

```python
import nodriver as uc

async def main():
    browser = await uc.start()
    page = await browser.get("https://scrapfly.io/web-scraping-tools/browser-fingerprint")

    # wait for the page to fully load and any challenges to complete
    await page.sleep(5)

    # save screenshot to verify fingerprint obfuscation
    await page.save_screenshot("fingerprint.png", format="png")

    # get page content
    content = await page.get_content()
    print(content[:500])

    browser.stop()

if __name__ == "__main__":
    uc.loop().run_until_complete(main())
```



The script starts a Chrome browser using Nodriver's CDP connection and navigates to the fingerprint tool. The screenshot shows what fingerprint data Datadome sees from the browser. Since Nodriver uses CDP directly, the browser appears identical to a manually operated Chrome instance.

For DataDome specifically, Nodriver achieves roughly a 25% baseline success rate without proxies. Combining Nodriver with residential proxies and warm-up navigation improves bypass rates.

[Web Scraping Without Blocking With Undetected ChromeDriver](https://scrapfly.io/blog/posts/web-scraping-without-blocking-using-undetected-chromedriver) - A look at Undetected ChromeDriver, a Selenium extension that bypasses many scraper blocking techniques.

### Try SeleniumBase UC Mode

[SeleniumBase](https://scrapfly.io/blog/posts/guide-to-seleniumbase-better-selenium) includes a built-in UC Mode (Undetected Chrome Mode) designed for bypassing anti-bot systems. UC Mode strategically disconnects WebDriver from the browser before loading protected pages. While the page loads and anti-bot checks run, the browser appears like a normal, human-operated browser. After the checks pass, WebDriver reconnects for automated control.

For advanced anti-bot systems like Datadome, SeleniumBase offers **CDP Mode** which goes a step further. CDP Mode uses the Chrome DevTools Protocol directly, bypassing WebDriver entirely during interactions with the page.

#### SeleniumBase CDP Mode Example

First, install SeleniumBase:

```bash
pip install seleniumbase
```



Then run the following script to visit Scrapfly's browser fingerprint tool and save a screenshot:

```python
from seleniumbase import SB

with SB(uc=True, test=True, locale="en") as sb:
    # activate CDP mode for Datadome-protected sites
    sb.activate_cdp_mode("https://scrapfly.io/web-scraping-tools/browser-fingerprint")
    sb.sleep(5)

    # save screenshot to verify fingerprint obfuscation
    sb.save_screenshot("fingerprint.png")

    # evaluate JavaScript to get the page title
    title = sb.cdp.evaluate("document.title")
    print(title)
```



The script uses SeleniumBase's CDP Mode to load a page with WebDriver disconnected during anti-bot checks. The `activate_cdp_mode` method handles the disconnect/reconnect cycle automatically. The screenshot shows what fingerprint data Datadome sees.

SeleniumBase also provides built-in CAPTCHA handling through `uc_gui_click_captcha()`, which uses PyAutoGUI to click CAPTCHA checkboxes programmatically. For best results with Datadome, run SeleniumBase in non-headless (GUI) mode and pair the scraper with residential proxies.

### Try Camoufox

[Camoufox](https://github.com/daijro/camoufox) is an open-source anti-detect browser built on a custom, stripped-down Firefox build. What sets Camoufox apart from other stealth tools is that Camoufox spoofs browser fingerprints at the **C++ implementation level** inside Firefox's engine. JavaScript inspection techniques like `Object.getOwnPropertyDescriptor` cannot detect the spoofing because the values are returned natively by the C++ layer, not injected via JavaScript.

Camoufox integrates with BrowserForge to auto-generate realistic fingerprints that match real-world device distributions. Camoufox also provides built-in WebRTC IP spoofing, automatic geolocation detection from proxy IPs, and humanized cursor movements.

#### Camoufox Example

First, install Camoufox with GeoIP support and download the browser binary:

```bash
pip install "camoufox[geoip]"
python -m camoufox fetch
```



Then run the following script to visit Scrapfly's browser fingerprint tool and save a screenshot:

```python
from camoufox.sync_api import Camoufox

with Camoufox(
    headless=False,     # use False for better Datadome bypass success
    humanize=True,      # human-like cursor movements
    os="windows",       # generate Windows fingerprints
    geoip=True,         # auto-spoof location from proxy IP
) as browser:
    page = browser.new_page()
    page.goto("https://scrapfly.io/web-scraping-tools/browser-fingerprint")
    page.wait_for_timeout(5000)

    # save screenshot to verify fingerprint obfuscation
    page.screenshot(path="fingerprint.png")

    # get page content
    content = page.content()
    print(content[:500])
```



The script launches a Camoufox browser with humanized mouse movements and Windows fingerprint generation. The `geoip=True` flag automatically matches timezone and locale to the proxy IP address. The screenshot shows what fingerprint data Datadome sees from the Firefox-based browser.

One caveat: Camoufox's original maintainer has been unavailable since March 2025, and the Firefox base version has fallen behind. Community forks exist, but check the project's current status before relying on Camoufox for production scraping.

### Try curl-impersonate

[curl-impersonate](https://scrapfly.io/blog/posts/curl-impersonate-scrape-chrome-firefox-tls-http2-fingerprint) is an HTTP client tool that extends the popular `libcurl` HTTP client to mimic the behavior of a real web browser. curl-impersonate patches the TLS and HTTP fingerprints to make outgoing HTTP requests look like they're coming from a real web browser.

However, curl-impersonate only works with curl-powered web scrapers, which can be difficult to use compared to modern HTTP libraries like `fetch` or `requests`. For more on curl use in scraping see [how to scrape with curl](https://scrapfly.io/blog/posts/how-to-use-curl-for-web-scraping).

### Try Warming Up Scrapers

To bypass behavior analysis, adjusting scraper behavior to appear more natural can improve Datadome trust scores.

In real life, most human users don't visit product URLs directly. Real users explore websites in steps:

- Start at the homepage
- Browse product categories or use search
- View individual product pages

Prefixing scraping logic with this warm-up behavior makes the scraper appear more natural and helps avoid behavioral analysis detection.
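The warm-up steps above can be sketched as a navigation plan with randomized "think time" between pages (the URLs and timing ranges are illustrative, not tuned values):

```python
import random

def warmup_path(base_url, category, product_url):
    """Build a human-like navigation plan: homepage -> category -> product."""
    return [
        (base_url, random.uniform(2, 5)),                  # land on the homepage
        (f"{base_url}/{category}", random.uniform(3, 8)),  # browse a category
        (product_url, random.uniform(1, 3)),               # open the target page
    ]

# hypothetical target pages for illustration
plan = warmup_path("https://example.com", "electronics", "https://example.com/p/123")
for url, pause in plan:
    print(f"visit {url}, then wait {pause:.1f}s before the next step")
```

Feeding each step to a browser automation tool, with the pause between navigations, approximates how a real visitor reaches a product page.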

### Rotate Real User Fingerprints

For sustained web scraping and Datadome bypass in 2026, headless browsers should always be configured with different, realistic fingerprint profiles:

- Screen resolution
- Operating system
- Browser type
- Installed browser extensions and plugins

All of these features play an important role in Datadome's trust score calculation.

Each headless browser library can be configured to use different resolution and rendering capabilities. Distributing scraping through multiple real-looking browser configurations can prevent Datadome from detecting the scraper.

For more, see Scrapfly's [browser fingerprint tool](https://scrapfly.io/web-scraping-tools/browser-fingerprint) to check how your browser looks to Datadome. This tool can be used to collect different browser fingerprints from real web browsers for use in scraping.
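Per-session profile rotation can be sketched like this. The profile values below are illustrative placeholders; for production, collect real combinations from actual browsers, for example with the fingerprint tool mentioned above:

```python
import random

# illustrative profile pool - populate with fingerprints collected from real browsers
PROFILES = [
    {"os": "windows", "resolution": (1920, 1080), "browser": "chrome"},
    {"os": "macos", "resolution": (2560, 1440), "browser": "chrome"},
    {"os": "windows", "resolution": (1366, 768), "browser": "firefox"},
]

def new_session_profile():
    """Pick one profile per scraping session so sessions don't share a fingerprint."""
    return random.choice(PROFILES)

profile = new_session_profile()
print(profile)
```

The key design point is consistency within a session: pick one profile when the session starts and keep it until the session ends, since mid-session fingerprint changes are themselves a detection signal.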

### Keep an Eye on New Tools

Open source web scraping is tough because each new technique is quickly patched by anti-bot services like Datadome. These quick patches create a constant cat-and-mouse game.

For best results, track web scraping news sources and popular GitHub repository changes to stay ahead:

- [Scrapfly Blog](https://scrapfly.io/blog/) for latest web scraping news and tutorials.
- GitHub issue and network pages for tools like curl-impersonate and Nodriver often contain new bypass techniques and patches that are not available on the `main` branch.

If all that seems like too much work, let Scrapfly handle Datadome bypass for you.

## Troubleshooting Common Issues

When scraping Datadome-protected websites, you'll encounter common errors. The table below maps each error to its likely cause and the recommended solution:

| Error | Likely Cause | Solution |
|---|---|---|
| 403 Forbidden on first request | IP address flagged or datacenter IP detected | Switch to residential or mobile proxies |
| CAPTCHA loops (repeated challenges) | Browser fingerprint detected as automated | Use Nodriver, SeleniumBase CDP Mode, or Camoufox |
| JavaScript challenge timeout | Wait time too short for challenge completion | Increase wait time to 10-15 seconds |
| Blocked after a few successful requests | Behavioral analysis flagged unnatural patterns | Add warm-up navigation and randomize request timing |
| 429 Too Many Requests | Request rate too high from a single IP | Distribute requests across more proxy IPs and add delays |
| Different results than browser shows | TLS/HTTP fingerprint mismatch | Use curl-impersonate or a headless browser with HTTP2 |

If a specific technique stops working, Datadome likely patched the detection gap. Try combining multiple bypass methods or switch to a managed service like Scrapfly that maintains bypass capabilities automatically.
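For the 403 and 429 rows in particular, retrying with exponential backoff and jitter is a common pattern. A minimal sketch, where `fetch` is a placeholder for your actual HTTP call (ideally one that also rotates the proxy between attempts):

```python
import random
import time

def fetch_with_backoff(fetch, url, max_retries=4, base=1.0):
    """Retry on 403/429 responses with exponential backoff plus random jitter."""
    for attempt in range(max_retries):
        status, body = fetch(url)
        if status not in (403, 429):
            return status, body
        # back off 1x, 2x, 4x... the base delay, plus jitter to avoid lockstep retries
        time.sleep(base * 2 ** attempt + random.uniform(0, base / 10))
    return status, body

# fake fetch that returns 403 twice, then succeeds - demonstrates the retry loop
calls = {"n": 0}
def fake_fetch(url):
    calls["n"] += 1
    return (403, "") if calls["n"] < 3 else (200, "ok")

# tiny base delay so the demo runs instantly; use the 1-second default in practice
print(fetch_with_backoff(fake_fetch, "https://example.com", base=0.01))  # (200, 'ok')
```

If every retry still returns 403, backoff alone won't help; that usually means the fingerprint, not the rate, is the problem.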

[How to Fix 403 Forbidden Errors When Web Scraping](https://scrapfly.io/blog/posts/403-forbidden-web-scraping) - Learn why web scrapers get 403 Forbidden errors and how to fix them with 7 Python solutions, from headers to TLS fingerprinting.

## Bypass Datadome with ScrapFly

Bypassing Datadome anti-bot is possible but tough at scale. Let Scrapfly do the heavy lifting for you.



Scrapfly employs a team of full-time engineers to maintain anti-bot bypass systems so you don't have to. Scrapfly handles proxy rotation, fingerprint management, and JavaScript challenge solving automatically.

Learn more about [Web Scraping API](https://scrapfly.io/web-scraping-api) and how it works.

For example, to scrape pages protected by Datadome or any other anti-scraping service, when using [Scrapfly SDK](https://scrapfly.io/docs/sdk/python) all you need to do is turn on the [Anti Scraping Protection bypass](https://scrapfly.io/docs/scrape-api/anti-scraping-protection) feature:

First, install the Scrapfly SDK:

```bash
pip install scrapfly-sdk
```



Then scrape any Datadome-protected page with anti-scraping protection enabled:

```python
from scrapfly import ScrapflyClient, ScrapeConfig

scrapfly = ScrapflyClient(key="YOUR_SCRAPFLY_API_KEY")
result = scrapfly.scrape(ScrapeConfig(
    url="https://web-scraping.dev/products",
    # turn on anti-scraping protection bypass (works for Datadome, Cloudflare, etc.)
    asp=True,
    # use headless browsers to render JavaScript powered pages
    render_js=True,
    # use residential proxies for higher trust scores
    proxy_pool="public_residential_pool",
))
print(result.scrape_result)
```



The code above uses Scrapfly's Python SDK with anti-scraping protection (`asp=True`) enabled. Scrapfly handles all fingerprint management, proxy rotation, and challenge solving behind the scenes.



## FAQ

**Is it legal to scrape Datadome protected pages?**

Generally, yes. Scraping publicly available data is legal in most jurisdictions as long as the scraper does not cause damage to the website.







**Is it possible to bypass Datadome using cache services?**

Public page caching services like Google Cache or Archive.org can sometimes access Datadome-protected pages because Google and Archive crawlers tend to be whitelisted. However, not all pages are cached, and cached versions are often outdated or missing dynamically loaded content.







**Can I bypass Datadome with Python requests?**

The plain Python `requests` library cannot bypass Datadome because `requests` doesn't execute JavaScript, uses HTTP1.1 by default, and has a recognizable TLS fingerprint. For HTTP-only scraping, try `curl_cffi` (a Python wrapper around curl-impersonate), which mimics real browser TLS and HTTP2 fingerprints.







**How long does Datadome take to detect scrapers?**

Datadome processes detection signals in under 2 milliseconds. The initial check happens on the first request, but Datadome's behavioral analysis continues monitoring throughout the entire session. A scraper can pass the first request and get blocked on the tenth if behavior patterns trigger the ML models.







**Does Datadome block residential proxies?**

Residential proxies alone don't guarantee a bypass. Datadome considers IP type as one signal among many. A residential proxy with a suspicious TLS fingerprint, missing JavaScript execution, or unnatural browsing behavior will still get blocked. Residential proxies improve success rates when combined with proper browser automation and fingerprint management.







**Is it possible to bypass Datadome entirely and scrape the website directly?**

A full bypass is more of an internet security problem. Bypassing Datadome entirely would require exploiting a vulnerability, which can be illegal in some countries and is very difficult either way.







**What are some other anti-bot services?**

There are many other anti-bot WAF services like [Cloudflare](https://scrapfly.io/blog/posts/how-to-bypass-cloudflare-anti-scraping#what-is-cloudflare-bot-management), [Akamai](https://scrapfly.io/blog/posts/how-to-bypass-akamai-anti-scraping#what-is-akamai-bot-manager), [Imperva Incapsula](https://scrapfly.io/blog/posts/how-to-bypass-imperva-incapsula-anti-scraping#what-is-imperva-aka-incapsula), [PerimeterX](https://scrapfly.io/blog/posts/how-to-bypass-perimeterx-human-anti-scraping#what-is-perimeterx), and [Kasada](https://scrapfly.io/blog/posts/how-to-bypass-kasada-anti-scraping-waf#what-is-kasada). These services function similarly to Datadome, so the techniques in this guide apply to them as well.









## Summary

In this guide, we covered how Datadome detects and blocks web scrapers using a multi-layered detection system. Datadome combines TLS fingerprinting, IP address analysis, HTTP details, JavaScript fingerprinting, behavior analysis, and per-customer ML models to calculate a trust score for every connecting client.

For bypassing Datadome in 2026, the most effective open-source tools are Nodriver (direct CDP communication with no WebDriver footprint), SeleniumBase CDP Mode (strategic WebDriver disconnect/reconnect), and Camoufox (C++ level fingerprint spoofing in Firefox). Each tool addresses a different part of the detection puzzle, and combining any of these with residential proxies and warm-up navigation gives the best results.

For production-scale scraping of Datadome-protected websites, Scrapfly provides an automated bypass solution that handles all these challenges behind a single API call.

## Legal Disclaimer and Precautions

This tutorial covers popular web scraping techniques for education. Interacting with public servers requires diligence and respect:

- Do not scrape at rates that could damage the website.
- Do not scrape data that's not available publicly.
- Do not store PII of EU citizens protected by GDPR.
- Do not repurpose *entire* public datasets which can be illegal in some countries.

Scrapfly does not offer legal advice but these are good general rules to follow. For more you should consult a lawyer.



 
