# Playwright Stealth: Bypass Bot Detection in Python & Node.js

 by [Ziad Shamndy](https://scrapfly.io/blog/author/ziad) Apr 28, 2026 19 min read [\#blocking](https://scrapfly.io/blog/tag/blocking) [\#headless-browser](https://scrapfly.io/blog/tag/headless-browser) [\#nodejs](https://scrapfly.io/blog/tag/nodejs) [\#python](https://scrapfly.io/blog/tag/python) 


[Playwright](https://playwright.dev/)'s headless browsers leak fingerprint signals like `navigator.webdriver`, missing plugins, and the `HeadlessChrome` User-Agent marker that anti-bot systems instantly detect. Stealth plugins patch these leaks, but the ecosystem is split between Python and Node.js with different packages and APIs.

This guide covers stealth setup in both languages with working code, evasion module breakdowns, detection testing, plugin limitations, and what to do when stealth alone is not enough.

[Web Scraping with Playwright and Python: Playwright is the new, big browser automation toolkit - can it be used for web scraping? In this introduction article, we'll take a look at how we can use Playwright and Python to scrape dynamic websites.](https://scrapfly.io/blog/posts/web-scraping-with-playwright-and-python)

**Quick Start**: If you want a working stealth setup right now, here is a minimal Python async example:

```python
# Install: pip install playwright-stealth && playwright install chromium
import asyncio
from playwright_stealth import Stealth
from playwright.async_api import async_playwright


async def main():
    async with Stealth().use_async(async_playwright()) as playwright:
        browser = await playwright.chromium.launch(headless=True)
        page = await browser.new_page()
        await page.goto("https://web-scraping.dev/products")
        title = await page.title()
        print(f"Page title: {title}")
        products = await page.evaluate("""
            Array.from(document.querySelectorAll('.product')).slice(0, 5).map(item => ({
                title: item.querySelector('h3 a')?.textContent?.trim(),
                price: item.querySelector('.price')?.textContent?.trim()
            }))
        """)
        for product in products:
            print(f"{product['title']}: {product['price']}")
        await browser.close()

asyncio.run(main())
```



The rest of this article explains why stealth patching works, what the plugin modifies, and where the approach breaks down.

## Key Takeaways

- **Playwright stealth is not built into Playwright** — it refers to third-party packages that patch browser fingerprint leaks in Chromium: `playwright-stealth` in Python and `playwright-extra` with the stealth plugin in Node.js.
- **Stealth plugins work by fixing obvious browser-level detection signals** like `navigator.webdriver`, missing plugins, inconsistent User-Agent data, and unrealistic WebGL or codec fingerprints before page scripts run.
- **Python is the stronger ecosystem in 2026**: `playwright-stealth` is actively maintained with a modern context-manager API, while the Node.js stealth stack still relies on packages that have seen little recent maintenance.
- **Stealth only solves fingerprint-level detection**. It does not fix IP reputation, TLS fingerprinting, behavioral analysis, or advanced JavaScript challenges from anti-bot systems like Cloudflare and DataDome.
- **For production Playwright scraping, [Scrapfly Cloud Browser](https://scrapfly.io/browser-api) is the simplest upgrade path once stealth plugins hit their limit**, because teams can keep their existing Playwright workflows while offloading browser fingerprinting, proxy routing, and anti-bot bypass to managed infrastructure.

## How Websites Detect Playwright

Before discussing solutions, it's important to understand the signals that give Playwright away. Anti-bot systems look at various signals together, and even a single inconsistency can trigger a block.

### Key Signals that Websites Use to Detect Playwright

1. **`navigator.webdriver`** - Automation frameworks like Playwright set this to `true`, which anti-bot systems easily detect.
2. **User-Agent String** - Headless Chromium includes `HeadlessChrome`. Mismatches between the User-Agent header and other properties like Client Hints or `navigator.userAgent` raise suspicion.
3. **Browser Plugins and Codecs** - Real Chrome shows plugins such as PDF Viewer, but headless Chromium reports none. Media codecs and WebGL strings also differ, revealing automation.
4. **Behavioral Signals** - Automated browsers show unnatural patterns, like instant navigation and no mouse movement. Anti-bot services use this data to calculate trust scores.
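These leaks are straightforward to observe yourself. The sketch below launches a plain headless Chromium with no stealth applied and prints which of the signals above it leaks; the `summarize_signals()` helper is our own illustration for this article, not a library API:

```python
import asyncio


def summarize_signals(probe: dict) -> list:
    """Flag suspicious values in a dictionary of probed navigator properties."""
    flags = []
    if probe.get("webdriver") is True:
        flags.append("navigator.webdriver is true")
    if "HeadlessChrome" in probe.get("userAgent", ""):
        flags.append("User-Agent contains HeadlessChrome")
    if probe.get("pluginCount", 0) == 0:
        flags.append("no plugins reported")
    if not probe.get("languages"):
        flags.append("empty navigator.languages")
    return flags


async def main():
    # Imported lazily so summarize_signals() is usable without Playwright installed
    from playwright.async_api import async_playwright

    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        page = await browser.new_page()
        # Read the same properties that detection scripts inspect
        probe = await page.evaluate("""() => ({
            webdriver: navigator.webdriver,
            userAgent: navigator.userAgent,
            pluginCount: navigator.plugins.length,
            languages: navigator.languages
        })""")
        for flag in summarize_signals(probe):
            print(f"LEAK: {flag}")
        await browser.close()


if __name__ == "__main__":
    asyncio.run(main())
```

Running this against an unpatched headless browser typically raises several of these flags at once, which is exactly what stealth plugins set out to fix.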

### Tools to Monitor Browser Leaks

You can check what your browser is leaking using the **[Scrapfly Browser Fingerprint Tool](https://scrapfly.io/web-scraping-tools/browser-fingerprint)**. This tool shows detected automation signals, fingerprint inconsistencies, and suspicious markers in real time.

[How to Know What Anti-Bot Service a Website is Using? In this article we'll take a look at two popular tools: WhatWaf and Wafw00f which can identify what WAF service is used.](https://scrapfly.io/blog/posts/how-to-know-what-anti-bot-website-uses)

## What Is Playwright Stealth?

Playwright stealth is not a built-in feature of Playwright. There is no stealth mode toggle you can flip in the library itself. The term refers to third-party packages that patch browser fingerprints to make Playwright sessions appear more like genuine user sessions. The ecosystem splits across two languages with different packages and different integration patterns, which causes a fair amount of confusion.

### Python: playwright-stealth

The Python package is called [playwright-stealth](https://pypi.org/project/playwright-stealth/). The v2.0 release introduced a new context-manager API with breaking changes from the older v1.x `stealth_async(page)` pattern. The latest release is v2.0.2, which continues the v2.x API line.

If you encounter tutorials using `stealth_async(page)` or `stealth_sync(page)`, those are outdated patterns from v1.x that should not be used with the current version.

The `playwright-stealth` package is a port, not a wrapper. The package bundles its own JavaScript evasion files that mirror the core evasions from the original Puppeteer stealth plugin, plus a few Python-specific additions like `navigator.platform`, `error.prototype`, and `chrome.hairline`. One important gotcha: `chrome.runtime` evasion is disabled by default in v2.x because enabling the module can cause compatibility issues on certain sites.

### Node.js: playwright-extra + stealth plugin

The Node.js approach uses two packages together: [playwright-extra](https://www.npmjs.com/package/playwright-extra), a wrapper around Playwright that adds plugin support, and [puppeteer-extra-plugin-stealth](https://www.npmjs.com/package/puppeteer-extra-plugin-stealth), the original stealth plugin from the Puppeteer ecosystem.

The naming is confusing: the Node.js setup uses the actual original Puppeteer stealth plugin directly, not a port. The `playwright-extra` wrapper makes the stealth plugin compatible with Playwright's API, while the stealth plugin code remains the same one used in Puppeteer.

Both packages only work with Chromium. Firefox and WebKit are not supported by either stealth implementation, because the evasion modules target Chrome-specific APIs.

[Puppeteer Stealth: Complete Guide to Avoiding Detection: Complete guide to puppeteer-extra-plugin-stealth for avoiding bot detection. Learn how detection works, configure stealth evasion modules, implement complementary techniques, and scale with cloud browsers.](https://scrapfly.io/blog/posts/puppeteer-stealth-complete-guide)

With the ecosystem clear, let us walk through the installation and usage for each language, starting with Python.

## Playwright Stealth in Python

The Python ecosystem uses the [playwright-stealth](https://pypi.org/project/playwright-stealth/) package to patch browser fingerprints. It works as a wrapper around Playwright's browser contexts, injecting evasion scripts before any page code runs. Let's go through the setup and usage.

### Installation & Setup

Install the stealth package and the Playwright browser binary:

```shell
pip install playwright-stealth
playwright install chromium
```



The first command installs the stealth package from PyPI. The second command downloads the Chromium browser binary that Playwright controls during automation.

The `playwright-stealth` package version 2.0+ provides two primary integration patterns: a context manager that wraps `async_playwright()` or `sync_playwright()`, and a manual `apply_stealth_async()` method for situations where the context manager does not fit your code architecture. Both approaches produce the same result: stealth evasions injected before any page scripts execute.

### Async API Example

The async pattern is recommended for scraping workloads because it allows concurrent page operations without blocking. The `Stealth().use_async()` context manager intercepts the Playwright instance and automatically applies all evasion patches to every browser context created within it.

```python
import asyncio
from playwright_stealth import Stealth
from playwright.async_api import async_playwright


async def main():
    async with Stealth().use_async(async_playwright()) as playwright:
        browser = await playwright.chromium.launch(headless=True)
        context = await browser.new_context(
            viewport={"width": 1920, "height": 1080}
        )
        page = await context.new_page()

        # Navigate to the target page
        await page.goto("https://web-scraping.dev/products")
        await page.wait_for_selector(".product")

        # Extract product data
        products = await page.evaluate("""
            Array.from(document.querySelectorAll('.product')).map(item => ({
                title: item.querySelector('h3 a')?.textContent?.trim(),
                price: item.querySelector('.price')?.textContent?.trim()
            }))
        """)

        for product in products:
            print(f"{product['title']}: {product['price']}")

        await browser.close()


asyncio.run(main())
```



Every page created within the `Stealth().use_async()` context manager receives stealth patches before any site scripts execute. There is no need to call a separate function on each page or context.

### Sync API Example

The synchronous API works well for quick scripts, prototyping, and workflows that do not need concurrency:

```python
from playwright_stealth import Stealth
from playwright.sync_api import sync_playwright

with Stealth().use_sync(sync_playwright()) as playwright:
    browser = playwright.chromium.launch(headless=True)
    context = browser.new_context(
        viewport={"width": 1920, "height": 1080}
    )
    page = context.new_page()

    page.goto("https://web-scraping.dev/products")
    page.wait_for_selector(".product")

    title = page.title()
    print(f"Page title: {title}")

    browser.close()
```



The sync pattern mirrors the async version with blocking calls instead of `await`. The `Stealth().use_sync()` context manager applies the same evasion patches automatically to all browser contexts created within the block.

For cases where the context manager pattern does not fit your architecture, the manual application method gives you explicit control over which browser contexts receive stealth patches:

```python
import asyncio
from playwright_stealth import Stealth
from playwright.async_api import async_playwright


async def main():
    stealth = Stealth()
    async with async_playwright() as playwright:
        browser = await playwright.chromium.launch(headless=True)
        context = await browser.new_context()
        # Manually apply stealth to this specific context
        await stealth.apply_stealth_async(context)

        page = await context.new_page()
        await page.goto("https://web-scraping.dev/products")
        print(await page.title())

        await browser.close()


asyncio.run(main())
```



The context manager path is cleaner for most cases, but `apply_stealth_async()` is useful when you need to apply stealth selectively, for example, when some contexts require stealth while others do not. Both patterns produce identical evasion behavior.
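As a sketch of that selective pattern, the example below routes stealth per URL. The `needs_stealth()` helper and its domain list are hypothetical, purely to illustrate the decision point:

```python
import asyncio
from urllib.parse import urlparse

# Illustrative only: domains we assume need stealth patches
PROTECTED_DOMAINS = {"web-scraping.dev"}


def needs_stealth(url: str, protected=PROTECTED_DOMAINS) -> bool:
    """Decide whether a context for this URL should receive stealth patches."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in protected)


async def main():
    # Imported lazily so needs_stealth() is usable without Playwright installed
    from playwright.async_api import async_playwright
    from playwright_stealth import Stealth

    stealth = Stealth()
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        for url in ["https://web-scraping.dev/products", "https://example.com"]:
            context = await browser.new_context()
            if needs_stealth(url):
                # Only protected targets pay the cost of stealth injection
                await stealth.apply_stealth_async(context)
            page = await context.new_page()
            await page.goto(url)
            print(url, "->", await page.title())
            await context.close()
        await browser.close()


if __name__ == "__main__":
    asyncio.run(main())
```

Keeping unprotected targets on plain contexts avoids any compatibility quirks the evasion scripts might introduce on sites that do not need them.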

With Python covered, the Node.js setup follows the same principles but uses a different integration pattern that reflects its Puppeteer heritage.

## Playwright Stealth in Node.js

The Node.js ecosystem uses [playwright-extra](https://www.npmjs.com/package/playwright-extra) combined with the [puppeteer-extra-plugin-stealth](https://www.npmjs.com/package/puppeteer-extra-plugin-stealth) plugin. This setup builds on the same evasion modules originally created for Puppeteer, adapted to work with Playwright's API through a thin wrapper.

### Installation & Setup

Install three packages: the Playwright wrapper, Playwright itself, and the stealth plugin:

```shell
npm install playwright playwright-extra puppeteer-extra-plugin-stealth
```



The command installs Playwright along with the `playwright-extra` wrapper and the stealth plugin. All three packages are required for the stealth setup to work.

The critical detail here is importing from `playwright-extra` instead of `playwright`. The `playwright-extra` wrapper extends Playwright with a plugin system while keeping the rest of the API identical. If you import from the regular `playwright` package, the stealth plugin will not be applied and you will get no error telling you why.

### Stealth Plugin Example

Register the stealth plugin with the `chromium` launcher before creating any browser instances. Everything launched through the wrapped `chromium` object will have stealth evasions applied automatically.

**ESM (Modern JavaScript):**

```javascript
// stealth-scraper.mjs
import { chromium } from 'playwright-extra';
import StealthPlugin from 'puppeteer-extra-plugin-stealth';

chromium.use(StealthPlugin());

(async () => {
    const browser = await chromium.launch({ headless: true });
    const context = await browser.newContext({
        viewport: { width: 1920, height: 1080 }
    });
    const page = await context.newPage();

    // Navigate to the target page
    await page.goto('https://web-scraping.dev/products');
    await page.waitForSelector('.product');

    // Extract product data
    const products = await page.evaluate(() => {
        return Array.from(document.querySelectorAll('.product')).map(item => ({
            title: item.querySelector('h3 a')?.textContent?.trim(),
            price: item.querySelector('.price')?.textContent?.trim()
        }));
    });

    console.log(products);
    await browser.close();
})();
```





**CommonJS:**

```javascript
// stealth-scraper.cjs
const { chromium } = require('playwright-extra');
const StealthPlugin = require('puppeteer-extra-plugin-stealth');

chromium.use(StealthPlugin());

(async () => {
    const browser = await chromium.launch({ headless: true });
    const context = await browser.newContext({
        viewport: { width: 1920, height: 1080 }
    });
    const page = await context.newPage();

    await page.goto('https://web-scraping.dev/products');
    await page.waitForSelector('.product');

    const products = await page.evaluate(() => {
        return Array.from(document.querySelectorAll('.product')).map(item => ({
            title: item.querySelector('h3 a')?.textContent?.trim(),
            price: item.querySelector('.price')?.textContent?.trim()
        }));
    });

    console.log(products);
    await browser.close();
})();
```







The underlying evasion modules are identical to what the Python package uses. The same core modules run in both languages. The difference is integration: Node.js uses the original `puppeteer-extra-plugin-stealth` package directly, while Python bundles its own ported JavaScript files.

TypeScript works without additional setup since `playwright-extra` ships with type declarations. The import syntax is the same as the ESM example above.

[Web Scraping with Playwright and JavaScript: Learn about Playwright - a browser automation toolkit for server side Javascript like NodeJS, Deno or Bun.](https://scrapfly.io/blog/posts/web-scraping-with-playwright-and-javascript)

With both languages set up, the next question is what exactly these stealth patches modify in the browser environment, and why each modification matters.

## What Playwright Stealth Actually Patches

Both packages apply over a dozen evasion modules that modify the browser environment before any website scripts run. When a site blocks you despite stealth being active, knowing what is patched and what is not helps you figure out why.

### Chrome API Emulation

Headless Chrome is missing several Chrome-specific APIs that real browsers have. Stealth patches fake these so fingerprinting scripts find what they expect:

- **`chrome.app`**, **`chrome.csi`**, **`chrome.loadTimes`** - Chrome-specific APIs that don't exist in headless mode. Sites check for their presence as a quick headless test.
- **`chrome.runtime`** - Patches the extensions API behavior. This one is disabled by default in Python v2.x because it can interfere with some sites. Enable it explicitly if needed.

### Navigator Properties

The `navigator` object is a goldmine for detection scripts. Stealth patches several of its properties:

- **`navigator.webdriver`** - The big one. Automation frameworks set this to `true`, and every anti-bot system checks it.
- **`navigator.plugins`** - Real Chrome reports plugins like PDF Viewer. Headless Chrome reports zero.
- **`navigator.languages`** - Sets realistic language preferences instead of the empty default.
- **`navigator.permissions`** - Fixes `Permissions.query()` which behaves differently in automated browsers.
- **`navigator.vendor`** and **`navigator.platform`** (Python only) - Consistency patches so these values match the User-Agent.
- **`navigator.hardwareConcurrency`** - Masks CPU core count, since cloud environments typically expose 1-2 cores which is unusual for real devices.

### Rendering and Media

- **`webgl.vendor`** - Spoofs WebGL renderer and vendor strings to match real Chrome GPU info.
- **`media.codecs`** - Fakes the supported codec list, which differs between headed and headless Chrome.

### Automation Markers

- **`iframe.contentWindow`** - Fixes cross-origin iframe behavior that differs in automated browsers.
- **`defaultArgs` / `sourceurl`** (Node.js) - Removes the `--enable-automation` flag and `sourceURL` markers from injected scripts.
- **`error.prototype`** (Python only) - Patches stack traces that can reveal the automation framework.

### User-Agent Consistency

- **`user-agent-override`** - Patches the User-Agent across all surfaces: HTTP headers, `navigator.userAgent`, and Client Hints (`Sec-CH-UA`). This prevents the mismatch problem discussed earlier.

### Platform-Specific Extras

- **`chrome.hairline`** (Python only) - Adds CSS hairline feature detection missing in headless.
- **`window.outerdimensions`** (Node.js only) - Fixes `outerWidth`/`outerHeight` which return 0 in headless mode.

The core 14 modules are shared across both packages. Python adds `chrome.hairline`, `error.prototype`, `navigator.platform`, and `sec_ch_ua`. Node.js adds `defaultArgs`, `sourceurl`, and `window.outerdimensions`.

### Customizing Modules

All modules are enabled by default except `chrome.runtime` in Python. You can toggle them through the configuration API:

```python
# Python: Constructor params control individual modules
stealth = Stealth(
    chrome_runtime=True,          # Enable chrome.runtime (disabled by default)
    navigator_webdriver=True,     # Keep webdriver patch (default: True)
    navigator_languages_override=("en-US", "en"),  # Custom languages
    webgl_vendor_override="Intel Inc.",  # Custom WebGL vendor
)
```



```javascript
// Node.js: enabledEvasions Set controls which modules load
const stealth = StealthPlugin({
    enabledEvasions: new Set([
        'chrome.app',
        'chrome.csi',
        'chrome.loadTimes',
        'navigator.webdriver',
        'navigator.plugins',
        'navigator.languages',
        'navigator.permissions',
        'navigator.vendor',
        'media.codecs',
        'iframe.contentWindow',
        // omit modules you want disabled
    ])
});
chromium.use(stealth);
```



In Python, you pass keyword arguments to enable or disable each module. In Node.js, you pass a `Set` of module names and only listed modules activate.

For most scraping tasks, the defaults work fine. Toggling modules is mainly useful for debugging when a site still blocks you despite stealth being active. All evasion modules work exclusively with Chromium. Firefox and WebKit are not supported.
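One practical debugging pattern when a site misbehaves with stealth enabled is to bisect the module list: disable half the evasions, re-test, and repeat until the offending module is isolated. The harness below is our own sketch; in practice the `breaks` predicate would launch a browser with only the `enabled` modules (via constructor kwargs in Python or `enabledEvasions` in Node.js) and report whether the page still misbehaves:

```python
def find_breaking_module(modules, breaks):
    """Binary-search for the single module whose evasion breaks the site.

    `breaks(enabled)` must return True when running with only the `enabled`
    modules still triggers the breakage. Assumes exactly one module is at fault.
    """
    candidates = list(modules)
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        if breaks(half):
            candidates = half          # culprit is in the first half
        else:
            candidates = candidates[len(candidates) // 2:]  # it's in the rest
    return candidates[0]


if __name__ == "__main__":
    # Simulated run: pretend chrome.runtime is the module that breaks the site
    modules = ["chrome.app", "chrome.csi", "chrome.runtime",
               "navigator.webdriver", "navigator.plugins"]
    culprit = find_breaking_module(
        modules, lambda enabled: "chrome.runtime" in enabled
    )
    print("breaking module:", culprit)
```

Because each probe costs one full browser run, bisection isolates one module out of roughly 17 in about five runs, which beats toggling them one at a time.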

Now that you know what stealth patches, you need to verify that it's actually working. That's what the next section covers.

## Testing Your Stealth Setup

Applying stealth patches without verifying them is a common mistake. Detection test sites let you confirm that evasions are working before you point your scraper at a production target and wonder why it gets blocked.

[Scrapfly's browser fingerprint tool](https://scrapfly.io/web-scraping-tools/browser-fingerprint) is the most straightforward verification tool. The page tests browser fingerprint properties including `navigator.webdriver`, plugin count, language settings, WebGL renderer strings, and several other signals. Results display as green (passed) or red (detected) indicators for each property.

The most reliable way to verify is to take a screenshot of the test page and inspect the results visually:

```python
import asyncio
from playwright_stealth import Stealth
from playwright.async_api import async_playwright


async def verify_stealth():
    async with Stealth().use_async(async_playwright()) as playwright:
        browser = await playwright.chromium.launch(headless=True)
        page = await browser.new_page()

        # Run the fingerprint test
        await page.goto("https://scrapfly.io/web-scraping-tools/browser-fingerprint")
        await page.wait_for_timeout(3000)
        await page.screenshot(path="stealth_result.png", full_page=True)

        # Verify key properties programmatically
        checks = await page.evaluate("""() => ({
            webdriver: navigator.webdriver,
            pluginCount: navigator.plugins.length,
            languages: navigator.languages,
            vendor: navigator.vendor
        })""")

        print(f"webdriver: {checks['webdriver']}")
        print(f"plugins: {checks['pluginCount']}")
        print(f"languages: {checks['languages']}")
        print(f"vendor: {checks['vendor']}")

        await browser.close()


asyncio.run(verify_stealth())
```



The script launches a stealth-patched browser and saves a full-page screenshot for visual inspection. The `page.evaluate()` call extracts key fingerprint values programmatically so you can verify results without opening the screenshot each time.

For a quick verification checklist, here is what to look for in your test results:

- `navigator.webdriver` returns `false` (not `true`)
- Plugin count is greater than zero (real Chrome reports at least 5 plugins)
- Languages array is populated with realistic values (not empty)
- Vendor string is "Google Inc."
- No "HeadlessChrome" substring in the User-Agent
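That checklist can also run as code. The `checklist_failures()` helper below is a hypothetical add-on that takes the dictionary produced by the `page.evaluate()` probe in the verification script (extended with `userAgent`) and returns any failing items:

```python
def checklist_failures(checks: dict) -> list:
    """Return the checklist items that the probed browser fails.

    Treats anything other than an explicit `false` webdriver value as a failure.
    """
    failures = []
    if checks.get("webdriver") is not False:
        failures.append("navigator.webdriver should be false")
    if checks.get("pluginCount", 0) < 1:
        failures.append("plugin count should be greater than zero")
    if not checks.get("languages"):
        failures.append("languages should not be empty")
    if checks.get("vendor") != "Google Inc.":
        failures.append("vendor should be 'Google Inc.'")
    if "HeadlessChrome" in checks.get("userAgent", ""):
        failures.append("User-Agent still contains HeadlessChrome")
    return failures


if __name__ == "__main__":
    # Example probe result from a well-patched browser
    sample = {
        "webdriver": False,
        "pluginCount": 5,
        "languages": ["en-US", "en"],
        "vendor": "Google Inc.",
        "userAgent": "Mozilla/5.0 ... Chrome/120.0.0.0 Safari/537.36",
    }
    print(checklist_failures(sample) or "all checks passed")
```

An empty list means the stealth setup passed every fingerprint check in the list above; any entries tell you exactly which evasion to investigate.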

## Limitations of Playwright Stealth

Stealth plugins address fingerprint-level detection by patching JavaScript properties and browser environment signals that reveal automation. Fingerprint-level patching is one layer in a multi-layer detection stack, and being clear about where that layer ends prevents wasted debugging time.

### What Stealth Does Not Handle

**IP reputation** is evaluated before your browser JavaScript even runs. Datacenter IPs, known VPN ranges, and flagged subnets get blocked at the network level. No amount of fingerprint patching changes where your traffic originates from.

**TLS fingerprinting** analyzes the cryptographic handshake between your client and the server. The JA3 fingerprint produced by Playwright's Chromium does not always match what a real user's Chrome produces, especially across different operating systems and network stacks.

**JavaScript challenges** from advanced anti-bot systems like Cloudflare go beyond property checks. These systems execute cryptographic proof-of-work challenges, analyze execution timing, and verify results server-side. JavaScript challenges evolve faster than open-source stealth modules can keep pace with.

**Behavioral analysis** tracks mouse trajectories, scroll patterns, click timing, and navigation sequences. A scraper that loads a page, waits a fixed interval, extracts data, and leaves produces a statistically distinct pattern from real browsing. Stealth patches do nothing to address behavioral detection.
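You can narrow the behavioral gap somewhat with randomized timing and mouse movement, though this is a mitigation sketch rather than a bypass; real behavioral detectors model far richer signals. The helpers below are our own illustration, not part of any stealth package:

```python
import asyncio
import random


def human_delay(base: float = 1.0, jitter: float = 0.5) -> float:
    """Random dwell time in seconds around `base`, never below 50ms."""
    return max(0.05, random.uniform(base - jitter, base + jitter))


def mouse_path(x1, y1, x2, y2, steps: int = 20):
    """Interpolated points with small random wobble between two coordinates."""
    points = []
    for i in range(1, steps + 1):
        t = i / steps
        points.append((
            x1 + (x2 - x1) * t + random.uniform(-3, 3),
            y1 + (y2 - y1) * t + random.uniform(-3, 3),
        ))
    return points


async def browse_like_a_human(page):
    """Move the mouse along a wobbly path, scroll, and pause like a reader.

    `page` is assumed to be a Playwright Page from a stealth-patched context.
    """
    for x, y in mouse_path(100, 100, 640, 400):
        await page.mouse.move(x, y)
        await asyncio.sleep(0.01)
    await page.mouse.wheel(0, 400)        # scroll down as a reader would
    await asyncio.sleep(human_delay())    # dwell before the next action
```

Sprinkling `browse_like_a_human(page)` between navigation and extraction steps removes the most robotic timing patterns, but it will not defeat services that score full-session behavior.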

Stealth is a starting point, not a complete solution. The stealth approach eliminates the most obvious detection signals and buys you access to sites with basic protection. For sites running Cloudflare, DataDome, or PerimeterX, the limitations above become the reason your scraper gets blocked, not a missing stealth configuration option.

[5 Tools to Scrape Without Blocking and How it All Works: Tutorial on how to avoid web scraper blocking. What is javascript and TLS (JA3) fingerprinting and what role request headers play in blocking.](https://scrapfly.io/blog/posts/how-to-scrape-without-getting-blocked-tutorial)

When stealth plugins hit their ceiling, the question becomes what replaces them at production scale. Managed browser infrastructure fills that gap.

## Beyond Stealth Plugins: Scaling with Scrapfly

Stealth plugins do not scale well on their own: you still manage browser resources, fingerprint rotation, and proxies yourself. The [Scrapfly Cloud Browser API](https://scrapfly.io/browser-api) addresses these pain points while letting Playwright developers keep their existing code.

```python
import asyncio
from playwright.async_api import async_playwright


async def main():
    async with async_playwright() as playwright:
        browser = await playwright.chromium.connect_over_cdp(
            "wss://browser.scrapfly.io?api_key=YOUR_API_KEY&proxy_pool=residential&os=windows"
        )
        page = await browser.new_page()
        await page.goto("https://web-scraping.dev/products")
        await page.wait_for_selector(".product")

        products = await page.evaluate("""
            Array.from(document.querySelectorAll('.product')).map(item => ({
                title: item.querySelector('h3 a')?.textContent?.trim(),
                price: item.querySelector('.price')?.textContent?.trim()
            }))
        """)

        for product in products:
            print(f"{product['title']}: {product['price']}")

        await browser.close()


asyncio.run(main())
```



The code above connects to the Scrapfly Cloud Browser through a CDP WebSocket URL instead of launching a local Chromium instance. The `proxy_pool=residential` and `os=windows` parameters configure proxy routing and operating system fingerprint at the connection level.

Not every scraping task needs a full browser. For static pages or JavaScript-rendered content that does not require multi-step browser interaction, the Scrapfly Scrape API handles anti-bot bypass with `asp=true` and JavaScript rendering with `render_js=true` through simple HTTP calls.
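A minimal standard-library sketch of such a call is shown below, assuming the `key`, `url`, `asp`, and `render_js` query parameters and a JSON response carrying the rendered page under `result.content`; verify both against the Scrape API docs before relying on them:

```python
import json
import urllib.parse
import urllib.request

API_ENDPOINT = "https://api.scrapfly.io/scrape"


def build_scrape_url(api_key: str, url: str, asp: bool = True,
                     render_js: bool = True) -> str:
    """Compose a Scrape API request URL; boolean flags serialize as true/false."""
    params = {
        "key": api_key,
        "url": url,
        "asp": str(asp).lower(),
        "render_js": str(render_js).lower(),
    }
    return API_ENDPOINT + "?" + urllib.parse.urlencode(params)


if __name__ == "__main__":
    request_url = build_scrape_url("YOUR_API_KEY",
                                   "https://web-scraping.dev/products")
    with urllib.request.urlopen(request_url) as resp:
        data = json.load(resp)
    # Field layout assumed from the Scrape API docs: rendered HTML in result.content
    print(data["result"]["content"][:500])
```

For one-off fetches this avoids managing a browser entirely; the anti-bot bypass and JavaScript rendering happen server-side behind the single HTTP call.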



## FAQ

**Does playwright-stealth still work in 2026?**

Yes, but it depends on the language. The Python package is actively maintained (v2.0.2, regular commits through 2025-2026) and works well against basic fingerprint checks. The Node.js packages were last released in March 2023 and have not received new evasion modules since, so they lag behind newer anti-bot techniques.

**What is the difference between playwright-stealth and playwright-extra?**

`playwright-stealth` is the Python package that uses a `Stealth().use_async()` context manager. `playwright-extra` is the Node.js wrapper that adds plugin support via `chromium.use(StealthPlugin())`. The underlying evasion modules are largely the same, but the Python version is a port with its own bundled JS files while Node.js uses the original Puppeteer stealth plugin directly.

**Can Playwright bypass Cloudflare bot detection?**

Stealth plugins alone usually cannot. Cloudflare uses TLS fingerprinting, cryptographic challenges, and server-side behavioral analysis that operate below where stealth plugins can intervene. For Cloudflare-protected sites, use a managed service like Scrapfly with `asp=true` or combine TLS-resistant clients with residential proxies.

**Is playwright-stealth the same as puppeteer-stealth?**

They share the same core evasion modules. The Python package is a port with a few additions (`navigator.platform`, `error.prototype`, `chrome.hairline`), while the Node.js setup uses the original Puppeteer stealth plugin directly through the `playwright-extra` wrapper. Around 14 of 17 modules are identical.

**Does Playwright stealth work with Firefox or WebKit?**

No. Stealth plugins only work with Chromium. The evasion modules target Chrome-specific APIs and runtime behavior. Firefox and WebKit have entirely different internals and would require a separate set of modules.


## Summary

In this guide, we covered how to set up Playwright stealth in both Python and Node.js, what each evasion module patches, and how to verify your setup against detection test pages. Here are the key takeaways:

- `playwright-stealth` (Python) is actively maintained and works well for basic to moderate fingerprint evasion.
- `playwright-extra` with `puppeteer-extra-plugin-stealth` (Node.js) shares the same core modules but hasn't been updated since 2023.
- Stealth plugins patch browser fingerprints like `navigator.webdriver`, plugins, and User-Agent consistency, but they can't address TLS fingerprinting, behavioral analysis, or advanced challenges from services like Cloudflare.
- Both packages only support Chromium.

For simple scraping tasks, stealth plugins are a solid starting point. When you need to scale or bypass advanced anti-bot systems, Scrapfly's cloud browsers handle fingerprint management, proxy rotation, and anti-bot bypasses out of the box so you can focus on extracting data.

**Legal Disclaimer and Precautions**

This tutorial covers popular web scraping techniques for education. Interacting with public servers requires diligence and respect:

- Do not scrape at rates that could damage the website.
- Do not scrape data that's not available publicly.
- Do not store PII of EU citizens protected by GDPR.
- Do not repurpose *entire* public datasets which can be illegal in some countries.

Scrapfly does not offer legal advice but these are good general rules to follow. For more you should consult a lawyer.



 
