Playwright's headless browsers leak fingerprint signals like navigator.webdriver, missing plugins, and the HeadlessChrome User-Agent marker that anti-bot systems instantly detect. Stealth plugins patch these leaks, but the ecosystem is split between Python and Node.js with different packages and APIs.
This guide covers stealth setup in both languages with working code, evasion module breakdowns, detection testing, plugin limitations, and what to do when stealth alone is not enough.
Quick Start: If you want a working stealth setup right now, here is a minimal Python async example:

```python
# Install: pip install playwright-stealth && playwright install chromium
import asyncio
from playwright_stealth import Stealth
from playwright.async_api import async_playwright

async def main():
    async with Stealth().use_async(async_playwright()) as playwright:
        browser = await playwright.chromium.launch(headless=True)
        page = await browser.new_page()
        await page.goto("https://web-scraping.dev/products")
        title = await page.title()
        print(f"Page title: {title}")
        products = await page.evaluate("""
            Array.from(document.querySelectorAll('.product')).slice(0, 5).map(item => ({
                title: item.querySelector('h3 a')?.textContent?.trim(),
                price: item.querySelector('.price')?.textContent?.trim()
            }))
        """)
        for product in products:
            print(f"{product['title']}: {product['price']}")
        await browser.close()

asyncio.run(main())
```

The rest of this article explains why stealth patching works, what the plugin modifies, and where the approach breaks down.
Key Takeaways
- Playwright stealth is not built into Playwright — it refers to third-party packages that patch browser fingerprint leaks in Chromium: `playwright-stealth` in Python and `playwright-extra` with the stealth plugin in Node.js.
- Stealth plugins work by fixing obvious browser-level detection signals like `navigator.webdriver`, missing plugins, inconsistent User-Agent data, and unrealistic WebGL or codec fingerprints before page scripts run.
- Python is the stronger ecosystem in 2026: `playwright-stealth` is actively maintained with a modern context-manager API, while the Node.js stealth stack still relies on packages that have seen little recent maintenance.
- Stealth only solves fingerprint-level detection. It does not fix IP reputation, TLS fingerprinting, behavioral analysis, or advanced JavaScript challenges from anti-bot systems like Cloudflare and DataDome.
- For production Playwright scraping, Scrapfly Cloud Browser is the simplest upgrade path once stealth plugins hit their limit, because teams can keep their existing Playwright workflows while offloading browser fingerprinting, proxy routing, and anti-bot bypass to managed infrastructure.
How Websites Detect Playwright
Before discussing solutions, it's important to understand the signals that give Playwright away. Anti-bot systems look at various signals together, and even a single inconsistency can trigger a block.
Key Signals that Websites Use to Detect Playwright
- **`navigator.webdriver`**: Automation frameworks like Playwright set this to `true`, which anti-bot systems easily detect.
- **User-Agent String**: Headless Chromium includes `HeadlessChrome`. Mismatches between the User-Agent and other properties like Client Hints or `navigator.userAgent` raise suspicion.
- **Browser Plugins and Codecs**: Real Chrome shows plugins such as PDF Viewer, but headless Chromium reports none. Media codecs and WebGL strings also differ, revealing automation.
- **Behavioral Signals**: Automated browsers show unnatural patterns, like instant navigation and no mouse movement. Anti-bot services use this data to calculate trust scores.
Tools to Monitor Browser Leaks
You can check what your browser is leaking using the Scrapfly Browser Fingerprint Tool. This tool shows detected automation signals, fingerprint inconsistencies, and suspicious markers in real time.
What Is Playwright Stealth?
Playwright stealth is not a built-in feature of Playwright. There is no stealth mode toggle you can flip in the library itself. The term refers to third-party packages that patch browser fingerprints to make Playwright sessions appear more like genuine user sessions. The ecosystem splits across two languages with different packages and different integration patterns, which causes a fair amount of confusion.
Python: playwright-stealth
The Python package is called `playwright-stealth`. The v2.0 release introduced a new context-manager API with breaking changes from the older v1.x `stealth_async(page)` pattern. The latest release is v2.0.2, which continues the v2.x API line.
If you encounter tutorials using stealth_async(page) or stealth_sync(page), those are outdated patterns from v1.x that should not be used with the current version.
The `playwright-stealth` package is a port, not a wrapper. The package bundles its own JavaScript evasion files that mirror the core evasions from the original Puppeteer stealth plugin, plus a few Python-specific additions like `navigator.platform`, `error.prototype`, and `chrome.hairline`. One important gotcha: the `chrome.runtime` evasion is disabled by default in v2.x because enabling the module can cause compatibility issues on certain sites.
Node.js: playwright-extra + stealth plugin
The Node.js approach uses two packages together: `playwright-extra`, a wrapper around Playwright that adds plugin support, and `puppeteer-extra-plugin-stealth`, the original stealth plugin from the Puppeteer ecosystem.
The naming is confusing, but the key point is simple: the Node.js setup uses the actual original Puppeteer stealth plugin directly, not a port. The `playwright-extra` wrapper makes the stealth plugin compatible with Playwright's API while the stealth plugin code remains the same one used in Puppeteer.
Both packages only work with Chromium. Firefox and WebKit are not supported by either stealth implementation, because the evasion modules target Chrome-specific APIs.
With the ecosystem clear, let us walk through the installation and usage for each language, starting with Python.
Playwright Stealth in Python
The Python ecosystem uses the playwright-stealth package to patch browser fingerprints. It works as a wrapper around Playwright's browser contexts, injecting evasion scripts before any page code runs. Let's go through the setup and usage.
Installation & Setup
Install the stealth package and the Playwright browser binary:
```shell
pip install playwright-stealth
playwright install chromium
```

The first command installs the stealth package from PyPI. The second command downloads the Chromium browser binary that Playwright controls during automation.
The playwright-stealth package version 2.0+ provides two primary integration patterns: a context manager that wraps async_playwright() or sync_playwright(), and a manual apply_stealth_async() method for situations where the context manager does not fit your code architecture. Both approaches produce the same result: stealth evasions injected before any page scripts execute.
Async API Example
The async pattern is recommended for scraping workloads because it allows concurrent page operations without blocking. The Stealth().use_async() context manager intercepts the Playwright instance and automatically applies all evasion patches to every browser context created within it.
```python
import asyncio
from playwright_stealth import Stealth
from playwright.async_api import async_playwright

async def main():
    async with Stealth().use_async(async_playwright()) as playwright:
        browser = await playwright.chromium.launch(headless=True)
        context = await browser.new_context(
            viewport={"width": 1920, "height": 1080}
        )
        page = await context.new_page()

        # Navigate to the target page
        await page.goto("https://web-scraping.dev/products")
        await page.wait_for_selector(".product")

        # Extract product data
        products = await page.evaluate("""
            Array.from(document.querySelectorAll('.product')).map(item => ({
                title: item.querySelector('h3 a')?.textContent?.trim(),
                price: item.querySelector('.price')?.textContent?.trim()
            }))
        """)
        for product in products:
            print(f"{product['title']}: {product['price']}")

        await browser.close()

asyncio.run(main())
```

Every page created within the `Stealth().use_async()` context manager receives stealth patches before any site scripts execute. There is no need to call a separate function on each page or context.
Sync API Example
The synchronous API works well for quick scripts, prototyping, and workflows that do not need concurrency:
```python
from playwright_stealth import Stealth
from playwright.sync_api import sync_playwright

with Stealth().use_sync(sync_playwright()) as playwright:
    browser = playwright.chromium.launch(headless=True)
    context = browser.new_context(
        viewport={"width": 1920, "height": 1080}
    )
    page = context.new_page()
    page.goto("https://web-scraping.dev/products")
    page.wait_for_selector(".product")
    title = page.title()
    print(f"Page title: {title}")
    browser.close()
```

The sync pattern mirrors the async version with blocking calls instead of `await`. The `Stealth().use_sync()` context manager applies the same evasion patches automatically to all browser contexts created within the block.
For cases where the context manager pattern does not fit your architecture, the manual application method gives you explicit control over which browser contexts receive stealth patches:
```python
import asyncio
from playwright_stealth import Stealth
from playwright.async_api import async_playwright

async def main():
    stealth = Stealth()
    async with async_playwright() as playwright:
        browser = await playwright.chromium.launch(headless=True)
        context = await browser.new_context()

        # Manually apply stealth to this specific context
        await stealth.apply_stealth_async(context)

        page = await context.new_page()
        await page.goto("https://web-scraping.dev/products")
        print(await page.title())
        await browser.close()

asyncio.run(main())
```

The context manager path is cleaner for most cases, but `apply_stealth_async()` is useful when you need to apply stealth selectively, for example, when some contexts require stealth while others do not. Both patterns produce identical evasion behavior.
With Python covered, the Node.js setup follows the same principles but uses a different integration pattern that reflects its Puppeteer heritage.
Playwright Stealth in Node.js
The Node.js ecosystem uses playwright-extra combined with the puppeteer-extra-plugin-stealth plugin. This setup builds on the same evasion modules originally created for Puppeteer, adapted to work with Playwright's API through a thin wrapper.
Installation & Setup
Install three packages: the Playwright wrapper, Playwright itself, and the stealth plugin:
```shell
npm install playwright playwright-extra puppeteer-extra-plugin-stealth
```

The command installs Playwright along with the `playwright-extra` wrapper and the stealth plugin. All three packages are required for the stealth setup to work.
The critical detail here is importing from playwright-extra instead of playwright. The playwright-extra wrapper extends Playwright with a plugin system while keeping the rest of the API identical. If you import from the regular playwright package, the stealth plugin will not be applied and you will get no error telling you why.
Stealth Plugin Example
Register the stealth plugin with the chromium launcher before creating any browser instances. Everything launched through the wrapped chromium object will have stealth evasions applied automatically.
```javascript
// stealth-scraper.mjs
import { chromium } from 'playwright-extra';
import StealthPlugin from 'puppeteer-extra-plugin-stealth';

chromium.use(StealthPlugin());

(async () => {
  const browser = await chromium.launch({ headless: true });
  const context = await browser.newContext({
    viewport: { width: 1920, height: 1080 }
  });
  const page = await context.newPage();

  // Navigate to the target page
  await page.goto('https://web-scraping.dev/products');
  await page.waitForSelector('.product');

  // Extract product data
  const products = await page.evaluate(() => {
    return Array.from(document.querySelectorAll('.product')).map(item => ({
      title: item.querySelector('h3 a')?.textContent?.trim(),
      price: item.querySelector('.price')?.textContent?.trim()
    }));
  });
  console.log(products);

  await browser.close();
})();
```

The same setup in CommonJS:

```javascript
// stealth-scraper.cjs
const { chromium } = require('playwright-extra');
const StealthPlugin = require('puppeteer-extra-plugin-stealth');

chromium.use(StealthPlugin());

(async () => {
  const browser = await chromium.launch({ headless: true });
  const context = await browser.newContext({
    viewport: { width: 1920, height: 1080 }
  });
  const page = await context.newPage();

  await page.goto('https://web-scraping.dev/products');
  await page.waitForSelector('.product');

  const products = await page.evaluate(() => {
    return Array.from(document.querySelectorAll('.product')).map(item => ({
      title: item.querySelector('h3 a')?.textContent?.trim(),
      price: item.querySelector('.price')?.textContent?.trim()
    }));
  });
  console.log(products);

  await browser.close();
})();
```

The underlying evasion modules are identical to what the Python package uses. The same core modules run in both languages. The difference is integration: Node.js uses the original `puppeteer-extra-plugin-stealth` package directly, while Python bundles its own ported JavaScript files.
TypeScript works without additional setup since playwright-extra ships with type declarations. The import syntax is the same as the ESM example above.
With both languages set up, the next question is what exactly these stealth patches modify in the browser environment, and why each modification matters.
What Playwright Stealth Actually Patches
Both packages ship around 17 evasion modules that modify the browser environment before any website scripts run. When a site blocks you despite stealth being active, knowing what's patched and what's not helps you figure out why.
Chrome API Emulation
Headless Chrome is missing several Chrome-specific APIs that real browsers have. Stealth patches fake these so fingerprinting scripts find what they expect:
- `chrome.app`, `chrome.csi`, `chrome.loadTimes`: Chrome-specific APIs that don't exist in headless mode. Sites check for their presence as a quick headless test.
- `chrome.runtime`: Patches the extensions API behavior. This one is disabled by default in Python v2.x because it can interfere with some sites. Enable it explicitly if needed.
Navigator Properties
The navigator object is a goldmine for detection scripts. Stealth patches several of its properties:
- `navigator.webdriver`: The big one. Automation frameworks set this to `true`, and every anti-bot system checks it.
- `navigator.plugins`: Real Chrome reports plugins like PDF Viewer. Headless Chrome reports zero.
- `navigator.languages`: Sets realistic language preferences instead of the empty default.
- `navigator.permissions`: Fixes `Permissions.query()`, which behaves differently in automated browsers.
- `navigator.vendor` and `navigator.platform` (Python only): Consistency patches so these values match the User-Agent.
- `navigator.hardwareConcurrency`: Masks the CPU core count, since cloud environments typically expose 1-2 cores, which is unusual for real devices.
Rendering and Media
- `webgl.vendor`: Spoofs the WebGL renderer and vendor strings to match real Chrome GPU info.
- `media.codecs`: Fakes the supported codec list, which differs between headed and headless Chrome.
Automation Markers
- `iframe.contentWindow`: Fixes cross-origin iframe behavior that differs in automated browsers.
- `defaultArgs` / `sourceurl` (Node.js): Removes the `--enable-automation` flag and `sourceURL` markers from injected scripts.
- `error.prototype` (Python only): Patches stack traces that can reveal the automation framework.
User-Agent Consistency
- `user-agent-override`: Patches the User-Agent across all surfaces: HTTP headers, `navigator.userAgent`, and Client Hints (`Sec-CH-UA`). This prevents the mismatch problem discussed earlier.
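To illustrate the mismatch problem these patches solve, here is a minimal consistency check. The function name and the specific heuristics are my own for illustration; real anti-bot systems cross-check many more surfaces:

```python
def ua_inconsistencies(user_agent, platform, brands):
    """Return a list of User-Agent consistency problems.

    user_agent: the navigator.userAgent string
    platform:   navigator.platform (e.g. "Win32", "MacIntel", "Linux x86_64")
    brands:     brand names from Client Hints (navigator.userAgentData.brands)
    """
    problems = []
    # The classic headless marker inside the UA string itself
    if "HeadlessChrome" in user_agent:
        problems.append("UA contains HeadlessChrome marker")
    # Claiming Windows in the UA while exposing a non-Windows platform
    if "Windows" in user_agent and not platform.startswith("Win"):
        problems.append("UA claims Windows but navigator.platform disagrees")
    # A Chrome UA should come with Chrome-like Client Hints brands
    if "Chrome/" in user_agent and brands and not any("Chrom" in b for b in brands):
        problems.append("UA claims Chrome but Client Hints brands disagree")
    return problems
```

Even a single inconsistency like these is enough to raise a trust score penalty, which is why the `user-agent-override` module patches every surface together rather than just the HTTP header.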
Platform-Specific Extras
- `chrome.hairline` (Python only): Adds CSS hairline feature detection missing in headless.
- `window.outerdimensions` (Node.js only): Fixes `outerWidth`/`outerHeight`, which return 0 in headless mode.
The core 14 modules are shared across both packages. Python adds `chrome.hairline`, `error.prototype`, `navigator.platform`, and `sec_ch_ua`. Node.js adds `defaultArgs`, `sourceurl`, and `window.outerdimensions`.
Customizing Modules
All modules are enabled by default except chrome.runtime in Python. You can toggle them through the configuration API:
```python
# Python: constructor params control individual modules
stealth = Stealth(
    chrome_runtime=True,                           # Enable chrome.runtime (disabled by default)
    navigator_webdriver=True,                      # Keep webdriver patch (default: True)
    navigator_languages_override=("en-US", "en"),  # Custom languages
    webgl_vendor_override="Intel Inc.",            # Custom WebGL vendor
)
```

```javascript
// Node.js: enabledEvasions Set controls which modules load
const stealth = StealthPlugin({
  enabledEvasions: new Set([
    'chrome.app',
    'chrome.csi',
    'chrome.loadTimes',
    'navigator.webdriver',
    'navigator.plugins',
    'navigator.languages',
    'navigator.permissions',
    'navigator.vendor',
    'media.codecs',
    'iframe.contentWindow',
    // omit modules you want disabled
  ])
});
chromium.use(stealth);
```

In Python, you pass keyword arguments to enable or disable each module. In Node.js, you pass a Set of module names and only the listed modules activate.
For most scraping tasks, the defaults work fine. Toggling modules is mainly useful for debugging when a site still blocks you despite stealth being active. All evasion modules work exclusively with Chromium. Firefox and WebKit are not supported.
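When a site works without stealth but breaks with it, a practical debugging tactic is to bisect the module list until you find the culprit. The sketch below is generic helper logic, not part of either package; the `site_breaks` callback is a placeholder for your own check, such as relaunching the browser with a given module set and loading the problem page:

```python
def bisect_modules(modules, site_breaks):
    """Find a single evasion module whose presence breaks the target site.

    modules:     list of module names, all enabled initially
    site_breaks: callable taking a set of enabled module names, returning
                 True if the site misbehaves under that configuration
    """
    candidates = list(modules)
    while len(candidates) > 1:
        # Test only the first half; keep whichever half reproduces the break
        half = candidates[: len(candidates) // 2]
        if site_breaks(set(half)):
            candidates = half
        else:
            candidates = candidates[len(candidates) // 2 :]
    # Confirm the final suspect actually reproduces the break on its own
    if candidates and site_breaks(set(candidates)):
        return candidates[0]
    return None
```

This halves the search each round, so even the full module list takes only a handful of relaunches to narrow down. It assumes a single problematic module; if disabling several together changes behavior, fall back to toggling modules one at a time.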
Now that you know what stealth patches, you need to verify that it's actually working. That's what the next section covers.
Testing Your Stealth Setup
Applying stealth patches without verifying them is a common mistake. Detection test sites let you confirm that evasions are working before you point your scraper at a production target and wonder why it gets blocked.
Scrapfly's browser fingerprint tool is the most straightforward verification tool. The page tests browser fingerprint properties including navigator.webdriver, plugin count, language settings, WebGL renderer strings, and several other signals. Results display as green (passed) or red (detected) indicators for each property.
The most reliable way to verify is to take a screenshot of the test page and inspect the results visually:
```python
import asyncio
from playwright_stealth import Stealth
from playwright.async_api import async_playwright

async def verify_stealth():
    async with Stealth().use_async(async_playwright()) as playwright:
        browser = await playwright.chromium.launch(headless=True)
        page = await browser.new_page()

        # Run the fingerprint test
        await page.goto("https://scrapfly.io/web-scraping-tools/browser-fingerprint")
        await page.wait_for_timeout(3000)
        await page.screenshot(path="stealth_result.png", full_page=True)

        # Verify key properties programmatically
        checks = await page.evaluate("""() => ({
            webdriver: navigator.webdriver,
            pluginCount: navigator.plugins.length,
            languages: navigator.languages,
            vendor: navigator.vendor
        })""")
        print(f"webdriver: {checks['webdriver']}")
        print(f"plugins: {checks['pluginCount']}")
        print(f"languages: {checks['languages']}")
        print(f"vendor: {checks['vendor']}")

        await browser.close()

asyncio.run(verify_stealth())
```

The script launches a stealth-patched browser and saves a full-page screenshot for visual inspection. The `page.evaluate()` call extracts key fingerprint values programmatically so you can verify results without opening the screenshot each time.
For a quick verification checklist, here is what to look for in your test results:

- `navigator.webdriver` returns `false` (not `true`)
- Plugin count is greater than zero (real Chrome reports at least 5 plugins)
- Languages array is populated with realistic values (not empty)
- Vendor string is "Google Inc."
- No "HeadlessChrome" substring in the User-Agent
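The checklist can also be automated. The helper below (the name is my own) checks a dict of fingerprint values like the one the verification script extracts, assuming you extend the `page.evaluate()` call to also return `navigator.userAgent`:

```python
def failed_stealth_checks(checks):
    """Return descriptions of checklist items that fail.

    Expects keys: webdriver, pluginCount, languages, vendor, userAgent.
    """
    failures = []
    if checks.get("webdriver") is not False:
        failures.append("navigator.webdriver is not false")
    if not checks.get("pluginCount"):
        failures.append("no plugins reported")
    if not checks.get("languages"):
        failures.append("languages array is empty")
    if checks.get("vendor") != "Google Inc.":
        failures.append("unexpected vendor string")
    if "HeadlessChrome" in checks.get("userAgent", ""):
        failures.append("HeadlessChrome marker in User-Agent")
    return failures
```

Running this at the start of a scraping job lets you fail fast with a clear reason instead of discovering a broken stealth setup through mysterious blocks later.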
Limitations of Playwright Stealth
Stealth plugins address fingerprint-level detection by patching JavaScript properties and browser environment signals that reveal automation. Fingerprint-level patching is one layer in a multi-layer detection stack, and being clear about where that layer ends prevents wasted debugging time.
What Stealth Does Not Handle
IP reputation is evaluated before your browser JavaScript even runs. Datacenter IPs, known VPN ranges, and flagged subnets get blocked at the network level. No amount of fingerprint patching changes where your traffic originates from.
TLS fingerprinting analyzes the cryptographic handshake between your client and the server. The JA3 fingerprint produced by Playwright's Chromium does not always match what a real user's Chrome produces, especially across different operating systems and network stacks.
JavaScript challenges from advanced anti-bot systems like Cloudflare go beyond property checks. These systems execute cryptographic proof-of-work challenges, analyze execution timing, and verify results server-side. JavaScript challenges evolve faster than open-source stealth modules can keep pace with.
Behavioral analysis tracks mouse trajectories, scroll patterns, click timing, and navigation sequences. A scraper that loads a page, waits a fixed interval, extracts data, and leaves produces a statistically distinct pattern from real browsing. Stealth patches do nothing to address behavioral detection.
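Stealth plugins do nothing here, but you can reduce the most obvious behavioral tells yourself. A minimal sketch of human-like pacing and mouse movement follows; the jitter and delay values are arbitrary illustrative choices, not tuned against any specific anti-bot system:

```python
import asyncio
import random

def humanized_path(start, end, steps=20):
    """Generate a jittered point path between two coordinates.

    An instant, perfectly straight mouse jump is a classic automation tell;
    real cursors wander. This adds small random offsets to a linear path,
    landing exactly on the target at the final step.
    """
    (x0, y0), (x1, y1) = start, end
    path = []
    for i in range(1, steps + 1):
        t = i / steps
        jitter = 0 if i == steps else random.uniform(-5, 5)
        path.append((x0 + (x1 - x0) * t + jitter, y0 + (y1 - y0) * t + jitter))
    return path

async def human_move(page, start, end):
    """Move the Playwright mouse along a jittered path with variable delays."""
    for x, y in humanized_path(start, end):
        await page.mouse.move(x, y)
        await asyncio.sleep(random.uniform(0.01, 0.05))
```

Combined with randomized waits between navigations and occasional scrolling, this breaks up the fixed-interval rhythm that behavioral scoring looks for, though it is mitigation rather than a guarantee.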
Stealth is a starting point, not a complete solution. The stealth approach eliminates the most obvious detection signals and buys you access to sites with basic protection. For sites running Cloudflare, DataDome, or PerimeterX, the limitations above become the reason your scraper gets blocked, not a missing stealth configuration option.
When stealth plugins hit their ceiling, the question becomes what replaces them at production scale. Managed browser infrastructure fills that gap.
Beyond Stealth Plugins: Scaling with Scrapfly
Stealth plugins don't scale well. Scrapfly's cloud browsers handle resource management, fingerprint rotation, and anti-bot bypasses so you don't have to.
The Scrapfly Cloud Browser API addresses these pain points while letting Playwright developers keep their existing code.
```python
import asyncio
from playwright.async_api import async_playwright

async def main():
    async with async_playwright() as playwright:
        browser = await playwright.chromium.connect_over_cdp(
            "wss://browser.scrapfly.io?api_key=YOUR_API_KEY&proxy_pool=residential&os=windows"
        )
        page = await browser.new_page()
        await page.goto("https://web-scraping.dev/products")
        await page.wait_for_selector(".product")
        products = await page.evaluate("""
            Array.from(document.querySelectorAll('.product')).map(item => ({
                title: item.querySelector('h3 a')?.textContent?.trim(),
                price: item.querySelector('.price')?.textContent?.trim()
            }))
        """)
        for product in products:
            print(f"{product['title']}: {product['price']}")
        await browser.close()

asyncio.run(main())
```

The code above connects to the Scrapfly Cloud Browser through a CDP WebSocket URL instead of launching a local Chromium instance. The `proxy_pool=residential` and `os=windows` parameters configure proxy routing and operating system fingerprint at the connection level.
Not every scraping task needs a full browser. For static pages or JavaScript-rendered content that does not require multi-step browser interaction, the Scrapfly Scrape API handles anti-bot bypass with asp=true and JavaScript rendering with render_js=true through simple HTTP calls.
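As an illustration, such a Scrape API call can be composed as a plain HTTP GET. The URL shape below follows Scrapfly's documented query-parameter pattern, but treat the endpoint and parameter names as assumptions to verify against the current API reference:

```python
from urllib.parse import urlencode

def build_scrape_url(api_key, target_url, asp=True, render_js=True):
    """Compose a Scrapfly Scrape API request URL.

    asp=True enables anti-bot bypass; render_js=True enables JS rendering.
    """
    params = {
        "key": api_key,
        "url": target_url,
        "asp": str(asp).lower(),
        "render_js": str(render_js).lower(),
    }
    return "https://api.scrapfly.io/scrape?" + urlencode(params)
```

The resulting URL can be fetched with any HTTP client; the rendered page content comes back in the JSON response, with no browser process to manage on your side.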
FAQ
Does playwright-stealth still work in 2026?
Yes, but it depends on the language. The Python package is actively maintained (v2.0.2, regular commits through 2025-2026) and works well against basic fingerprint checks. The Node.js packages were last released in March 2023 and haven't received new evasion modules since, so they lag behind newer anti-bot techniques.
What is the difference between playwright-stealth and playwright-extra?
playwright-stealth is the Python package that uses a Stealth().use_async() context manager. playwright-extra is the Node.js wrapper that adds plugin support via chromium.use(StealthPlugin()). The underlying evasion modules are largely the same, but the Python version is a port with its own bundled JS files while Node.js uses the original Puppeteer stealth plugin directly.
Can Playwright bypass Cloudflare bot detection?
Stealth plugins alone usually cannot. Cloudflare uses TLS fingerprinting, cryptographic challenges, and server-side behavioral analysis that operate below where stealth plugins can intervene. For Cloudflare-protected sites, use a managed service like Scrapfly with asp=true or combine TLS-resistant clients with residential proxies.
Is playwright-stealth the same as puppeteer-stealth?
They share the same core evasion modules. The Python package is a port with a few additions (navigator.platform, error.prototype, chrome.hairline), while the Node.js setup uses the original Puppeteer stealth plugin directly through the playwright-extra wrapper. Around 14 of 17 modules are identical.
Does Playwright stealth work with Firefox or WebKit?
No. Stealth plugins only work with Chromium. The evasion modules target Chrome-specific APIs and runtime behavior. Firefox and WebKit have entirely different internals and would require a separate set of modules.
Summary
In this guide, we covered how to set up Playwright stealth in both Python and Node.js, what each evasion module patches, and how to verify your setup against detection test pages. Here are the key takeaways:
- `playwright-stealth` (Python) is actively maintained and works well for basic to moderate fingerprint evasion.
- `playwright-extra` with `puppeteer-extra-plugin-stealth` (Node.js) shares the same core modules but hasn't been updated since 2023.
- Stealth plugins patch browser fingerprints like `navigator.webdriver`, plugins, and User-Agent consistency, but they can't address TLS fingerprinting, behavioral analysis, or advanced challenges from services like Cloudflare.
- Both packages only support Chromium.
For simple scraping tasks, stealth plugins are a solid starting point. When you need to scale or bypass advanced anti-bot systems, Scrapfly's cloud browsers handle fingerprint management, proxy rotation, and anti-bot bypasses out of the box so you can focus on extracting data.
Legal Disclaimer and Precautions
This tutorial covers popular web scraping techniques for education. Interacting with public servers requires diligence and respect:
- Do not scrape at rates that could damage the website.
- Do not scrape data that's not available publicly.
- Do not store PII of EU citizens protected by GDPR.
- Do not repurpose entire public datasets which can be illegal in some countries.
Scrapfly does not offer legal advice but these are good general rules to follow. For more you should consult a lawyer.
