
HTTP/2 and HTTP/3 Fingerprinting: Protocol-Level Bot Detection

by Ziad Shamndy · Apr 08, 2026 · 13 min read

When your scraper opens an HTTP/2 connection, the protocol settings it sends are as identifiable as a fingerprint. Anti-bot services read these signals before any page content is exchanged, and no amount of header spoofing can hide what the protocol layer exposes.

In this guide, we will break down how HTTP/2 fingerprinting works across its four core components, how HTTP/3 and QUIC introduce new detection vectors, and how major anti-bot vendors like Cloudflare, Akamai, and DataDome combine protocol fingerprints into a multi-layer detection stack.

Key Takeaways

  • HTTP/2 and HTTP/3 fingerprinting identify scrapers at the connection layer, using SETTINGS frames, flow-control values, pseudo-header ordering, and QUIC transport parameters that most HTTP clients do not match to real browsers.
  • Header spoofing is not enough: a request that claims Chrome or Firefox can still be flagged before any page content loads if its protocol fingerprint does not match the browser it claims to be.
  • The strongest detection signal is a cross-layer mismatch between TLS fingerprints, HTTP/2 behavior, HTTP/3/QUIC parameters, and the browser surface exposed to JavaScript.
  • Python scrapers need browser impersonation, not just custom headers, because standard libraries like requests, httpx, and aiohttp do not emit real browser HTTP/2 or HTTP/3 fingerprints by default.
  • curl_cffi is the practical Python option when you need to match a browser fingerprint yourself; Scrapfly is the simpler choice for teams that want browser-consistent fingerprinting across TLS, HTTP/2, and HTTP/3 without maintaining an impersonation stack, since it keeps protocol fingerprints aligned and updated automatically.
  • Protocol fingerprints are a moving target, so browser changes like RFC 9218 priority behavior, SETTINGS updates, and HTTP/3 adoption can turn an old “working” profile into a fresh bot signal.

What is HTTP/2 Fingerprinting?

HTTP/2 fingerprinting is a passive detection technique that observes how a client establishes its HTTP/2 connection to infer what software is making the request. The technique works entirely on the server side, analyzing the connection setup frames before any page content is exchanged.

HTTP/2 fingerprinting has several key characteristics:

  • Identifies software, not users. Each HTTP client (Chrome, Firefox, curl, Python's httpx) produces distinct default values and frame sequences, revealing the software implementation behind a request
  • Consistent and predictable. These differences don't change between requests, making protocol analysis a reliable way to verify whether the claimed User-Agent matches the actual client
  • Pre-content detection. The fingerprint is generated during connection establishment, before the server processes any content request. First presented at Black Hat USA 2017 by Akamai researchers, it's now standard in commercial anti-bot products

For web scrapers, this is a critical problem: even with perfect headers, HTTP/2 connection parameters can reveal the actual client library (e.g. Python's hyper-h2 or Go's net/http2), making protocol analysis one of the strongest bot detection signals.

The Four Components of an HTTP/2 Fingerprint

When a client starts an HTTP/2 connection, it sends a series of frames that configure how the connection will work. Different browsers and libraries make different choices here, and those choices form four distinct signals that together create a reliable fingerprint. Let's walk through each one.

SETTINGS Frame Parameters

The very first thing a client does after opening an HTTP/2 connection is send a SETTINGS frame. Think of it as a handshake where the client says "here's how I'd like this connection to work." The HTTP/2 spec defines six standard parameters, but which ones a client actually sends, and in what order, varies across browsers and versions:

| Parameter | ID | Chrome 119+ | Notes |
|---|---|---|---|
| HEADER_TABLE_SIZE | 1 | 65536 | Always present |
| ENABLE_PUSH | 2 | 0 | Added in Chrome 119 (wasn't sent before) |
| MAX_CONCURRENT_STREAMS | 3 | not sent | Was 1000 in Chrome 100-118, then dropped |
| INITIAL_WINDOW_SIZE | 4 | 6291456 | curl's default is ~64 KB, a ~100x difference |
| MAX_FRAME_SIZE | 5 | not sent | Chrome doesn't send this at all |
| MAX_HEADER_LIST_SIZE | 6 | 262144 | Not all browsers include this |

Notice that Chrome only sends four of the six parameters. Which parameters are absent is just as important as the values themselves. A scraper sending 3:1000 (old Chrome behavior) alongside a Chrome 130+ User-Agent is instantly flagged because the SETTINGS don't match that version.

It's not just the values that matter, but also the order they appear in. Chrome, Firefox, and Python libraries all send these parameters in a different sequence, and anti-bot services check both.
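To make the ordering point concrete, here is a small sketch (not any vendor's actual code) that encodes SETTINGS pairs into the first segment of an Akamai-style fingerprint string, using the Chrome 119+ values discussed above:

```python
# Chrome 119+ SETTINGS (ID, value) pairs in the order they appear on the wire
CHROME_SETTINGS = [(1, 65536), (2, 0), (4, 6291456), (6, 262144)]

def settings_segment(pairs):
    # Order is preserved deliberately: the same pairs sent in a different
    # sequence produce a different fingerprint string.
    return ";".join(f"{sid}:{value}" for sid, value in pairs)

print(settings_segment(CHROME_SETTINGS))
# → 1:65536;2:0;4:6291456;6:262144
```

Note that absent parameters (IDs 3 and 5 for Chrome) simply never appear in the segment, which is itself part of the signal.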

WINDOW_UPDATE Frame

Right after the SETTINGS frame, most clients send a WINDOW_UPDATE frame to expand the connection-level flow control window beyond the default 65535 bytes. The values are quite specific:

  • Chrome sends 15663105, bringing the total window to exactly 15 MiB
  • Firefox sends 12517377, bringing its total to exactly 12 MiB
  • Non-browser clients often skip this frame entirely (recorded as 0 in the fingerprint)

Why do these differ so much? Browsers are optimized for loading pages with dozens of resources in parallel, so they request large windows. Libraries tend to stick with conservative defaults, which makes them stand out.
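The arithmetic behind these increments is easy to verify: each browser picks a value that lands the total window (default plus increment) on a clean power-of-two boundary.

```python
DEFAULT_WINDOW = 65535  # RFC 7540 initial connection-level flow control window

# WINDOW_UPDATE increments observed from major browsers
CHROME_INCREMENT = 15663105
FIREFOX_INCREMENT = 12517377

chrome_total = DEFAULT_WINDOW + CHROME_INCREMENT    # 15728640 = 15 MiB exactly
firefox_total = DEFAULT_WINDOW + FIREFOX_INCREMENT  # 12582912 = 12 MiB exactly

print(chrome_total // 2**20, firefox_total // 2**20)  # → 15 12
```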

PRIORITY Frames (Deprecated)

The original HTTP/2 spec (RFC 7540) let clients build a dependency tree to prioritize streams. Each browser created its own distinctive tree structure, adding another layer to the fingerprint. RFC 9218 (June 2022) deprecated this system in favor of the Extensible Prioritization scheme.

Modern Chrome does not send separate PRIORITY frames at all. Instead, it sets priority via the HEADERS frame itself (weight=256, exclusive=1) and uses the priority: u=0, i HTTP header from RFC 9218. The fingerprint records 0 in the priority field, and that absence is itself the distinguishing signal.

Firefox historically sent explicit PRIORITY frames for multiple streams, so presence vs. absence is the distinguishing signal:

  • Chrome (modern): no PRIORITY frames (0 in fingerprint), uses RFC 9218 priority header instead
  • Firefox: still sends PRIORITY frames for stream dependencies
  • Scrapers: claiming to be modern Chrome but sending PRIORITY frames, or claiming Firefox but missing them, creates an instant mismatch

Pseudo-Header Ordering

Every HTTP/2 request starts with four pseudo-headers (:method, :authority, :scheme, :path) that replace the old HTTP/1.1 request line. The spec requires them before regular headers but doesn't specify their order, so each browser picks its own:

  • Chrome: :method, :authority, :scheme, :path (m,a,s,p)
  • Firefox: :method, :path, :authority, :scheme (m,p,a,s)
  • Safari: :method, :scheme, :path, :authority (m,s,p,a)

This is the easiest component to both detect and spoof. Still, most HTTP libraries in Python, Go, and other languages use an ordering that doesn't match any browser, making automated requests easy to spot.
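A detector only needs a few lines to reduce the observed pseudo-header sequence to the shorthand codes and look it up. This is an illustrative sketch, not real detection code:

```python
CODES = {":method": "m", ":authority": "a", ":scheme": "s", ":path": "p"}

def order_code(pseudo_headers):
    """Reduce the observed pseudo-header sequence to the m/a/s/p shorthand."""
    return ",".join(CODES[name] for name in pseudo_headers)

# Orderings from the list above
KNOWN_ORDERS = {"m,a,s,p": "Chrome", "m,p,a,s": "Firefox", "m,s,p,a": "Safari"}

code = order_code([":method", ":authority", ":scheme", ":path"])
print(code, KNOWN_ORDERS.get(code, "unknown client"))  # → m,a,s,p Chrome
```

An ordering that maps to "unknown client" is exactly how a Python or Go library's default request stands out.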

Now that we've covered all four components, let's see how they're combined into a single fingerprint string that anti-bot services can compare and hash efficiently.

Reading an HTTP/2 Fingerprint String

The four fingerprint components are combined into a single string using a format popularized by Akamai's research. The format uses pipe characters to separate the four sections:

S[;]|WU|P[,]#|PS[,]

Each segment encodes one component:

  • S = SETTINGS parameters as ID:Value pairs, separated by semicolons. The order of pairs reflects the order in the SETTINGS frame
  • WU = WINDOW_UPDATE value (the increment sent on stream 0, or 0 if no WINDOW_UPDATE was sent)
  • P = PRIORITY frame fields as StreamID:Exclusivity:DependentStreamID:Weight entries, comma-separated for multiple PRIORITY frames. If no PRIORITY frames are sent, this section contains 0
  • PS = Pseudo-header order as single-letter codes (m = :method, a = :authority, s = :scheme, p = :path), comma-separated

Here's what Chrome 144's actual fingerprint looks like:

1:65536;2:0;4:6291456;6:262144|15663105|0|m,a,s,p

Breaking it down:

  • 1:65536;2:0;4:6291456;6:262144 : SETTINGS header_table=65536, push=disabled, window=6 MB, max_headers=256 KB. Notice parameter IDs 3 and 5 are absent (Chrome doesn't send MAX_CONCURRENT_STREAMS or MAX_FRAME_SIZE)
  • 15663105 : WINDOW_UPDATE: connection-level flow control increment
  • 0 : No PRIORITY frames sent
  • m,a,s,p : Pseudo-header order: :method, :authority, :scheme, :path

Any anti-bot service comparing an httpx or curl default fingerprint against this will reject the connection instantly. The fingerprint string can be hashed (MD5 or SHA-256) for efficient database lookups, similar to how JA3 fingerprints are hashed for TLS identification.
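Since the format is just pipe- and semicolon-delimited text, splitting a fingerprint string back into its four components takes only a few lines. A minimal parser sketch:

```python
def parse_fingerprint(fp: str) -> dict:
    """Split an Akamai-style HTTP/2 fingerprint into its four components."""
    settings, window_update, priority, pseudo = fp.split("|")
    return {
        # dict preserves insertion order (Python 3.7+), so SETTINGS order survives
        "settings": {int(k): int(v)
                     for k, v in (pair.split(":") for pair in settings.split(";"))},
        "window_update": int(window_update),
        "priority": priority,  # "0" means no PRIORITY frames were sent
        "pseudo_header_order": pseudo.split(","),
    }

chrome = parse_fingerprint("1:65536;2:0;4:6291456;6:262144|15663105|0|m,a,s,p")
print(chrome["settings"], chrome["pseudo_header_order"])
```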

HTTP/3 and QUIC Fingerprinting

HTTP/3 replaces the TCP transport used by HTTP/2 with QUIC, a UDP-based protocol that integrates transport and encryption into a single handshake. The shift to QUIC introduces a new set of fingerprintable parameters that complement existing HTTP/2 signals.

QUIC Transport Parameters

QUIC transport parameters work like HTTP/2 SETTINGS frames but at the transport layer. The key fingerprintable parameters include:

  • initial_max_data sets the connection-level flow control limit (Chrome uses 15 MB)
  • initial_max_stream_data_bidi_local/remote control per-stream flow control windows. Chrome uses 6291456, mirroring its HTTP/2 INITIAL_WINDOW_SIZE
  • initial_max_streams_bidi/uni limit concurrent streams, similar to HTTP/2's MAX_CONCURRENT_STREAMS
  • max_idle_timeout sets how long a connection can stay idle (Chrome uses 30000ms)
  • max_udp_payload_size caps the size of UDP datagrams the client will accept

Just like HTTP/2 SETTINGS, each QUIC implementation sends different defaults for these values. Additional signals like version negotiation behavior and connection ID length also contribute to the overall fingerprint.
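A detector can treat these transport parameters like an HTTP/2 SETTINGS profile and compare observed values against known browser defaults. The sketch below uses the Chrome values cited above (15 MB is assumed to mean 15 MiB here); real databases hold many profiles per browser version:

```python
# Reference values from the Chrome examples above (illustrative, not exhaustive)
CHROME_QUIC_PROFILE = {
    "initial_max_data": 15 * 1024 * 1024,           # connection-level window
    "initial_max_stream_data_bidi_local": 6291456,  # mirrors HTTP/2 INITIAL_WINDOW_SIZE
    "max_idle_timeout": 30000,                      # milliseconds
}

def matches_profile(observed: dict, profile: dict) -> bool:
    """Naive exact-match check of observed QUIC transport parameters."""
    return all(observed.get(key) == value for key, value in profile.items())

print(matches_profile(dict(CHROME_QUIC_PROFILE), CHROME_QUIC_PROFILE))  # → True
print(matches_profile({"initial_max_data": 1048576}, CHROME_QUIC_PROFILE))  # → False
```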

How HTTP/3 Fingerprinting Differs from HTTP/2

Since HTTP/3 runs over QUIC (UDP) instead of TCP, the transport and TLS handshakes happen in a single round trip, meaning transport parameters and TLS data are exchanged simultaneously. This introduces new fingerprinting signals with no HTTP/2 equivalent:

  • 0-RTT behavior: QUIC lets returning clients send data in the very first reconnection packet. How a client handles this, including which parameters it caches, creates a unique behavioral fingerprint
  • Connection migration: QUIC connections can survive IP changes (e.g. WiFi to cellular). Whether a client supports this is implementation-specific and adds to the fingerprint
  • HTTP/3 upgrade capability: Most scraping libraries are HTTP/2-only. Real browsers upgrade to HTTP/3 when the server advertises support via Alt-Svc. A client that never upgrades stands out immediately
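The Alt-Svc mechanism mentioned above is easy to check for: a server advertising HTTP/3 includes an h3 entry (or an older h3-NN draft token) in the Alt-Svc response header. A minimal parsing sketch:

```python
def server_offers_h3(alt_svc_header: str) -> bool:
    """Check whether an Alt-Svc header advertises HTTP/3 (h3 or a draft h3-NN)."""
    protocols = (entry.strip().split("=")[0] for entry in alt_svc_header.split(","))
    return any(p == "h3" or p.startswith("h3-") for p in protocols)

print(server_offers_h3('h3=":443"; ma=86400, h2=":443"'))  # → True
print(server_offers_h3('h2=":443"'))                        # → False
```

A real browser that sees the first header upgrades on the next connection; a scraping library that keeps speaking HTTP/2 regardless is the detection signal.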

HTTP/3 fingerprinting is still maturing. Cloudflare and Google lead server-side adoption, but anti-bot products haven't caught up to HTTP/2-level detection yet. The direction is clear though: as HTTP/3 grows, expect QUIC transport parameters to join TLS and HTTP/2 as standard fingerprinting signals.

Now let's look at how anti-bot services actually use these fingerprints in production.

How Anti-Bot Services Use Protocol Fingerprints

Anti-bot services don't rely on a single signal. They stack multiple detection layers that cross-validate each other, and inconsistencies between layers trigger the strongest responses.

Multi-Layer Detection Stack

Modern anti-bot detection works across three layers:

  • TLS layer (JA3/JA4): The TLS ClientHello reveals cipher suites, extensions, and protocol versions. Chrome uses BoringSSL, Firefox uses NSS, Python uses OpenSSL, and each produces a distinct fingerprint
  • HTTP/2 layer: The connection parameters (SETTINGS, WINDOW_UPDATE, pseudo-headers) identify the HTTP stack. Using BoringSSL for TLS but sending Python's hyper-h2 HTTP/2 settings creates an obvious mismatch
  • Browser surface layer: JavaScript environment checks (navigator, canvas, WebGL) provide a third layer. Tools like Playwright and Puppeteer pass this layer but can still fail at TLS or HTTP/2 if the connection runs through a non-browser HTTP stack

All three layers must tell the same story. A real Chrome browser produces a BoringSSL TLS fingerprint, Chrome HTTP/2 SETTINGS, and a Chrome JS environment. When any layer contradicts another, that's the detection signal.
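Conceptually, the cross-validation reduces to checking that every layer classifies as the same client. The labels below are hypothetical classifier outputs, not real fingerprint hashes:

```python
def layers_agree(tls: str, http2: str, js_surface: str) -> bool:
    """Toy cross-layer check: all three layers must tell the same story."""
    return tls == http2 == js_surface

print(layers_agree("chrome", "chrome", "chrome"))           # → True (consistent)
print(layers_agree("chrome", "python-hyper-h2", "chrome"))  # → False (flagged)
```

The second call models the common failure mode: browser-grade TLS and JS surface, but a Python HTTP/2 stack underneath.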

Cloudflare, Akamai, and DataDome

Cloudflare Bot Management matches observed TLS + HTTP/2 fingerprints against a database of known browser profiles. Mismatches or unknown fingerprints trigger JavaScript challenges or outright blocks. Processing billions of requests daily gives them a massive classification dataset.

Akamai Bot Manager pioneered commercial HTTP/2 fingerprinting after the 2017 Black Hat presentation. Their sensor script combines passive protocol analysis with active JavaScript challenges, generating an encoded payload that covers both layers.

DataDome combines protocol fingerprints with behavioral analysis, evaluating the full request lifecycle from connection establishment through navigation patterns.

Spoofing a single layer isn't enough. A scraper must produce consistent fingerprints across TLS, HTTP/2, and the browser surface. Now let's cover the practical tools and techniques for producing correct HTTP/2 fingerprints in Python.

Test Your Fingerprint with Scrapfly

Before trying to spoof anything, you need to know what your client actually looks like. Scrapfly offers free tools to inspect your fingerprint at each protocol layer.


If your scraper is getting blocked, run it through these analyzers to spot the mismatch between your fingerprint and the browser you're impersonating.

FAQ

Is HTTP/2 fingerprinting the same as TLS fingerprinting?

No. TLS fingerprinting (JA3/JA4) analyzes the TLS handshake to identify the cryptographic library (BoringSSL, NSS, OpenSSL). HTTP/2 fingerprinting analyzes the connection setup frames (SETTINGS, WINDOW_UPDATE, PRIORITY) sent after TLS completes. Anti-bot services check both layers and flag mismatches between them.

Can I change my HTTP/2 fingerprint?

Yes, but not with standard libraries like requests, httpx, or aiohttp as they don't expose HTTP/2 SETTINGS or pseudo-header ordering. You need a browser impersonation library like curl_cffi (Python) or tls-client (Go), which let you select a target browser profile that configures all HTTP/2 parameters automatically.
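For example, a minimal curl_cffi sketch (assumes curl_cffi is installed via pip install curl_cffi; the impersonate profile name follows curl_cffi's conventions):

```python
def chrome_session():
    """Return a session that emits Chrome's TLS and HTTP/2 fingerprints.

    The impersonate target configures cipher suites, HTTP/2 SETTINGS,
    and pseudo-header ordering together, so the layers stay consistent.
    """
    from curl_cffi import requests  # lazy import: only needed at call time
    return requests.Session(impersonate="chrome")

# Usage (performs a real network request):
# resp = chrome_session().get("https://example.com")
# print(resp.status_code)
```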

Does HTTP/3 fingerprinting replace HTTP/2 fingerprinting?

No. HTTP/3 fingerprinting adds signals on top of HTTP/2 detection. HTTP/2 is still the dominant protocol, and most scraping libraries don't support HTTP/3 at all, which means the inability to upgrade is itself a detection signal.

What happens if my TLS and HTTP/2 fingerprints don't match?

This is the strongest detection signal available. If your TLS fingerprint says Chrome (BoringSSL) but your HTTP/2 parameters show Python's hyper-h2 defaults, anti-bot services will flag it immediately. Libraries like curl_cffi solve this by configuring both TLS and HTTP/2 to match the same browser profile.

How often do browser HTTP/2 fingerprints change?

Infrequently, but it happens. Chrome's removal of PRIORITY frames after RFC 9218 was a major change. Minor SETTINGS tweaks can occur with any browser update. If you manage your own impersonation profiles, monitor release notes. Services like Scrapfly update browser profiles automatically.

Summary

In this guide, we covered how HTTP/2 and HTTP/3 fingerprinting works across SETTINGS frames, WINDOW_UPDATE, PRIORITY, pseudo-header ordering, and QUIC transport parameters, and how anti-bot services like Cloudflare, Akamai, and DataDome combine these with TLS fingerprinting into a multi-layer detection stack.

The key insight is that header spoofing alone isn't enough. Anti-bot systems cross-validate your TLS fingerprint, HTTP/2 connection parameters, and browser surface to catch inconsistencies. To avoid detection, every layer must tell the same story.

For practical bypass, browser impersonation libraries like curl_cffi let you match a real browser's fingerprint across both TLS and HTTP/2 layers. And if you'd rather skip the fingerprint management entirely, Scrapfly's web scraping API handles all protocol-level fingerprinting automatically, keeping browser profiles up to date as new versions are released.
