# HTTP/2 and HTTP/3 Fingerprinting: Protocol-Level Bot Detection

 by [Ziad Shamndy](https://scrapfly.io/blog/author/ziad) Apr 18, 2026 13 min read [\#blocking](https://scrapfly.io/blog/tag/blocking) [\#http](https://scrapfly.io/blog/tag/http) [\#python](https://scrapfly.io/blog/tag/python) [\#tools](https://scrapfly.io/blog/tag/tools) 


When your scraper opens an HTTP/2 connection, the protocol settings it sends are as identifiable as a fingerprint. Anti-bot services read these signals before any page content is exchanged, and no amount of header spoofing can hide what the protocol layer exposes.

In this guide, we will break down how HTTP/2 fingerprinting works across its four core components, how HTTP/3 and QUIC introduce new detection vectors, and how major anti-bot vendors like Cloudflare, Akamai, and DataDome combine protocol fingerprints into a multi-layer detection stack.

## Key Takeaways

- **HTTP/2 and HTTP/3 fingerprinting identify scrapers at the connection layer**, using SETTINGS frames, flow-control values, pseudo-header ordering, and QUIC transport parameters that most HTTP clients do not match to real browsers.
- **Header spoofing is not enough**: a request that claims Chrome or Firefox can still be flagged before any page content loads if its protocol fingerprint does not match the browser it claims to be.
- **The strongest detection signal is a cross-layer mismatch** between TLS fingerprints, HTTP/2 behavior, HTTP/3/QUIC parameters, and the browser surface exposed to JavaScript.
- **Python scrapers need browser impersonation, not just custom headers**, because standard libraries like `requests`, `httpx`, and `aiohttp` do not emit real browser HTTP/2 or HTTP/3 fingerprints by default.
- **`curl_cffi` is the practical Python option when you need to match a browser fingerprint yourself**, while **Scrapfly** is the simplest option for teams that want browser-consistent fingerprinting across TLS, HTTP/2, and HTTP/3 without maintaining their own impersonation stack, since it keeps protocol fingerprints aligned and updated automatically.
- **Protocol fingerprints are a moving target**, so browser changes like RFC 9218 priority behavior, SETTINGS updates, and HTTP/3 adoption can turn an old “working” profile into a fresh bot signal.


## What is HTTP/2 Fingerprinting?

HTTP/2 fingerprinting is a passive detection technique that observes how a client establishes its HTTP/2 connection to infer what software is making the request. The technique works entirely on the server side, analyzing the connection setup frames before any page content is exchanged.

HTTP/2 fingerprinting has several key characteristics:

- **Identifies software, not users.** Each HTTP client (Chrome, Firefox, curl, Python's [httpx](https://www.python-httpx.org/)) produces distinct default values and frame sequences, revealing the software implementation behind a request
- **Consistent and predictable.** These differences don't change between requests, making protocol analysis a reliable way to verify whether the claimed User-Agent matches the actual client
- **Pre-content detection.** The fingerprint is generated during connection establishment, before the server processes any content request. First presented at Black Hat USA 2017 by Akamai researchers, the technique is now standard in commercial anti-bot products

For web scrapers, this is a critical problem: even with perfect headers, HTTP/2 connection parameters can reveal the actual client library (e.g. Python's hyper-h2 or Go's net/http2), making protocol analysis one of the strongest bot detection signals.

[How TLS Fingerprint is Used to Block Web Scrapers?TLS fingerprinting is a popular way to identify web scrapers that not many developers are aware of. What is it and how can we fortify our scrapers to avoid being detected?](https://scrapfly.io/blog/posts/how-to-avoid-web-scraping-blocking-tls)



## The Four Components of an HTTP/2 Fingerprint

When a client starts an HTTP/2 connection, it sends a series of frames that configure how the connection will work. Different browsers and libraries make different choices here, and those choices form four distinct signals that together create a reliable fingerprint. Let's walk through each one.

### SETTINGS Frame Parameters

The very first thing a client does after opening an HTTP/2 connection is send a SETTINGS frame. Think of it as a handshake where the client says "here's how I'd like this connection to work." The HTTP/2 spec defines six standard parameters, but which ones a client actually sends, and in what order, varies across browsers and versions:

| Parameter | ID | Chrome 119+ | Notes |
|---|---|---|---|
| HEADER\_TABLE\_SIZE | 1 | 65536 | Always present |
| ENABLE\_PUSH | 2 | 0 | Added in Chrome 119 (wasn't sent before) |
| MAX\_CONCURRENT\_STREAMS | 3 | *not sent* | Was 1000 in Chrome 100-118, then dropped |
| INITIAL\_WINDOW\_SIZE | 4 | 6291456 | Default curl sends ~64 KB, a 100x difference |
| MAX\_FRAME\_SIZE | 5 | *not sent* | Chrome doesn't send this at all |
| MAX\_HEADER\_LIST\_SIZE | 6 | 262144 | Not all browsers include this |

Notice that Chrome only sends four of the six parameters. Which parameters are *absent* is just as important as the values themselves. A scraper sending `3:1000` (old Chrome behavior) alongside a Chrome 130+ User-Agent is instantly flagged because the SETTINGS don't match that version.

It's not just the values that matter, but also the *order* they appear in. Chrome, Firefox, and Python libraries all send these parameters in a different sequence, and anti-bot services check both.
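On the wire, these values travel as a simple binary payload: per RFC 7540, each setting in a SETTINGS frame is a 2-byte identifier followed by a 4-byte big-endian value, concatenated in whatever order the client chose. A minimal sketch of how a fingerprinting server might decode that payload back into ordered pairs (function names are illustrative):

```python
import struct

# Chrome 119+ SETTINGS, in the order Chrome sends them (from the table above).
CHROME_SETTINGS = [(1, 65536), (2, 0), (4, 6291456), (6, 262144)]

def encode_settings(settings):
    """Serialize (id, value) pairs into a raw SETTINGS frame payload."""
    return b"".join(struct.pack(">HI", sid, val) for sid, val in settings)

def decode_settings(payload):
    """Parse a SETTINGS payload back into ordered (id, value) pairs."""
    return [struct.unpack(">HI", payload[i:i + 6])
            for i in range(0, len(payload), 6)]

payload = encode_settings(CHROME_SETTINGS)
assert decode_settings(payload) == CHROME_SETTINGS  # order survives the round trip
```

Because decoding preserves order, both the values and their sequence are available to the detector for free.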

### WINDOW\_UPDATE Frame

Right after the SETTINGS frame, most clients send a WINDOW\_UPDATE frame to expand the connection-level flow control window beyond the default 65535 bytes. The values are quite specific:

- **Chrome** sends 15663105, setting a total window of ~15 MB
- **Firefox** sends 12517377, landing at a different total
- **Non-browser clients** often skip this frame entirely (recorded as 0 in the fingerprint)

Why do these differ so much? Browsers are optimized for loading pages with dozens of resources in parallel, so they request large windows. Libraries tend to stick with conservative defaults, which makes them stand out.
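The increments look arbitrary, but they are just round binary totals minus the 65535-byte default window every HTTP/2 connection starts with, which is easy to verify:

```python
# HTTP/2 opens every connection with a 65535-byte flow-control window;
# the WINDOW_UPDATE increment is added on top of it.
DEFAULT_WINDOW = 65535

chrome_total = DEFAULT_WINDOW + 15663105   # Chrome's increment
firefox_total = DEFAULT_WINDOW + 12517377  # Firefox's increment

assert chrome_total == 15 * 2**20   # exactly 15 MiB
assert firefox_total == 12 * 2**20  # exactly 12 MiB
```

A library that skips WINDOW\_UPDATE entirely stays at 65535 bytes, a value no modern browser keeps.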

### PRIORITY Frames (Deprecated)

The original HTTP/2 spec (RFC 7540) let clients build a dependency tree to prioritize streams. Each browser created its own distinctive tree structure, adding another layer to the fingerprint. RFC 9218 (June 2022) deprecated this system in favor of the Extensible Prioritization scheme.

Modern Chrome does *not* send separate PRIORITY frames at all. Instead, it sets priority via the HEADERS frame itself (weight=256, exclusive=1) and uses the `priority: u=0, i` HTTP header from RFC 9218. The fingerprint records `0` in the priority field, and that zero *is* the fingerprint.

Firefox historically sent explicit PRIORITY frames for multiple streams, so presence vs. absence is the distinguishing signal:

- **Chrome (modern)**: no PRIORITY frames (`0` in fingerprint), uses RFC 9218 `priority` header instead
- **Firefox**: still sends PRIORITY frames for stream dependencies
- **Scrapers**: claiming to be modern Chrome but sending PRIORITY frames, or claiming Firefox but missing them, creates an instant mismatch
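The presence/absence logic above reduces to a small consistency check; a sketch with an illustrative helper name, using the `0`-vs-nonzero encoding of the priority field described here:

```python
def priority_consistent(claimed_browser, priority_field):
    """Does the PRIORITY fingerprint field agree with the claimed browser?"""
    if claimed_browser == "chrome":
        return priority_field == "0"   # modern Chrome sends no PRIORITY frames
    if claimed_browser == "firefox":
        return priority_field != "0"   # Firefox still sends them
    return True                        # unknown browser: nothing to contradict

assert priority_consistent("chrome", "0")
assert not priority_consistent("chrome", "3:0:0:201")  # old-style frames + Chrome UA
```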

### Pseudo-Header Ordering

Every HTTP/2 request starts with four pseudo-headers (`:method`, `:authority`, `:scheme`, `:path`) that replace the old HTTP/1.1 request line. The spec requires them before regular headers but doesn't specify their order, so each browser picks its own:

- **Chrome**: `:method`, `:authority`, `:scheme`, `:path` (`m,a,s,p`)
- **Firefox**: `:method`, `:path`, `:authority`, `:scheme` (`m,p,a,s`)
- **Safari**: `:method`, `:scheme`, `:path`, `:authority` (`m,s,p,a`)

This is the easiest component to both detect and spoof. Still, most HTTP libraries in Python, Go, and other languages use an ordering that doesn't match any browser, making automated requests easy to spot.
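Encoding an observed ordering into the single-letter codes used in fingerprint strings is trivial; a minimal sketch (helper name is illustrative):

```python
# One letter per pseudo-header, preserving the order the client sent them in.
CODES = {":method": "m", ":authority": "a", ":scheme": "s", ":path": "p"}

def pseudo_header_order(headers):
    """Encode observed pseudo-header order as 'm,a,s,p'-style codes."""
    return ",".join(CODES[h] for h in headers if h in CODES)

assert pseudo_header_order([":method", ":authority", ":scheme", ":path"]) == "m,a,s,p"  # Chrome
assert pseudo_header_order([":method", ":path", ":authority", ":scheme"]) == "m,p,a,s"  # Firefox
```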

Now that we've covered all four components, let's see how they're combined into a single fingerprint string that anti-bot services can compare and hash efficiently.



## Reading an HTTP/2 Fingerprint String

The four fingerprint components are combined into a single string using a format popularized by Akamai's research. The format uses pipe characters to separate the four sections:

```
S[;]|WU|P[,]|PS[,]
```



Each segment encodes one component:

- **S** = SETTINGS parameters as `ID:Value` pairs, separated by semicolons. The order of pairs reflects the order in the SETTINGS frame
- **WU** = WINDOW\_UPDATE value (the increment sent on stream 0, or 0 if no WINDOW\_UPDATE was sent)
- **P** = PRIORITY frame fields as `StreamID:Exclusivity:DependentStreamID:Weight` entries, comma-separated for multiple PRIORITY frames. If no PRIORITY frames are sent, this section contains `0`
- **PS** = Pseudo-header order as single-letter codes (m = :method, a = :authority, s = :scheme, p = :path), comma-separated

Here's what Chrome 144's actual fingerprint looks like:

```
1:65536;2:0;4:6291456;6:262144|15663105|0|m,a,s,p
```



Breaking it down:

- `1:65536;2:0;4:6291456;6:262144` : SETTINGS with header\_table=65536, push disabled, window=6 MB, max\_headers=256 KB. Notice parameter IDs 3 and 5 are absent (Chrome doesn't send MAX\_CONCURRENT\_STREAMS or MAX\_FRAME\_SIZE)
- `15663105` : WINDOW\_UPDATE connection-level flow control increment
- `0` : no PRIORITY frames sent
- `m,a,s,p` : pseudo-header order :method, :authority, :scheme, :path

Any anti-bot service comparing an httpx or curl default fingerprint against this will reject the connection instantly. The fingerprint string can be hashed (MD5 or SHA-256) for efficient database lookups, similar to how JA3 fingerprints are hashed for TLS identification.
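Assembling the four components into that string, and hashing it for lookup, can be sketched in a few lines (`akamai_fingerprint` is a hypothetical helper, not a published API):

```python
import hashlib

def akamai_fingerprint(settings, window_update, priorities, pseudo_order):
    """Join the four components with pipes, Akamai-style."""
    s = ";".join(f"{sid}:{val}" for sid, val in settings)
    p = ",".join(":".join(map(str, pr)) for pr in priorities) if priorities else "0"
    return f"{s}|{window_update}|{p}|{','.join(pseudo_order)}"

chrome = akamai_fingerprint(
    settings=[(1, 65536), (2, 0), (4, 6291456), (6, 262144)],
    window_update=15663105,
    priorities=[],        # modern Chrome sends no PRIORITY frames
    pseudo_order="masp",  # :method, :authority, :scheme, :path
)
assert chrome == "1:65536;2:0;4:6291456;6:262144|15663105|0|m,a,s,p"

# Hash for compact database lookups, as with JA3 for TLS.
digest = hashlib.md5(chrome.encode()).hexdigest()
```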

## HTTP/3 and QUIC Fingerprinting

HTTP/3 replaces the TCP transport used by HTTP/2 with QUIC, a UDP-based protocol that integrates transport and encryption into a single handshake. The shift to QUIC introduces a new set of fingerprintable parameters that complement existing HTTP/2 signals.

### QUIC Transport Parameters

QUIC transport parameters work like HTTP/2 SETTINGS frames but at the transport layer. The key fingerprintable parameters include:

- **initial\_max\_data** sets the connection-level flow control limit (Chrome uses 15 MB)
- **initial\_max\_stream\_data\_bidi\_local/remote** control per-stream flow control windows. Chrome uses 6291456, mirroring its HTTP/2 INITIAL\_WINDOW\_SIZE
- **initial\_max\_streams\_bidi/uni** limit concurrent streams, similar to HTTP/2's MAX\_CONCURRENT\_STREAMS
- **max\_idle\_timeout** sets how long a connection can stay idle (Chrome uses 30000ms)
- **max\_udp\_payload\_size** caps the size of UDP datagrams the client will accept

Just like HTTP/2 SETTINGS, each QUIC implementation sends different defaults for these values. Additional signals like version negotiation behavior and connection ID length also contribute to the overall fingerprint.
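Comparing observed transport parameters against a known browser profile works the same way as for SETTINGS. A sketch using illustrative values taken from the list above (real Chrome profiles vary by version, so treat this dict as an assumption, not reference data):

```python
# Illustrative Chrome-like QUIC transport parameters, per the values above.
CHROME_QUIC = {
    "initial_max_data": 15 * 2**20,                 # ~15 MB connection window
    "initial_max_stream_data_bidi_local": 6291456,  # mirrors HTTP/2 INITIAL_WINDOW_SIZE
    "max_idle_timeout": 30000,                      # milliseconds
}

def quic_mismatches(observed, profile=CHROME_QUIC):
    """Return parameters where an observed client deviates from the profile."""
    return {k: (observed.get(k), v) for k, v in profile.items()
            if observed.get(k) != v}

# A library keeping conservative defaults stands out immediately:
bot = dict(CHROME_QUIC, initial_max_data=65536)
assert "initial_max_data" in quic_mismatches(bot)
```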

### How HTTP/3 Fingerprinting Differs from HTTP/2

Since HTTP/3 runs over QUIC (UDP) instead of TCP, the transport and TLS handshakes happen in a single round trip, meaning transport parameters and TLS data are exchanged simultaneously. This introduces new fingerprinting signals with no HTTP/2 equivalent:

- **0-RTT behavior:** QUIC lets returning clients send data in the very first reconnection packet. How a client handles this, including which parameters it caches, creates a unique behavioral fingerprint
- **Connection migration:** QUIC connections can survive IP changes (e.g. WiFi to cellular). Whether a client supports this is implementation-specific and adds to the fingerprint
- **HTTP/3 upgrade capability:** Most scraping libraries are HTTP/2-only. Real browsers upgrade to HTTP/3 when the server advertises support via `Alt-Svc`. A client that never upgrades stands out immediately
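Servers advertise HTTP/3 support in the `Alt-Svc` response header, so checking whether a client *could* upgrade is a matter of parsing it; a minimal sketch:

```python
def advertises_h3(alt_svc):
    """Check whether an Alt-Svc response header offers HTTP/3."""
    # Each comma-separated entry looks like: h3=":443"; ma=86400
    protocols = [entry.split("=", 1)[0].strip() for entry in alt_svc.split(",")]
    return any(p in ("h3", "h3-29") for p in protocols)

assert advertises_h3('h3=":443"; ma=86400, h2=":443"')
assert not advertises_h3('h2=":443"')
```

A real browser acts on this header on the next request; a client that sees it and never switches protocols is exposing its implementation.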

HTTP/3 fingerprinting is still maturing. Cloudflare and Google lead server-side adoption, but anti-bot products haven't caught up to HTTP/2-level detection yet. The direction is clear though: as HTTP/3 grows, expect QUIC transport parameters to join TLS and HTTP/2 as standard fingerprinting signals.

Now let's look at how anti-bot services actually use these fingerprints in production.



## How Anti-Bot Services Use Protocol Fingerprints

Anti-bot services don't rely on a single signal. They stack multiple detection layers that cross-validate each other, and inconsistencies between layers trigger the strongest responses.

### Multi-Layer Detection Stack

Modern anti-bot detection works across three layers:

- **TLS layer (JA3/JA4):** The TLS ClientHello reveals cipher suites, extensions, and protocol versions. Chrome uses BoringSSL, Firefox uses NSS, Python uses OpenSSL, and each produces a distinct fingerprint
- **HTTP/2 layer:** The connection parameters (SETTINGS, WINDOW\_UPDATE, pseudo-headers) identify the HTTP stack. Using BoringSSL for TLS but sending Python's hyper-h2 HTTP/2 settings creates an obvious mismatch
- **Browser surface layer:** JavaScript environment checks (navigator, canvas, WebGL) provide a third layer. Tools like [Playwright](https://playwright.dev/) and [Puppeteer](https://pptr.dev/) pass this layer but can still fail at TLS or HTTP/2 if the connection runs through a non-browser HTTP stack

All three layers must tell the same story. A real Chrome browser produces a BoringSSL TLS fingerprint, Chrome HTTP/2 SETTINGS, and a Chrome JS environment. When any layer contradicts another, that's the detection signal.
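The cross-validation logic reduces to a consistency check across per-layer verdicts. A toy sketch (the classifier and labels are illustrative, not any vendor's actual rules):

```python
def classify(tls, http2, js_surface):
    """Flag the request when the three detection layers disagree."""
    layers = {"tls": tls, "http2": http2, "js": js_surface}
    if len(set(layers.values())) > 1:
        return ("challenge", layers)  # cross-layer mismatch: strongest signal
    return ("allow", layers)

# Real Chrome: every layer tells the same story.
assert classify("chrome", "chrome", "chrome")[0] == "allow"
# Playwright routed through a Python HTTP stack: layers contradict each other.
assert classify("chrome", "hyper-h2", "chrome")[0] == "challenge"
```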

### Cloudflare, Akamai, and DataDome

**Cloudflare Bot Management** matches observed TLS + HTTP/2 fingerprints against a database of known browser profiles. Mismatches or unknown fingerprints trigger JavaScript challenges or outright blocks. Processing billions of requests daily gives them a massive classification dataset.

[How to Bypass Cloudflare When Web Scraping in 2026Cloudflare offers one of the most popular anti scraping service, so in this article we'll take a look how it works and how to bypass it.](https://scrapfly.io/blog/posts/how-to-bypass-cloudflare-anti-scraping)

**Akamai Bot Manager** pioneered commercial HTTP/2 fingerprinting after the 2017 Black Hat presentation. Their sensor script combines passive protocol analysis with active JavaScript challenges, generating an encoded payload that covers both layers.

[How to Bypass Akamai when Web Scraping in 2026In this article we'll take a look at a popular anti bot service Akamai Bot Manager. How does it detect web scrapers and bots and what can we do to prevent our scrapers from being detected?](https://scrapfly.io/blog/posts/how-to-bypass-akamai-anti-scraping)

**DataDome** combines protocol fingerprints with behavioral analysis, evaluating the full request lifecycle from connection establishment through navigation patterns.

[How to Bypass Datadome Anti Scraping in 2026Learn how Datadome detects web scrapers using TLS, IP, and ML analysis, and discover practical bypass techniques and tools for 2026.](https://scrapfly.io/blog/posts/how-to-bypass-datadome-anti-scraping)

Spoofing a single layer isn't enough. A scraper must produce consistent fingerprints across TLS, HTTP/2, and the browser surface. Now let's cover the practical tools and techniques for producing correct HTTP/2 fingerprints in Python.



## Test Your Fingerprint with Scrapfly

Before trying to spoof anything, you need to know what your client actually looks like. Scrapfly offers free tools to inspect your fingerprint at each protocol layer:

- [HTTP/2 Fingerprint Analyzer](https://scrapfly.io/web-scraping-tools/http2-fingerprint) shows your SETTINGS, WINDOW\_UPDATE, and pseudo-header ordering, and compares them against known browser profiles
- [JA3/JA4 TLS Fingerprint Checker](https://scrapfly.io/web-scraping-tools/ja3-fingerprint) analyzes your TLS ClientHello and generates JA3, JA3N, and JA4 hashes, matching against 125,000+ real browser samples
- [HTTP/3 & QUIC Fingerprint Analyzer](https://scrapfly.io/web-scraping-tools/http3-quic-fingerprint) captures your QUIC transport parameters and HTTP/3 SETTINGS, useful for checking if your client can even negotiate the protocol



If your scraper is getting blocked, run it through these analyzers to spot the mismatch between your fingerprint and the browser you're impersonating.



## FAQ

**Is HTTP/2 fingerprinting the same as TLS fingerprinting?**

No. TLS fingerprinting (JA3/JA4) analyzes the TLS handshake to identify the cryptographic library (BoringSSL, NSS, OpenSSL). HTTP/2 fingerprinting analyzes the connection setup frames (SETTINGS, WINDOW\_UPDATE, PRIORITY) sent *after* TLS completes. Anti-bot services check both layers and flag mismatches between them.







**Can I change my HTTP/2 fingerprint?**

Yes, but not with standard libraries like requests, httpx, or aiohttp, as they don't expose HTTP/2 SETTINGS or pseudo-header ordering. You need a browser impersonation library like curl\_cffi (Python) or tls-client (Go), which lets you select a target browser profile that configures all HTTP/2 parameters automatically.







**Does HTTP/3 fingerprinting replace HTTP/2 fingerprinting?**

No. HTTP/3 fingerprinting *adds* signals on top of HTTP/2 detection. HTTP/2 is still the dominant protocol, and most scraping libraries don't support HTTP/3 at all, which means the inability to upgrade is itself a detection signal.







**What happens if my TLS and HTTP/2 fingerprints don't match?**

This is the strongest detection signal available. If your TLS fingerprint says Chrome (BoringSSL) but your HTTP/2 parameters show Python's hyper-h2 defaults, anti-bot services will flag it immediately. Libraries like curl\_cffi solve this by configuring both TLS *and* HTTP/2 to match the same browser profile.







**How often do browser HTTP/2 fingerprints change?**

Infrequently, but it happens. Chrome's removal of PRIORITY frames after RFC 9218 was a major change. Minor SETTINGS tweaks can occur with any browser update. If you manage your own impersonation profiles, monitor release notes. Services like Scrapfly update browser profiles automatically.









## Summary

In this guide, we covered how HTTP/2 and HTTP/3 fingerprinting works across SETTINGS frames, WINDOW\_UPDATE, PRIORITY, pseudo-header ordering, and QUIC transport parameters, and how anti-bot services like Cloudflare, Akamai, and DataDome combine these with TLS fingerprinting into a multi-layer detection stack.

The key insight is that header spoofing alone isn't enough. Anti-bot systems cross-validate your TLS fingerprint, HTTP/2 connection parameters, and browser surface to catch inconsistencies. To avoid detection, every layer must tell the same story.

For practical bypass, browser impersonation libraries like curl\_cffi let you match a real browser's fingerprint across both TLS and HTTP/2 layers. And if you'd rather skip the fingerprint management entirely, [Scrapfly's web scraping API](https://scrapfly.io/) handles all protocol-level fingerprinting automatically, keeping browser profiles up to date as new versions are released.



 
