How to capture background requests and responses in Puppeteer?

by scrapecrow Oct 31, 2022

When web scraping, it's often useful to monitor network requests. This enables retrieving crucial values found in response headers, body, or even cookies.

To better illustrate this, let's see what these background requests actually look like using the below steps:

1. Open the browser developer tools (F12 key, or right-click the page and select "Inspect").
2. Select the Network tab.
3. Filter the requests by Fetch/XHR.
4. Reload or interact with the page to trigger background requests.

After following the above steps, you will see each background request being captured, along with its response details:

background request as seen in chrome devtools

Above, we can observe the full details of the outgoing request. These details can be parsed to extract specific request-response values.


To allow Puppeteer to capture network requests and responses, we can use the page.on() method to register event listeners. Combined with request interception, this allows the headless browser to inspect, modify, or block all network calls:

const puppeteer = require("puppeteer");

async function run() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // enable request interception
  await page.setRequestInterception(true);

  // capture background requests
  page.on("request", (request) => {
    console.log(request);
    if (request.resourceType() === "xhr") {
      console.log(request);
      // we can block these requests with:
      request.abort();
    } else {
      request.continue();
    }
  });

  // capture background responses
  page.on("response", (response) => {
    console.log(response.status(), response.url());
  });

  // request target web page
  await page.goto("https://web-scraping.dev/");
  await page.waitForSelector("footer", { visible: true });

  await browser.close();
}

run();

Above, we allow Puppeteer to capture background requests by enabling interception through the setRequestInterception() method, which lets us inspect each request before it's sent. These background requests often contain important dynamic data. Blocking some requests can also reduce the bandwidth used while scraping; see our guide on blocking resources in Puppeteer for more.
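
Beyond logging request metadata, we can also read the captured response data itself. Below is a minimal sketch, using the same web-scraping.dev target, that extracts the headers and JSON body of background XHR and fetch responses. Note that response.json() throws for non-JSON bodies and for responses without a body (such as redirects), hence the try/catch:

// capture the body of background responses
page.on("response", async (response) => {
  const resourceType = response.request().resourceType();
  // only inspect background requests (XHR and fetch API calls)
  if (resourceType !== "xhr" && resourceType !== "fetch") {
    return;
  }
  try {
    // response headers are available as a plain object
    console.log(response.url(), response.headers()["content-type"]);
    // parse the body as JSON (use response.text() for raw text)
    const data = await response.json();
    console.log(data);
  } catch (e) {
    // body can be unavailable (e.g. for redirects) or not valid JSON
  }
});

Cookie values set by these responses can similarly be read through the page.cookies() method.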


Related Articles

Bypass Proxy Detection with Browser Fingerprint Impersonation
Stop proxy blocks with browser fingerprint impersonation using this guide for Playwright, Selenium, curl-impersonate & Scrapfly

What is a Headless Browser? Top 5 Headless Browser Tools
Quick overview of new emerging tech of browser automation - what exactly are these tools and how are they used in web scraping?

How to Scrape With Headless Firefox
Discover how to use headless Firefox with Selenium, Playwright, and Puppeteer for web scraping, including practical examples for each library.

How to use Headless Chrome Extensions for Web Scraping
In this article, we'll explore different useful Chrome extensions for web scraping. We'll also explain how to install Chrome extensions with various headless browser libraries, such as Selenium, Playwright and Puppeteer.

How to Web Scrape with Puppeteer and NodeJS in 2025
Introduction to using Puppeteer in Nodejs for web scraping dynamic web pages and web apps. Tips and tricks, best practices and example project.

How to Scrape Dynamic Websites Using Headless Web Browsers
Introduction to using web automation tools such as Puppeteer, Playwright, Selenium and ScrapFly to render dynamic websites for web scraping