How to capture background requests and responses in Selenium?

Selenium doesn't offer request interception out of the box, but we can add it using the selenium-wire extension.

Capturing background requests can be an important step of a web scraping process, and it's an area where Selenium is lacking. Let's take a look at how to use a Selenium extension - selenium-wire - to implement this vital feature.

To start, selenium-wire can be installed using the pip install selenium-wire command. Then all requests are captured automatically and stored in the driver.requests attribute:

from seleniumwire import webdriver  # Import from seleniumwire, not selenium
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("https://example.com/product")  # hypothetical page URL
# wait for element to appear and click it to trigger background requests
element = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "load-more-reviews"))
)
element.click()
# Access captured requests via the `requests` attribute
for request in driver.requests:
    if request.response:
        print(request.url, request.response.status_code)
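Note that a captured response body is raw bytes and is often compressed, so it has to be decompressed before parsing (selenium-wire ships a helper for this in seleniumwire.utils.decode). Here's a minimal sketch of that decoding step using only the standard library - the decode_body function and the simulated gzip body are illustrative stand-ins, not part of selenium-wire's API:

```python
import gzip
import json

def decode_body(body: bytes, content_encoding: str) -> bytes:
    """Decompress a captured response body (a simplified take on what
    seleniumwire.utils.decode does; that helper also handles deflate and brotli)."""
    if content_encoding == "gzip":
        return gzip.decompress(body)
    return body  # "identity" or unknown encoding: return as-is

# Simulated captured body: a gzip-compressed JSON API response
raw = gzip.compress(json.dumps({"reviews": [{"id": 1, "text": "great"}]}).encode())
data = json.loads(decode_body(raw, "gzip"))
print(data["reviews"][0]["text"])  # → great
```

In a real scrape you would pass request.response.body and the response's Content-Encoding header into the same decoding logic.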

Often these background requests contain important dynamic data, and capturing them is an easy way to scrape it. For more, see our web scraping background requests guide.

Question tagged: Selenium

Related Posts

How to Use Chrome Extensions with Playwright, Puppeteer and Selenium

In this article, we'll explore different useful Chrome extensions for web scraping. We'll also explain how to install Chrome extensions with various headless browser libraries, such as Selenium, Playwright and Puppeteer.

Intro to Web Scraping using Selenium Grid

In this guide, you will learn about installing and configuring Selenium Grid with Docker and how to use it for web scraping at scale.

How to Scrape Google Maps

We'll take a look at how to find businesses through Google Maps' search system and how to scrape their details using either Selenium, Playwright or ScrapFly's javascript rendering feature - all of that in Python.