How to block resources in Selenium and Python?

To speed up Selenium web scrapers, we can block media files and other non-essential background requests.

Unfortunately, Selenium by itself doesn't support request interception and blocking, so we must use a proxy to handle the blocking for us and then attach this proxy to our Selenium instance.

For example, a popular proxy for such a use case is mitmproxy. We can easily configure it to block requests by resource type or by resource name.

First, install mitmproxy using pip install mitmproxy or the package manager available in your operating system. Then, we can create a simple block.py script that extends mitmproxy with our custom blocking logic:

# block.py
from mitmproxy import http

# we can block popular 3rd party resources like tracking and advertisements.
BLOCK_RESOURCE_NAMES = [
  'adzerk',
  'analytics',
  'cdn.api.twitter',
  'doubleclick',
  'exelator',
  'facebook',
  'fontawesome',
  'google',
  'google-analytics',
  'googletagmanager',
  # or something abstract like images
  'images'
]
# or block based on resource extension
BLOCK_RESOURCE_EXTENSIONS = [
    '.gif',
    '.jpg',
    '.jpeg',
    '.png',
    '.webp',
]

# mitmproxy calls this hook for every request going through the proxy:
def request(flow: http.HTTPFlow) -> None:
    url = flow.request.pretty_url
    has_blocked_extension = any(url.endswith(ext) for ext in BLOCK_RESOURCE_EXTENSIONS)
    contains_blocked_key = any(block in url for block in BLOCK_RESOURCE_NAMES)
    if has_blocked_extension or contains_blocked_key:
        print(f"Blocked {url}")
        flow.response = http.Response.make(
            404,  # status code
            b"Blocked",  # content
            {"Content-Type": "text/html"}  # headers
        )
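Since the blocking decision is plain Python, the matching logic can be sanity-checked without running the proxy at all. Here's a minimal sketch using a trimmed subset of the lists above and a hypothetical should_block helper:

```python
# a standalone sketch of the same matching rules used in block.py,
# with a trimmed-down subset of the block lists for illustration
BLOCK_RESOURCE_NAMES = ["analytics", "doubleclick", "googletagmanager"]
BLOCK_RESOURCE_EXTENSIONS = [".gif", ".jpg", ".jpeg", ".png", ".webp"]

def should_block(url: str) -> bool:
    """Return True if the URL matches a blocked name or a blocked extension."""
    has_blocked_extension = any(url.endswith(ext) for ext in BLOCK_RESOURCE_EXTENSIONS)
    contains_blocked_key = any(key in url for key in BLOCK_RESOURCE_NAMES)
    return has_blocked_extension or contains_blocked_key

print(should_block("https://cdn.example.com/banner.png"))      # True - blocked extension
print(should_block("https://www.google-analytics.com/ga.js"))  # True - contains "analytics"
print(should_block("https://web-scraping.dev/product/1"))      # False - allowed through
```

Note that substring matching is intentionally broad: a key like "analytics" will also block first-party URLs that happen to contain it, so it's worth testing the lists against the target site before a full run.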

We can run this script with mitmproxy -s block.py (or mitmdump -s block.py for a non-interactive run), which starts a proxy on localhost:8080 on our machine.

Now, we can attach this proxy to our Selenium instance and it'll block all unwanted requests going through it:

from selenium import webdriver

PROXY = "localhost:8080"  # IP:PORT or HOST:PORT of our mitmproxy

chrome_options = webdriver.ChromeOptions()
# this argument enables the proxy for our Selenium browser:
chrome_options.add_argument(f'--proxy-server={PROXY}')

chrome = webdriver.Chrome(options=chrome_options)
# test it by going to a page with blocked resources:
chrome.get("https://web-scraping.dev/product/1")
chrome.quit()

Using this method to block resources can significantly reduce the bandwidth used by a Selenium scraper - often by 2 to 10 times! It also greatly speeds up scraping, as the browser doesn't need to fetch and render unnecessary resources.

🤖 Tip: to use mitmproxy with Selenium on https websites, the mitmproxy certificate needs to be installed in the browser. For that, see how to install mitmproxy certificate.
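Alternatively, for quick local experiments only, Chrome can be told to skip certificate validation entirely instead of installing the certificate. A configuration sketch assuming the same setup as above (--ignore-certificate-errors is a standard Chromium command-line switch, but disabling validation is unsafe outside local testing):

```python
from selenium import webdriver

PROXY = "localhost:8080"

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument(f"--proxy-server={PROXY}")
# skip HTTPS certificate validation - only acceptable for local testing
# where the mitmproxy certificate hasn't been installed yet
chrome_options.add_argument("--ignore-certificate-errors")

chrome = webdriver.Chrome(options=chrome_options)
chrome.get("https://web-scraping.dev/product/1")
chrome.quit()
```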

Question tagged: Selenium
