How to fix Python requests TooManyRedirects error?

by scrapecrow Dec 19, 2022

The TooManyRedirects error is raised when the Python requests module follows more redirects than its configured limit allows (30 by default). When web scraping, it often appears on websites with incorrectly configured redirects:

import requests

requests.get("https://httpbin.dev/redirect/31")  # default redirect limit is 30
# will raise:
# requests.exceptions.TooManyRedirects: Exceeded 30 redirects.

# the redirect limit can be changed through requests.Session:
session = requests.Session()
session.max_redirects = 2
session.get("https://httpbin.dev/redirect/3")
# raises requests.exceptions.TooManyRedirects: Exceeded 2 redirects.

When web scraping, this usually means one of 3 things:

  • The website is incorrectly configured.
  • Our requests are missing important details like headers or cookies (see the example after this list).
  • The scraper is purposefully redirected in a loop to prevent scraping (i.e. blocking).
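
For the second cause, attaching browser-like headers to the session often resolves redirect loops triggered by missing request details. A minimal sketch (the header values below are illustrative, not required by any particular site):

import requests

session = requests.Session()
# some websites redirect requests that lack browser-like headers or session cookies;
# these values are only an example - adjust them to match the target website
session.headers.update({
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Accept-Language": "en-US,en;q=0.9",
})
response = session.get("https://httpbin.dev/html")
print(response.status_code)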

To handle the TooManyRedirects exception, we can disable automatic redirects and resolve them manually:

import requests
from urllib.parse import urljoin

session = requests.Session()
response = session.get("https://httpbin.dev/redirect/3", allow_redirects=False)
# the Location header may be relative, so resolve it against the request URL:
redirect_url = urljoin(response.url, response.headers["Location"])
# now we can manually inspect and fix the redirect url if necessary and then follow it:
response2 = session.get(redirect_url, allow_redirects=False)
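
Building on this, redirects can be followed manually in a loop with an explicit hop limit, which lets us log or inspect every intermediate URL. A sketch of this approach (the follow_redirects helper is our own, not part of requests):

import requests
from urllib.parse import urljoin

def follow_redirects(url, max_hops=10):
    """Follow redirects manually and return the final response plus the URL chain."""
    session = requests.Session()
    chain = [url]
    for _ in range(max_hops):
        response = session.get(url, allow_redirects=False)
        if not response.is_redirect:
            return response, chain
        # Location can be relative, so resolve it against the current URL
        url = urljoin(response.url, response.headers["Location"])
        chain.append(url)
    raise requests.exceptions.TooManyRedirects(f"exceeded {max_hops} redirects: {chain}")

response, chain = follow_redirects("https://httpbin.dev/redirect/3")
print(chain)  # every URL visited along the way
print(response.status_code)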
