How to fix Python requests SSLError?

The SSLError exception is raised when the Python requests module is used to scrape pages served over HTTPS with untrusted or invalid SSL certificates.

import requests
response = requests.get("")
# raises: 
# SSLError: HTTPSConnectionPool(host='', port=443)...

# we can disable certificate verification
# (note: dangerous, as the connection is no longer authenticated and is open to man-in-the-middle attacks)
response = requests.get("", verify=False)
# or specify the certificate file explicitly (.pem)
cert_location = "certificates/example-com-certificate.pem"
response = requests.get("", verify=cert_location)

The SSLError exception is rarely encountered in web scraping, and the easiest way to fix it is to simply disable certificate verification (the verify=False parameter) when no sensitive data is being exchanged.
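
Disabling verification also makes urllib3 emit an InsecureRequestWarning on every request. Below is a minimal sketch of catching the error and retrying without verification while silencing that warning; the URL is a placeholder, not part of the original example:

import requests
import urllib3

# silence the InsecureRequestWarning raised for unverified HTTPS requests
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

url = "https://example.com"  # placeholder - replace with the page being scraped
try:
    response = requests.get(url)
except requests.exceptions.SSLError:
    # the certificate could not be verified - retry without verification
    response = requests.get(url, verify=False)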

Note that if a manual fix is required, the CA certificate bundle requests is using can be located with the requests.certs.where() function:

import requests

print(requests.certs.where())
# '/etc/ssl/certs/ca-certificates.crt'  # example on Linux
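
One possible manual fix is to append the site's certificate to a copy of that bundle and point verify at the copy. A sketch, assuming the certificate was saved to certificates/example-com-certificate.pem as in the earlier example (all paths and the URL here are illustrative):

import shutil
import requests

# copy the default bundle so the original stays untouched
custom_bundle = "my-ca-bundle.pem"
shutil.copyfile(requests.certs.where(), custom_bundle)

# append the site's PEM certificate to the copy
with open("certificates/example-com-certificate.pem") as cert_file, open(custom_bundle, "a") as bundle_file:
    bundle_file.write("\n" + cert_file.read())

response = requests.get("https://example.com", verify=custom_bundle)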

The CA bundle used for verification can also be overridden with the REQUESTS_CA_BUNDLE environment variable, which requests picks up when the request is made (requests.certs.where() itself will still report certifi's default bundle):

$ export REQUESTS_CA_BUNDLE="/etc/ssl/certs/my-certificates.pem"
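
The same override can be set in code on a Session, which then applies to every request made through it (the bundle path is the same illustrative one as above, and the URL is a placeholder):

import requests

session = requests.Session()
# every request made through this session is verified against the custom bundle
session.verify = "/etc/ssl/certs/my-certificates.pem"
response = session.get("https://example.com")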

Finally, requests uses the certifi package for its default SSL certificate bundle - an outdated bundle can also cause SSLError, so try updating it: pip install certifi --upgrade
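
To confirm the updated bundle is the one being used, it can be passed to verify explicitly (certifi.where() returns the path to certifi's bundled CA file; the URL is again a placeholder):

import certifi
import requests

# explicitly verify against certifi's (freshly updated) CA bundle
response = requests.get("https://example.com", verify=certifi.where())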
