How to use cURL in Python?

cURL is a popular HTTP client tool and a C library (libcurl). It can also be used in Python through many wrapper libraries.

The most popular library that wraps libcurl in Python is pycurl. Here's an example of a basic GET request:

import pycurl
from io import BytesIO

# Set the URL you want to fetch
url = 'https://www.example.com/'

# Create a new Curl object
curl = pycurl.Curl()

# Set the URL and other options
curl.setopt(pycurl.URL, url)
# Follow redirects
curl.setopt(pycurl.FOLLOWLOCATION, 1)
# Set the user agent
curl.setopt(pycurl.USERAGENT, 'Mozilla/5.0')

# Create a buffer to store the response and add it as result target
buffer = BytesIO()
curl.setopt(pycurl.WRITEFUNCTION, buffer.write)

# Perform the request
curl.perform()

# Get the response code and content
response_code = curl.getinfo(pycurl.RESPONSE_CODE)
response_content = buffer.getvalue().decode('UTF-8')

# Print the response
print(f'Response code: {response_code}')
print(f'Response content: {response_content}')

# Clean up
curl.close()
buffer.close()

Compared to other libraries like requests and httpx, pycurl is very low level and can be difficult to use. However, it has access to many advanced features, such as HTTP/3 support, that other libraries don't have.
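For comparison, the same GET request in a higher-level client is much shorter. A sketch using requests (assuming it is installed):

```python
import requests

# requests follows redirects by default, so no extra option is needed
response = requests.get(
    'https://www.example.com/',
    headers={'User-Agent': 'Mozilla/5.0'},
)
print(response.status_code)
print(response.text[:100])
```

The trade-off is control: requests hides the transfer details that pycurl exposes through its many setopt options.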

pycurl doesn't support asynchronous requests, which means it can't be used directly in asynchronous web scraping, though it can still be run in threads. See mixing sync code using asyncio.to_thread() for more details.

Question tagged: Python, HTTP

Related Posts

How to Power-Up LLMs with Web Scraping and RAG

In depth look at how to use LLM and web scraping for RAG applications using either LlamaIndex or LangChain.

How to Scrape Forms

Learn how to scrape forms through a step-by-step guide using HTTP clients and headless browsers.

How to Build a Minimum Advertised Price (MAP) Monitoring Tool

Learn what minimum advertised price monitoring is and how to apply its concept using Python web scraping.