Redirects are a fundamental concept of the HTTP protocol, allowing a request to be forwarded to another resource that contains the desired data.
By default, cURL doesn't follow redirects. For example, let's request httpbin.dev/absolute-redirect/:n, an endpoint that redirects the request n times:
curl https://httpbin.dev/absolute-redirect/6
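Running this command prints nothing, which can be puzzling at first. To see what the server actually returned, we can add the -i option, which includes the response status line and headers in the output:

curl -i https://httpbin.dev/absolute-redirect/6

This should reveal a 3xx status (typically 302 Found) and a Location header pointing to the next hop in the chain, e.g. https://httpbin.dev/absolute-redirect/5.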
In other words, cURL receives the first redirect response and stops there; the request never proceeds to its final destination, so there is no body to print. To make cURL follow redirects, we can use the -L or --location option:
curl -L https://httpbin.dev/absolute-redirect/6
The above cURL request will follow all 6 redirects and finally return the response:
{
  "args": {},
  "headers": {
    "Accept": [
      "*/*"
    ],
    "Accept-Encoding": [
      "gzip"
    ],
    "Host": [
      "httpbin.dev"
    ],
    "User-Agent": [
      "curl/8.4.0"
    ]
  },
  "url": "https://httpbin.dev/get"
}
Note the url field in the response: it points to the final destination, https://httpbin.dev/get, confirming that the entire redirect chain was followed. By default, the -L or --location options follow a maximum of 50 redirects. To override this limit, we can use the --max-redirs cURL option:
curl -L https://httpbin.dev/absolute-redirect/51 --max-redirs 51
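For comparison, the same request without --max-redirs should fail once cURL hits the default cap of 50. A quick check (the exact error wording may vary between cURL versions):

curl -L https://httpbin.dev/absolute-redirect/51

This is expected to abort with an error along the lines of curl: (47) Maximum (50) redirects followed, where 47 is cURL's exit code for too many redirects. Setting --max-redirs to -1 removes the limit entirely.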
For more details on cURL, refer to our previous guide, How to Use cURL For Web Scraping.