How to Follow Redirects In cURL?

Redirects are a fundamental concept of the HTTP protocol, allowing requests to be forwarded to other resources that contain the desired data.

By default, cURL doesn't follow redirects. For example, let's request httpbin.dev/absolute-redirect/:n, which redirects the request n times:

curl https://httpbin.dev/absolute-redirect/6
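We can confirm that a redirect is being returned by adding the -i flag, which prints the response status line and headers alongside the body (an optional check, shown here purely to illustrate what is happening):

curl -i https://httpbin.dev/absolute-redirect/6

The output shows a 3xx status code (httpbin typically answers with 302 Found) and a Location header pointing to the next URL in the chain.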

Executing the plain command above returns nothing useful, as cURL stops at that first redirect and the request never proceeds to its final destination. To make cURL follow redirects, we can use the -L or --location option:

curl -L https://httpbin.dev/absolute-redirect/6

The above cURL request will follow all 6 redirects and return the final response:

{
  "args": {},
  "headers": {
    "Accept": [
      "*/*"
    ],
    "Accept-Encoding": [
      "gzip"
    ],
    "Host": [
      "httpbin.dev"
    ],
    "User-Agent": [
      "curl/8.4.0"
    ]
  },
  "url": "https://httpbin.dev/get"
}
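Note that the url field confirms the final destination. cURL can also report the final URL itself through the -w (write-out) option, which is handy when only the effective URL is needed (a small optional check against the same httpbin.dev endpoint):

curl -Ls -o /dev/null -w '%{url_effective}\n' https://httpbin.dev/absolute-redirect/6

Here -s silences the progress output, -o /dev/null discards the response body, and -w prints the URL that was ultimately reached (https://httpbin.dev/get).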

By default, the -L or --location option follows a maximum of 50 redirects. To override this limit, we can use the --max-redirs option:

curl -L https://httpbin.dev/absolute-redirect/51 --max-redirs 51
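If the redirect chain is longer than the allowed limit, cURL aborts instead of following it to the end. For example, allowing only 3 redirects on the 6-redirect endpoint fails with exit code 47 and an error similar to the following (exact wording can vary between cURL versions):

curl -L https://httpbin.dev/absolute-redirect/6 --max-redirs 3

curl: (47) Maximum (3) redirects followed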

For more details on cURL, refer to our previous guide.

How to Use cURL For Web Scraping

A step-by-step guide to sending and configuring HTTP requests with cURL, covering advanced web scraping usages such as scraping dynamic pages and avoiding getting blocked.

