The cURL -Z or --parallel option tells cURL to perform the given URL transfers in parallel instead of one after another. By default, cURL can run up to 50 transfers at the same time.
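In its simplest form, the flag is added to a command listing several URLs. Here is a minimal sketch, assuming the httpbin.dev /headers and /ip endpoints are available:
curl --parallel https://httpbin.dev/headers https://httpbin.dev/ip
Both transfers start at once instead of running back to back.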
cURL also provides two additional parameters for this option:
- --parallel-max: sets the maximum number of simultaneous transfers (50 by default).
- --parallel-immediate: when set, cURL prefers opening new connections immediately rather than waiting to reuse or multiplex existing ones.
Here is how we can send multiple cURL requests in parallel with the --parallel option:
curl --parallel --parallel-immediate --parallel-max 5 https://httpbin.dev/headers https://httpbin.dev/headers https://httpbin.dev/headers
The above cURL command sends three parallel requests to httpbin.dev/headers, allows at most 5 simultaneous transfers, and prioritizes creating new connections.
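Repeating the same URL on the command line is only for demonstration. cURL's built-in URL globbing can express such repetition more compactly; the sketch below assumes httpbin.dev exposes an /anything endpoint:
curl --parallel --parallel-max 5 "https://httpbin.dev/anything/[1-3]"
The [1-3] range expands into three parallel requests: /anything/1, /anything/2 and /anything/3.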
Alternatively, we can list the URLs to request in a file and pass it to cURL through the --config option. First, create a file urls.txt containing the URLs:
url = "https://httpbin.dev/headers"
url = "https://httpbin.dev/headers"
url = "https://httpbin.dev/headers"
Next, we'll use the same cURL parallel requests command and pass the config file:
curl --parallel --parallel-immediate --parallel-max 5 --config urls.txt
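The config file isn't limited to URLs; other long options can be mixed in as well. For instance, each url entry can be paired with an output entry so that every response is written to its own file. Here is a hedged sketch (the output file names are illustrative):
url = "https://httpbin.dev/headers"
output = "headers1.json"
url = "https://httpbin.dev/headers"
output = "headers2.json"
Running curl --parallel --config urls.txt with this file downloads both responses concurrently into separate files.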
For more details on cURL, refer to our dedicated guide.