# How to Use cURL to Download Files

by [Mazen Ramadan](https://scrapfly.io/blog/author/mazen) · Apr 01, 2026 · 13 min read

[Curl](https://curl.se/), short for "Client URL," is a versatile command-line tool used for transferring data with URLs. It's widely favored by developers and system administrators for its ability to interact with a multitude of protocols such as HTTP, HTTPS, FTP, and more.

Using curl to download files simplifies the process by enabling direct command-line interaction with web resources. Curl is not only efficient and lightweight, operating without the need for a graphical interface, but also cross-platform, working seamlessly on Linux, macOS, and Windows systems.

In this article, we'll explore how to use curl to download a file from the web, covering various use cases and demonstrating the tool's versatility.

## Key Takeaways

Master curl file-download techniques using advanced options, resume capabilities, and automation for reliable file transfers and data collection workflows.

- Download files with curl using advanced options like resume, progress tracking, and error handling
- Configure proper headers and authentication for secure file downloads and API interactions
- Implement resume capabilities for large file downloads and interrupted transfers
- Use cURL with automation tools like cron for scheduled downloads and data collection
- Apply rate limiting and retry mechanisms for reliable file download operations
- Use specialized tools like Scrapfly for automated file downloads with anti-blocking features


## Why Use Curl to Download Files?

Curl stands out as an exceptional file downloading tool, offering a robust set of features that make it indispensable for developers. Here's what makes curl particularly powerful for downloading files:

**Multi-Protocol Support**

- Handles various protocols like HTTP, HTTPS, FTP, and SFTP.
- Eliminates the need for multiple tools when working with different protocols.

**Resume Interrupted Downloads**

- Use the `-C -` option to continue downloads from where they left off.
- Saves time and bandwidth by avoiding the need to restart downloads.

**Bandwidth Management**

- Limit download speeds using `--limit-rate` to manage bandwidth usage.
- Prevents downloads from consuming all available network resources.

**Proxy Support**

- Easily configure proxies using options like `-x` or `--proxy`.
- Supports various proxy types, including HTTP, HTTPS, SOCKS4, and SOCKS5.

**Authentication Handling**

- Supports a range of authentication methods, including Basic, Digest, NTLM, and OAuth.
- Access protected resources seamlessly.

**Secure Transfers**

- Supports SSL/TLS protocols for secure file transfers.
- Verify SSL certificates and use secure authentication methods.

**Cross-Platform Compatibility**

- Available on Linux, macOS, Windows, and more.
- Consistent functionality across different operating systems.

**Automation and Scripting**

- Easily integrates into scripts for automated tasks.
- Ideal for scheduled downloads using cron jobs or Windows Task Scheduler.

Curl's robust feature set makes it an excellent choice for downloading files, whether you're handling simple tasks or complex download operations. Its flexibility and efficiency empower users to manage downloads effectively in various environments.

You can learn more about curl and its options in our article about [using curl for web scraping](https://scrapfly.io/blog/posts/how-to-use-curl-for-web-scraping).

Now let's explore the basic usage of curl for downloading files and then dive deeper into more complex and unconventional scenarios.

## Curl Basic File Download Options

By default, when curl is run on a file URL without any extra options, the file content is displayed in the terminal:

```bash
curl https://web-scraping.dev/assets/pdf/tos.pdf
```



However, you can use curl to save the file under its original name with the `-O` (uppercase "O", short for `--remote-name`) option:

```bash
curl -O https://web-scraping.dev/assets/pdf/tos.pdf
```



This command saves the file as `tos.pdf`, retaining the original filename.

### Custom File Name on Download

To save the downloaded file with a custom name, use the `-o` (lowercase "o") option followed by the desired filename:

```bash
curl -o [filename] [URL]
```



**Example:**

```bash
curl -o web-scraping-tos.pdf https://web-scraping.dev/assets/pdf/tos.pdf
```



This command downloads `tos.pdf` and saves it as `web-scraping-tos.pdf` on your local machine.

### Show Progress Bar / Download Silently

Curl shows a progress meter by default. However, you can replace it with a simple progress bar or suppress output entirely.

**Show Progress Bar**

Replace the default progress meter with a simple progress bar using `--progress-bar`:

```bash
curl -O --progress-bar https://web-scraping.dev/assets/pdf/tos.pdf
```



**Download Silently**

To suppress all output, including progress and error messages, use the `-s` or `--silent` option:

```bash
curl -O -s https://web-scraping.dev/assets/pdf/tos.pdf
```



**Silent Mode with Error Messages**

If you want to hide the progress meter but still see error messages, combine `-s` with `-S`:

```bash
curl -O -s -S https://web-scraping.dev/assets/pdf/tos.pdf
```



### Retry for Unstable Connections

For unreliable network connections, you can configure curl to retry downloads automatically:

**Set Number of Retries**

Use the `--retry` option followed by the number of retry attempts:

```bash
curl -O --retry [number] [URL]
```



**Example:**

```bash
curl -O --retry 5 https://web-scraping.dev/assets/pdf/tos.pdf
```



This command retries the download up to 5 times upon failure.

**Specify Retry Delay**

To add a delay between retries, use `--retry-delay`:

```bash
curl -O --retry 5 --retry-delay [seconds] [URL]
```



**Example:**

```bash
curl -O --retry 5 --retry-delay 10 https://web-scraping.dev/assets/pdf/tos.pdf
```



This adds a 10-second pause between each retry attempt.

**Retry on All Errors**

By default, `--retry` only retries on transient errors, such as timeouts or HTTP 5xx responses. To make curl retry on all errors, use `--retry-all-errors`:

```bash
curl -O --retry 5 --retry-all-errors [URL]
```



**Example:**

```bash
curl -O --retry 5 --retry-all-errors https://web-scraping.dev/assets/pdf/tos.pdf
```



## Handling Large File Downloads

Downloading large files can pose challenges such as network congestion or impacting other users on the same network. Curl offers options to manage these issues effectively.

To prevent a large download from consuming all your available bandwidth, you can limit the download speed using the `--limit-rate` option:

```bash
curl -O --limit-rate [speed] [URL]
```



**Example:**

```bash
curl -O --limit-rate 500k https://web-scraping.dev/assets/pdf/tos.pdf
```



This command limits the download speed to 500 kilobytes per second. You can specify the speed using suffixes:

- **k** or **K** for kilobytes (e.g., `500k`)
- **m** or **M** for megabytes (e.g., `2M`)

**Benefits:**

- **Bandwidth Management**: Ensures other network activities aren't slowed down.
- **Network Stability**: Reduces the risk of connection drops due to high bandwidth usage.
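
These flags combine well with the retry and resume options covered earlier. A minimal sketch of a more resilient large-file download, using the same example file:

```bash
# throttle to 500 KB/s, retry up to 5 times on transient failures,
# and resume from the last received byte if a partial file exists
curl -O -C - --limit-rate 500k --retry 5 --retry-delay 10 \
    https://web-scraping.dev/assets/pdf/tos.pdf
```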

## Insecure Downloading

In some cases, you might need to use cURL to download a file from a server with an invalid or self-signed SSL certificate. Curl verifies SSL certificates by default, which can block these downloads.

**Disable SSL Certificate Verification**

**Warning:** Disabling SSL verification can expose you to security risks like man-in-the-middle attacks. Use this option only when you're certain about the server's trustworthiness.

To bypass SSL certificate checks, use the `-k` or `--insecure` option:

```bash
curl -O -k https://web-scraping.dev/assets/pdf/tos.pdf
```



This command tells curl to ignore SSL certificate validation and proceed with the download.
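
A safer alternative, when you can obtain the server's certificate or its signing CA, is to pass it to curl explicitly with `--cacert` rather than disabling verification altogether. A minimal sketch; the certificate path is a placeholder:

```bash
# trust a specific self-signed certificate or private CA
# instead of skipping verification entirely
curl -O --cacert /path/to/trusted-ca.pem https://web-scraping.dev/assets/pdf/tos.pdf
```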

## Verifying File Integrity

Ensuring that a downloaded file hasn't been tampered with is crucial, especially for important or large files. You can verify file integrity using checksum tools like `sha256sum`.

**Using `sha256sum` to Verify Downloads**

**Steps:**

1. **Download the File and Its Checksum**

```bash
curl -O https://example.com/file.zip
curl -O https://example.com/file.zip.sha256
```



2. **Verify the Checksum**

```bash
sha256sum -c file.zip.sha256
```



- The `-c` option tells `sha256sum` to check the file against the provided checksum.

**Manual Verification:**

If the checksum isn't provided in a file:

1. **Get the Expected Checksum**

- Obtain the checksum value from the website or provider.

2. **Calculate the Downloaded File's Checksum**

```bash
sha256sum file.zip
```



- This command outputs a checksum that you can compare with the expected value.

**Example Output:**

```
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855  file.zip
```



**Benefits:**

- **Security**: Confirms the file hasn't been altered maliciously.
- **Data Integrity**: Ensures the file isn't corrupted due to network issues.
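
When the checksum isn't distributed as a `.sha256` file, the comparison is easy to script. A minimal sketch, assuming you've copied the expected hash from the provider's website:

```bash
#!/bin/bash
# compare the downloaded file's SHA-256 hash against a known value
expected="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
actual=$(sha256sum file.zip | awk '{print $1}')

if [ "$actual" = "$expected" ]; then
    echo "Checksum OK"
else
    echo "Checksum MISMATCH: file may be corrupted or tampered with" >&2
    exit 1
fi
```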

## Handling Authentication

When downloading files from protected resources, authentication is often required. Curl supports various authentication methods to access these resources.

**Authorization Header**

To include an authorization token or API key in your request, use the `-H` option to add a custom header:

```bash
curl -O -H "Authorization: Bearer your_token_here" https://api.example.com/securefile.zip
```



This example uses bearer token authentication, but you can use any other authentication method supported by curl.
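
For servers behind HTTP Basic authentication, curl can also send credentials directly with the `-u` option; it base64-encodes them into an `Authorization: Basic` header for you. The credentials and URL below are placeholders:

```bash
# send HTTP Basic auth credentials with the download request
curl -O -u username:password https://example.com/securefile.zip
```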

**Cookie Session**

If authentication relies on session cookies, you can manage cookies using curl:

When logging in, save the session cookies to a file using the `-c` option:

```bash
curl -c cookies.txt -d "username=user&password=pass" https://example.com/login
```



- The `-d` option sends POST data for login credentials.
- Cookies received during login are saved to `cookies.txt`.

**Use Saved Cookies**

Use the saved cookies for subsequent requests with the `-b` option:

```bash
curl -O -b cookies.txt https://example.com/securefile.zip
```



**Benefits:**

- **Session Management**: Maintains login sessions across multiple requests.
- **Automated Workflows**: Scripts can handle login and file download processes seamlessly.

Utilizing these options enhances the reliability of your file downloads, ensuring efficiency, security, and smoother operations even with unstable internet connections.



## Curl Command Builder

To simplify the process of creating cURL commands for file downloads, we've created a curl command builder tool: an interactive form on this page that lets you select various options and generates the corresponding curl command instantly.



## Automating Curl Downloads with Crontab

Automating file downloads ensures you always have the latest data without manual effort. By integrating `curl` with `crontab`, you can schedule downloads to run at specified times, enhancing efficiency and productivity.

#### What Is Crontab?

Crontab is a time-based job scheduler in Unix-like operating systems. It allows users to schedule scripts or commands to run automatically at predefined times or intervals.

#### Steps to Automate Downloads Using Crontab

**1. Create a Download Script (Optional)**

**Write the Script**

Create a shell script (e.g., `download.sh`) that contains your `curl` command:

```bash
#!/bin/bash
# Navigate to the desired directory
cd /path/to/download/directory

# Download the file using curl
curl -O https://example.com/file.zip
```



**Make the Script Executable**

```bash
chmod +x /path/to/download.sh
```
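
For unattended cron runs, it also helps to make the script retry transient failures and keep a log of what happened. A minimal hardened sketch; the paths and log file location are illustrative:

```bash
#!/bin/bash
# Hardened download script for cron: retries, quiet output, and logging.
DOWNLOAD_DIR="/path/to/download/directory"
LOG_FILE="/path/to/download.log"
URL="https://example.com/file.zip"

cd "$DOWNLOAD_DIR" || exit 1

# -sS: silent, but still print errors; --retry handles transient failures
curl -O -sS --retry 5 --retry-delay 10 "$URL"
status=$?

if [ "$status" -eq 0 ]; then
    echo "$(date '+%F %T') downloaded $URL" >> "$LOG_FILE"
else
    echo "$(date '+%F %T') FAILED $URL (exit $status)" >> "$LOG_FILE"
fi
```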



**2. Edit the Crontab File**

**Open Crontab Editor**

```bash
crontab -e
```



**Add a New Cron Job**

Insert a line following the cron syntax:

```
* * * * * /path/to/command
```



**Example: Schedule the Script to Run Daily at 2 AM**

```
0 2 * * * /path/to/download.sh
```



**Fields Explained:**

- **Minute:** `0`
- **Hour:** `2` (2 AM)
- **Day of Month:** `*` (Every day)
- **Month:** `*` (Every month)
- **Day of Week:** `*` (Every day of the week)

**3. Save and Exit**

After adding your cron job, save the file. The cron service will automatically pick up the new schedule.

Automating `curl` downloads with crontab streamlines your workflow, ensuring timely and consistent data retrieval. Whether you're updating datasets, synchronizing files, or performing regular backups, this combination offers a robust solution for scheduled tasks.

## Bypassing Download Blocks

When attempting to use curl to download files, you might encounter situations where the download is blocked or fails. This can be due to various reasons such as network restrictions, server configurations, or security measures that prevent automated requests.

The most common reason for download blocks is that the server is blocking automated requests. To bypass this, you can add a custom browser user-agent string to your request headers to mimic a real browser request.

```bash
curl -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36" https://example.com/file.zip
```



This example uses the `-A` option to set a custom user-agent string. You can replace the string with any other user-agent string that mimics a real browser request.

Changing the user-agent string is the most basic method to bypass download blocks. However, some servers are sophisticated enough to still block requests with custom user-agent strings. In these cases, you may need a more advanced tool like curl-impersonate.

[Curl-impersonate](https://github.com/lwthiker/curl-impersonate) is a modified version of cURL that simulates the TLS fingerprints of major web browsers like Chrome, Firefox, Edge, and Safari by mimicking their TLS and HTTP/2 configuration. It also overrides the default cURL headers, such as the User-Agent, with regular browser values. This makes curl-impersonate requests look like those sent from real browsers, preventing firewalls from detecting that an HTTP client is being used.
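
Once installed, curl-impersonate is used through wrapper scripts named after the browser they mimic, and these accept regular curl options. A minimal sketch; the exact wrapper name (e.g. `curl_chrome116`) depends on the release you install:

```bash
# sends Chrome's TLS/HTTP2 fingerprint and browser-like default headers
curl_chrome116 -O https://example.com/file.zip
```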

You can learn more about curl-impersonate in our dedicated guide on [using curl-impersonate for web scraping](https://scrapfly.io/blog/posts/curl-impersonate-scrape-chrome-firefox-tls-http2-fingerprint).



## Power Up File Downloads with Scrapfly

Downloading files programmatically can quickly become a cumbersome task, especially when the files are protected against automation and bots by sophisticated bot protection systems that cannot be bypassed with tools like `curl-impersonate`.

Scrapfly has millions of proxies and connection fingerprints that can be used to bypass protection against automated traffic and significantly simplify your file download process.



Check out [Scrapfly's web scraping API](https://scrapfly.io/web-scraping-api) for all the details.

For example, here is how to use Scrapfly's web scraping API to download a file. We will use Scrapfly's Python SDK to call the API:

```python
from scrapfly import ScrapflyClient, ScrapeConfig
import base64

scrapfly = ScrapflyClient(key="YOUR SCRAPFLY KEY")

FILE_URL = "https://web-scraping.dev/assets/pdf/tos.pdf"

response = scrapfly.scrape(
    ScrapeConfig(
        url=FILE_URL,
        asp=True,
    )
)

# decode base64 file data
file_data = base64.b64decode(response.result.content)

with open("tos.pdf", "wb") as f:
    f.write(file_data)

```



Scrapfly's API automatically detects that the requested URL is a file and returns the file's binary content encoded in base64, which is why we decode the content returned by the API before saving it to a file called `tos.pdf`.



## FAQ

**Can I resume an interrupted download with `curl`?**

Yes. Use the `-C -` option to continue a download from where it stopped, which is especially useful for large files or unstable connections.
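
A minimal example:

```bash
# -C - makes curl inspect the partial file on disk and
# continue the transfer from the byte where it stopped
curl -O -C - https://web-scraping.dev/assets/pdf/tos.pdf
```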







**Is wget a better alternative to curl for downloading files?**

`wget` is another command-line tool specifically designed for downloading files. While `curl` is versatile and supports various protocols and features, `wget` is often preferred for its simplicity in handling recursive downloads and its ability to download entire websites. You can learn more about the differences between curl and wget in our dedicated [curl vs wget article](https://scrapfly.io/blog/posts/curl-vs-wget).







**How do I download multiple files at once using `curl`?**

You can download multiple files by specifying multiple URLs in a single command or by looping through a list of URLs in a script, allowing for efficient batch downloads.
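
Both approaches in a short sketch (the URLs and the `urls.txt` list are placeholders):

```bash
# several URLs in one command, one -O per URL
curl -O https://example.com/file1.zip -O https://example.com/file2.zip

# or loop over a plain-text list of URLs, one per line
while read -r url; do
    curl -O --retry 3 "$url"
done < urls.txt
```

Recent curl versions also support the `-Z` (`--parallel`) flag to run such transfers concurrently.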







**Can I download files through a proxy with cURL?**

Yes, use the `-x` flag to route downloads through a proxy server. This is useful for bypassing geographic restrictions or distributing requests. For more on proxy usage with cURL, see our [cURL web scraping guide](https://scrapfly.io/blog/posts/how-to-use-curl-for-web-scraping).
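
For example, with placeholder proxy addresses:

```bash
# route the download through an HTTP proxy
curl -O -x http://proxy.example.com:8080 https://example.com/file.zip

# SOCKS5 proxies work with the same flag
curl -O -x socks5://proxy.example.com:1080 https://example.com/file.zip
```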









## Summary

Curl is a versatile tool when it comes to downloading files, offering:

- **Multi-Protocol Support**: Works with HTTP, HTTPS, FTP, and more.
- **Resume Capability**: Restarts interrupted downloads with ease.
- **Proxy and Bandwidth Management**: Supports proxies and limits download speed.
- **Authentication Support**: Handles cookies, tokens, and secured resources.
- **Automation**: Integrates with scripts and scheduling tools like crontab.

For advanced needs, tools like curl-impersonate or services like Scrapfly can bypass sophisticated bot protections, offering:

- **Enhanced Bypass Capabilities**: Overcomes anti-bot systems.
- **API Flexibility**: Simplifies complex file downloads with robust solutions.

Curl's feature set makes it essential for managing simple to complex downloads efficiently.



 
