# Sending HTTP Requests With Curlie: A better cURL

 by [Mazen Ramadan](https://scrapfly.io/blog/author/mazen) Apr 18, 2026 8 min read [\#curl](https://scrapfly.io/blog/tag/curl) [\#http](https://scrapfly.io/blog/tag/http) [\#tools](https://scrapfly.io/blog/tag/tools) 


cURL is a great command-line tool for sending HTTP requests and a valuable asset in the web scraping toolbox. However, its syntax and output can be confusing. What about a better alternative?

In this guide, we'll explore Curlie, a better cURL version. We'll start by defining what Curlie is and how it compares to cURL. We'll also go over a step-by-step guide on using and configuring Curlie to send HTTP requests. Let's get started!

## Key Takeaways

Master Curlie HTTP requests with improved cURL syntax, colorful output formatting, and enhanced debugging capabilities for efficient web scraping workflows.

- Use Curlie as an enhanced cURL alternative with HTTPie-inspired syntax and colorful output formatting
- Configure HTTP requests with simplified syntax while maintaining full cURL functionality
- Implement better debugging and error handling with improved output formatting and error messages
- Use Curlie for API testing and web scraping with enhanced readability and usability
- Configure authentication and headers with simplified syntax for complex HTTP requests
- Apply advanced techniques for proxy configuration, file downloads, and redirect handling


## What is Curlie?

[Curlie](https://curlie.io/) is a frontend for the regular [cURL](https://curl.se/). Its interface is modeled after [HTTPie](https://httpie.io/), a CLI HTTP client known for rendering responses as colorful, formatted output.

Curlie combines the cURL features with the easy syntax and output formatting of HTTPie. It allows for writing commands in the syntax of both cURL and HTTPie.
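Since both syntaxes are accepted, the same request can be written either way. A quick illustration (assuming the `curlie` binary is installed):

```shell
# HTTPie-style syntax: headers as Name:value pairs after the URL
curlie https://httpbin.dev/headers Accept:application/json

# cURL-style syntax: the familiar -H flag also works
curlie -H "Accept: application/json" https://httpbin.dev/headers
```

Both commands send the same request; which style to use is purely a matter of preference.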

[Using API Clients For Web Scraping: PostmanIn this article, we'll explore the use of API clients for web scraping. We'll start by explaining how to locate hidden API requests on websites. Then, we'll explore importing, manipulating, and exporting them using Postman to develop efficient API-based web scrapers.](https://scrapfly.io/blog/posts/using-api-clients-for-web-scraping-postman)



## How To Install Curlie?

Curlie can be installed on all major operating systems from the command line through different package managers.

#### Mac

```bash
brew install curlie
# or
curl -sS https://webinstall.dev/curlie | bash
```



#### Linux

```bash
curl -sS https://webinstall.dev/curlie | bash
# or
eget rs/curlie -a deb --to=curlie.deb
sudo dpkg -i curlie.deb
```



#### Windows

```powershell
curl.exe -A "MS" https://webinstall.dev/curlie | powershell
```





## How To Use Curlie?

In the following sections, we'll explain how to use Curlie to send and configure HTTP requests. Curlie accepts both cURL and HTTPie syntax, and since we covered cURL in a previous guide, **we'll use the HTTPie syntax** in this one.

That being said, **all the cURL options used by Curlie under the hood can be viewed** by adding the `--curl` option.
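For example, to see what Curlie translates a request into under the hood (a quick sketch; the exact output format depends on your Curlie version):

```shell
# Print the underlying cURL invocation instead of guessing at the translation
curlie --curl https://httpbin.dev/get
```

This is handy when porting a Curlie command into a script or tool that only speaks plain cURL.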

[How to Use cURL For Web ScrapingIn this article, we'll go over a step-by-step guide on sending and configuring HTTP requests with cURL. We'll also explore advanced usages of cURL for web scraping, such as scraping dynamic pages and avoiding getting blocked.](https://scrapfly.io/blog/posts/how-to-use-curl-for-web-scraping)



### Configuring HTTP Method

All Curlie requests start with the `curlie` command. By default, requests use the GET HTTP method:

```shell
curlie https://httpbin.dev/get
```



Running the above command sends a `GET` request and returns the response body formatted. **It also prints the response headers**, which aren't shown by default with cURL:

```
{
    "args": {

    },
    "headers": {
        "Accept": [
            "application/json, */*"
        ],
        "Accept-Encoding": [
            "gzip"
        ],
        "Host": [
            "httpbin.dev"
        ],
        "User-Agent": [
            "curl/8.4.0"
        ]
    },
    "origin": "156.192.187.116",
    "url": "https://httpbin.dev/get"
}
HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Alt-Svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Content-Length: 288
Content-Security-Policy: frame-ancestors 'self' *.httpbin.dev; font-src 'self' *.httpbin.dev; default-src 'self' *.httpbin.dev; img-src 'self' *.httpbin.dev https://cdn.scrapfly.io; media-src 'self' *.httpbin.dev; script-src 'self' 'unsafe-inline' 'unsafe-eval' *.httpbin.dev; style-src 'self' 'unsafe-inline' *.httpbin.dev https://unpkg.com; frame-src 'self' *.httpbin.dev; worker-src 'self' *.httpbin.dev; connect-src 'self' *.httpbin.dev
Content-Type: application/json; encoding=utf-8
Date: Wed, 06 Mar 2024 17:47:12 GMT
Permissions-Policy: fullscreen=(self), autoplay=*, geolocation=(), camera=()
Referrer-Policy: strict-origin-when-cross-origin
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
X-Content-Type-Options: nosniff
X-Xss-Protection: 1; mode=block
```



**To change the HTTP method, we specify its name before the URL**. For example, here is how to send a POST request with Curlie (the `-v` flag is optional and simply prints the request being sent as well). The same approach works for other HTTP methods (HEAD, PUT, DELETE, etc.):

```shell
curlie -v POST https://httpbin.dev/anything
```





### Adding Headers, Cookies and Body

#### Headers

Adding [headers](https://scrapfly.io/blog/posts/how-to-avoid-web-scraping-blocking-headers) to Curlie is pretty straightforward. All we have to do is **specify the header name and its value separated by a colon**:

```shell
curlie https://httpbin.dev/headers Content-Type:"application/json" User-Agent:"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/113.0"
```



In the above command, we add Content-Type and User-Agent headers. The response reflects the modified headers:

```json
{
    "headers": {
        ....
        "Content-Type": [
            "application/json"
        ],
        "User-Agent": [
            "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/113.0"
        ]
    }
}
```



#### Cookies

[Cookies](https://scrapfly.io/blog/posts/how-to-handle-cookies-in-web-scraping) in Curlie follow the same approach as headers. They can be added through a `Cookie` header with name and value pairs:

```shell
curlie https://httpbin.dev/cookies Cookie:some_cookie=foo
```



```json
{
    "some_cookie": "foo"
}
```



#### Request Body

Lastly, let's explore **adding a request body for POST requests**. For this, we can simply **add the data as key-value pairs**, which will be converted to JSON by Curlie:

```shell
curlie https://httpbin.dev/anything key1=value1 key2=value2
```



```json
{
    ....
    "json": {
        "key1": "value1",
        "key2": "value2"
    }
}
```





### Downloading Files

Sending HTTP requests to download binary data is a common use case. Just like regular cURL, Curlie allows downloading binary data using the `-O` option:

```shell
curlie -O https://web-scraping.dev/assets/pdf/tos.pdf
```



The above Curlie command downloads a PDF from [web-scraping.dev](https://web-scraping.dev/login) to the current directory. To change the download directory, we can use the `--output-dir` option (adding `--create-dirs` creates the target directory if it doesn't exist):

```shell
curlie -O --create-dirs --output-dir /eula/pdfs https://web-scraping.dev/assets/pdf/tos.pdf
```





### Following Redirects

Just like cURL, Curlie doesn't follow HTTP redirects by default. **To follow redirects with Curlie requests, we can use the `--location` (or `-L`) option**:

```shell
curlie --location https://httpbin.dev/absolute-redirect/6
```



The above endpoint redirects the request six times. The `--location` option will allow Curlie to follow them until the final resource:

```json
{
    "args": {

    },
    "headers": {
        "Accept": [
            "application/json, */*"
        ],
        "Accept-Encoding": [
            "gzip"
        ],
        "Host": [
            "httpbin.dev"
        ],
        "User-Agent": [
            "curl/8.4.0"
        ]
    },
    "url": "https://httpbin.dev/get"
}
```



By default, **Curlie follows a maximum of 50 redirects**. This limit can be changed using the `--max-redirs` option:

```shell
curlie -L https://httpbin.dev/absolute-redirect/51 --max-redirs 51
```





### Basic Authentication

Basic authentication requires simple credentials: a **username and password**. For example, requesting `https://httpbin.dev/basic-auth/user/passwd` from a browser prompts for these credentials before proceeding with the request.

To set basic authentication with Curlie, we can use the `--user` (or `-u`) option:

```shell
curlie --user user:passwd --basic https://httpbin.dev/basic-auth/user/passwd
```



From the response, we can see that the request was authenticated:

```json
{
    "authorized": true,
    "user": "user"
}
```



Curlie can also handle other types of authentication, such as **cookie and bearer token authentication**. For detailed instructions, refer to our guide on managing authentication with cURL.
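For instance, a bearer token can be set through a plain `Authorization` header using the same colon syntax shown earlier (the token below is a made-up placeholder):

```shell
# Bearer token auth is just a regular header in Curlie
curlie https://httpbin.dev/bearer Authorization:"Bearer my_test_token"
```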



### Adding Proxies

Websites track IP addresses to identify potential traffic abuse, and block addresses that send suspicious volumes of requests.

Hence, using proxies, especially for web scraping, allows for the **distribution of the traffic load across multiple IP addresses**. This makes it harder for websites to flag any single address, preventing blocks.

[How to Avoid Web Scraper IP Blocking?How IP addresses are used in web scraping blocking. Understanding IP metadata and fingerprinting techniques to avoid web scraper blocks.](https://scrapfly.io/blog/posts/how-to-avoid-web-scraping-blocking-ip-addresses)

To use proxies with Curlie, we can use the `-x` or `--proxy` options, followed by the proxy type, domain, and port:

```shell
# HTTP
curlie -x http://proxy_domain.com:8080 https://httpbin.dev/ip
# HTTPS
curlie -x https://proxy_domain.com:8080 https://httpbin.dev/ip
# Proxies with credentials
curlie -x https://username:password@proxy.proxy_domain.com:8080 https://httpbin.dev/ip
# SOCKS5
curlie -x socks5://proxy_domain.com:8080 https://httpbin.dev/ip
```
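Since Curlie delegates the actual transfer to cURL, the standard proxy environment variables should also work, just as with plain cURL (`proxy_domain.com` below is a placeholder):

```shell
# cURL (and therefore Curlie) honors these environment variables
export http_proxy="http://proxy_domain.com:8080"
export https_proxy="http://proxy_domain.com:8080"
curlie https://httpbin.dev/ip
```

This is convenient for applying one proxy to every command in a script without repeating `-x`.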



For further details on proxies, including their types, differences, and how they compare, refer to our dedicated guide.

[The Complete Guide To Using Proxies For Web ScrapingIntroduction to proxy usage in web scraping. What types of proxies are there? How to evaluate proxy providers and avoid common issues.](https://scrapfly.io/blog/posts/introduction-to-proxies-in-web-scraping)



## FAQ

#### How does Curlie syntax differ from cURL for complex HTTP requests with multiple headers?

Curlie uses HTTPie-style syntax, which is more readable: `curlie URL Header:value Header2:value2` vs [cURL's](https://scrapfly.io/blog/posts/how-to-use-curl-for-web-scraping) `-H "Header: value" -H "Header2: value2"`. Curlie also automatically formats JSON responses and shows headers by default, while cURL requires the `-i` flag for headers.







#### Can Curlie handle custom authentication methods like OAuth2 or JWT bearer tokens?

Yes, Curlie supports various authentication methods, including OAuth2 bearer tokens via an `Authorization:"Bearer your_access_token"` header, basic auth with `--user username:password`, and arbitrary custom headers.







#### Does Curlie support the same proxy protocols as cURL (HTTP, HTTPS, SOCKS5)?

Yes, Curlie supports all the same proxy protocols as cURL, including HTTP (`-x http://proxy:port`), HTTPS (`-x https://proxy:port`), and SOCKS5 (`-x socks5://proxy:port`). It also supports proxy authentication with `-x http://user:pass@proxy:port`.







#### Why would I use Curlie instead of Postman or HTTPie for API testing?

Curlie combines [cURL](https://scrapfly.io/blog/posts/how-to-use-curl-for-web-scraping) with HTTPie's user-friendly syntax and output formatting. It's better for command-line workflows, automation scripts, and when you need cURL compatibility with improved readability. [Postman](https://scrapfly.io/blog/posts/using-api-clients-for-web-scraping-postman) is better suited to GUI-based testing, while HTTPie focuses purely on API testing.







#### How do I troubleshoot 'command not found' errors after installing Curlie on macOS or Linux?

Ensure Curlie is in your PATH by checking `which curlie` or `curlie --version`. On macOS with Homebrew, try `brew link curlie`. On Linux, add the installation directory to PATH in your shell profile (`.bashrc`, `.zshrc`), then restart your terminal or run `source ~/.bashrc`.







#### Can I web scrape with Curlie?

Yes, but Curlie is best suited for extracting small amounts of data or for development and debugging. In a previous guide, we covered using cURL for web scraping; the same techniques apply to Curlie.







#### Are there alternatives for Curlie?

Yes. [Curl Impersonate](https://scrapfly.io/blog/posts/curl-impersonate-scrape-chrome-firefox-tls-http2-fingerprint) is a modified cURL build that avoids blocking by mimicking Chrome and Firefox configurations. Another alternative HTTP client is [Postman](https://scrapfly.io/blog/posts/using-api-clients-for-web-scraping-postman). We have covered both in previous guides.









## Summary

In this article, we explained Curlie, what it is, and how it differs from the regular cURL. We went through a step-by-step guide on using it to configure and send HTTP requests. We have covered:

- Sending HTTP requests with different HTTP methods.
- Configuring the headers and cookies.
- Downloading binary data.
- Adding proxies.



 
