In this guide, we'll explain how to copy requests as cURL with Edge. We'll copy the requests for the review data on web-scraping.dev. However, the same approach can be applied to other websites as well:
1. Go to the page URL where you want to copy the requests.
2. Open the browser developer tools in Edge by pressing the F12 key.
3. Select the Network tab from the top bar.
4. Empty the request log (Ctrl + L) to clear it for the desired request.
5. Activate the request to record it. The exact trigger differs based on the target, such as:
Scrolling down.
Clicking on a specific link.
Clicking on the next pagination button.
Filtering the data using filter buttons.
Searching for specific data.
6. Filter the requests by the target request type: Doc (HTML) or Fetch/XHR (JSON). The recorded requests will then be listed:
7. Identify the target request to copy by clicking it and reviewing its response.
8. Right-click on the request, select Copy, and then Copy as cURL (bash):
The request is now copied as a cURL command to the clipboard.
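For reference, here is a hypothetical example of what the copied command can look like. The URL, headers, and cookies in your command will depend on the request you selected; the values below are placeholders, not Edge's exact output:

```bash
# Illustrative "Copy as cURL (bash)" output — URL and header values are placeholders.
curl 'https://web-scraping.dev/reviews' \
  -H 'accept: text/html,application/xhtml+xml' \
  -H 'accept-language: en-US,en;q=0.9' \
  -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36 Edg/124.0.0.0' \
  --compressed
```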
Optional: convert the cURL request into Python using the cURL to Python tool.
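As a rough idea of the result, the converted Python code usually resembles the sketch below, which uses the requests library. The URL and headers are illustrative placeholders carried over from the cURL example above, not the tool's exact output:

```python
# Minimal sketch of a converted request using the requests library.
# The URL and headers are illustrative placeholders.
import requests

headers = {
    "accept": "text/html,application/xhtml+xml",
    "accept-language": "en-US,en;q=0.9",
    "user-agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36 Edg/124.0.0.0"
    ),
}

response = requests.get("https://web-scraping.dev/reviews", headers=headers)
print(response.status_code)
print(response.text[:500])  # preview the response body
```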
Optional: convert the request into ScrapFly API requests from the ScrapFly API player.
We have explained how to copy requests as cURL and convert them into Python. However, the same approach can be used to convert cURL into Node.js and other programming languages using HTTP clients. For further details, refer to our dedicated guide on Postman.