API Specification
Discover how to use the Scrapfly API: the basics, available parameters and features, error handling, and other information related to using the API.
On Steroids
- Smart defaults - scrape without being blocked. Scrapfly pre-configures the user-agent and other request headers.
- The Anti Scraping Protection feature bypasses anti-scraping systems.
- By default, the API responds in JSON; a more efficient msgpack format is also available by setting the accept: application/msgpack header (see the curl sketch after this list).
- Gzip compression is available through the content-encoding: gzip header.
- Text content is returned as utf-8 while binary content is encoded in base64.
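For instance, opting into both msgpack and gzip with curl could look like this. This is a minimal sketch: the api.scrapfly.io/scrape endpoint and the key/url query parameters are assumptions to adapt to your setup.

```bash
# Ask for msgpack instead of JSON and let curl negotiate gzip transparently.
# SCRAPFLY_KEY is a placeholder environment variable holding your API key.
curl -s --compressed \
  -H "accept: application/msgpack" \
  "https://api.scrapfly.io/scrape?key=$SCRAPFLY_KEY&url=https%3A%2F%2Fexample.com" \
  -o response.msgpack
```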
Quality of Life
- All scrape requests and metadata are automatically tracked on a Web Dashboard
- Multi project/scraper support through Project Management
- Ability to debug and replay scrape requests from the dashboard log page.
- Experiment with the Visual API playground
- Status page with notification subscription.
- Full API transparency through useful meta headers (shown below):
  - X-Scrapfly-Api-Cost: API credits billed for the request
  - X-Scrapfly-Remaining-Api-Credit: remaining API credit; if 0, usage is billed as extra credit
  - X-Scrapfly-Account-Concurrent-Usage: current concurrency usage of your account
  - X-Scrapfly-Account-Remaining-Concurrent-Usage: remaining concurrency allowed by the account
  - X-Scrapfly-Project-Concurrent-Usage: concurrency usage of the project
  - X-Scrapfly-Project-Remaining-Concurrent-Usage: remaining project concurrency if a limit is set on the project; otherwise equal to the account value
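To see these headers in practice, you can dump them with curl; a minimal sketch under the same endpoint assumptions as above:

```bash
# Discard the body (-o /dev/null), print the headers (-D -) and keep only
# the Scrapfly meta headers.
curl -s -o /dev/null -D - \
  "https://api.scrapfly.io/scrape?key=$SCRAPFLY_KEY&url=https%3A%2F%2Fexample.com" \
  | grep -i '^x-scrapfly-'
```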
Billing
Each API request returns billing information about the API credits used. The X-Scrapfly-Api-Cost header contains the total amount of API credits billed for the request, and the complete cost breakdown is available in the context.cost field of the JSON response.
Note that binary and text responses are billed differently. The result.format field indicates the response type: html, json, xml, txt and similar are billed as TEXT, while image, archive, pdf and similar are billed as BINARY.
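For example, you can pull both fields out of the JSON response with jq; a minimal sketch under the same endpoint assumptions as above:

```bash
# Show how the request was billed (TEXT vs BINARY) and the cost breakdown.
curl -s "https://api.scrapfly.io/scrape?key=$SCRAPFLY_KEY&url=https%3A%2F%2Fexample.com" \
  | jq '{format: .result.format, cost: .context.cost}'
```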
Scenario | API Credits Cost |
---|---|
Datacenter Proxies | 1 |
Datacenter Proxies + Browser | 1 + 5 = 6 |
Residential Proxies | 25 |
Residential Proxies + Browser | 25 + 5 = 30 |
Some specific domains have extra fees. Any credit fees are always displayed in cost metrics (like the cost tab in the monitoring entry).
Data responses (.json, .csv, .xml, .txt, etc.) that exceed 1Mb are considered high-bandwidth requests, and bandwidth use beyond the initial 1Mb is billed as BINARY bandwidth.
Request bodies (POST, PUT, PATCH) exceeding 100Kb are billed as BINARY bandwidth as well.
For more on billing, each scrape request features a billing section on the monitoring dashboard with a detailed breakdown of the API credits used.
Downloads are billed in slices of 100kb. The billed size is available in the cost details of the response's context.cost field.
Network Type | API Credits Cost |
---|---|
Datacenter Proxies | 3 per 100kb |
Residential Proxies | 10 per 100kb |
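For example, a 250kb download over residential proxies is rounded up to three 100kb slices and billed 3 × 10 = 30 API credits; the same download over datacenter proxies would cost 3 × 3 = 9.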
Manage Spending (Limits, Budgets, Predictable Spend)
We offer a variety of tools to help you manage your spending and stay within your budget. Here are some of the ways you can do that:
- Projects can be used to define a global limit: each Scrapfly project can be restricted with a specific credit budget and concurrency limits.
- Throttlers can be used to define limits per scraped website and timeframe: using the Throttler's Spending Limit feature, each scrape target can be restricted with a specific credit budget for a given period. For example, you can set a budget of 10,000 credits per day for website A and 100,000 credits per month for website B.
- API calls can be defined with a per-call cost budget: use the cost_budget parameter to set a maximum budget for a web scraping request (see the sketch after this list).
  - It's important to set a minimum budget high enough for your target to ensure that you can pass through any blocks and pay for any blocked results.
  - The budget only applies to deterministic configuration; costs related to bandwidth usage cannot be known in advance.
  - Regardless of the status code, if the scrape is interrupted because the cost budget has been reached and a scrape attempt has been made, the call is billed based on the scrape attempt settings.
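A minimal sketch of a budgeted call, under the same endpoint assumptions as the earlier examples:

```bash
# Cap this call at 30 API credits via the cost_budget parameter.
curl -s "https://api.scrapfly.io/scrape?key=$SCRAPFLY_KEY&url=https%3A%2F%2Fexample.com&cost_budget=30"
```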
By using these features, you can better manage your spending and ensure that you stay within your budget when using our web scraping API.
Scrape Failed Protection and Fairness Policy
Scrapfly's Scrape Failed Protection and Fairness Policy is in place to ensure that failed scrapes are not billed to our customers. To prevent any abuse of our system, we also have a fairness policy in place.
Under this policy, if more than 30% of an account's traffic fails with eligible status codes (status codes greater than or equal to 400 and not excluded, see below) within a minimum one-hour period, the fairness policy is disabled and the usage is billed. Additionally, if an account deliberately scrapes a protected website without success and without using our Anti Scraping Protection (ASP), the account may be suspended at the discretion of our account managers.
The following status codes are eligible for our Scrape Failed Protection and Fairness Policy: status codes greater than or equal to 400 and not excluded (see below).
Excluded status codes: 400, 401, 404, 405, 406, 407, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 422, 424, 426, 428, and 456.
Errors
Scrapfly uses conventional HTTP response codes to indicate the success or failure of an API request.
Codes in the 2xx range indicate success.
Codes in the 4xx range indicate an error caused by the information provided (e.g., a required parameter was omitted, the operation was not permitted, max concurrency was reached, etc.).
Codes in the 5xx range indicate an error with Scrapfly's servers.
HTTP 422 - Request Failed responses provide extra headers in order to help as much as possible:
- X-Scrapfly-Reject-Code: the error code
- X-Scrapfly-Reject-Description: URL of the related documentation
- X-Scrapfly-Reject-Retryable: indicates whether the scrape is retryable
It is important to properly handle HTTP client errors so you can access the error headers and body; these details contain valuable information for troubleshooting, resolving the issue, or reaching out to support.
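A minimal sketch of such handling in the terminal, under the same endpoint assumptions as the earlier examples:

```bash
# Keep headers and body around so the reject details survive a failure.
status=$(curl -s -D headers.txt -o body.json -w '%{http_code}' \
  "https://api.scrapfly.io/scrape?key=$SCRAPFLY_KEY&url=https%3A%2F%2Fexample.com")
if [ "$status" = "422" ]; then
  # Error code, documentation URL and retryable flag:
  grep -i '^x-scrapfly-reject' headers.txt
fi
```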
HTTP Status Code Summary
Status Code | Description |
---|---|
200 - OK | Everything worked as expected. |
400 - Bad Request | The request was unacceptable, often due to a missing required parameter, a bad value, or a bad format. |
401 - Unauthorized | No valid API key provided. |
402 - Payment Required | A payment issue occurred and needs to be resolved. |
403 - Forbidden | The API key doesn't have permissions to perform the request. |
422 - Request Failed | The parameters were valid but the request failed. |
429 - Too Many Requests | All free quota used, max allowed concurrency reached, or domain throttled. |
500, 502, 503 - Server Errors | Something went wrong on Scrapfly's end. |
504 - Timeout | The scrape timed out. |
You can check out the full error list to learn more.
Specification
Discover and learn the full potential of our API to scrape the desired targets.
If you have any questions you can check out the
Frequently asked question section
and ultimately ask on our chat.
By default, the API has a read timeout of 155 seconds. To avoid read timeout errors, you must configure your HTTP client's read timeout to 155 seconds. If you need a different timeout value, please refer to the documentation on controlling the timeout.
Try out the API straight in your terminal using curl:
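A minimal sketch (the endpoint and key/url parameters are assumptions, as in the earlier examples; --max-time caps the whole transfer so the client respects the 155-second read timeout noted above):

```bash
curl -s --max-time 155 \
  "https://api.scrapfly.io/scrape?key=$SCRAPFLY_KEY&url=https%3A%2F%2Fexample.com"
```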
Want to try out the API without coding? Check out our visual API playground to test the API and generate code.
The default response format is JSON, and the scraped content is available in result.content. Your scrape configuration is present in config, and other activated feature information is available in context.
To get the HTML page directly, refer to the proxified_response parameter.
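A minimal sketch, assuming proxified_response is passed as a query flag like the other parameters:

```bash
# Receive the scraped page directly as the response body and save it.
curl -s "https://api.scrapfly.io/scrape?key=$SCRAPFLY_KEY&url=https%3A%2F%2Fexample.com&proxified_response=true" \
  -o page.html
```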