Web Scraper Billing

The Web Scraping API is billed using a credit system: each scrape request costs a set amount of credits based on the enabled features, such as JavaScript rendering, proxy type, and anti-bot bypass. For the general billing policy covering all Scrapfly products, see the Billing Policy & Overview page.

See Your Billing Dashboard

Billing

Each API request returns billing information about the used API credits. The X-Scrapfly-Api-Cost header contains the total amount of API credits used for this request. The complete API use breakdown is available in the context.cost field of the JSON response.

Note that binary and text responses are billed differently. The result.format field indicates the response type: html, json, xml, txt, and similar are billed as TEXT, while image, archive, pdf, and similar are billed as BINARY.
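
As an illustration, here is a minimal sketch of reading the billing data from a scrape response in Python. The https://api.scrapfly.io/scrape endpoint and the key/url query parameters are assumptions used for the example; the X-Scrapfly-Api-Cost header and the context.cost and result.format fields are the ones described above.

```python
# Minimal sketch: inspect billing data returned with a scrape response.
# Endpoint and query parameters are illustrative assumptions.
import requests

response = requests.get(
    "https://api.scrapfly.io/scrape",  # assumed endpoint
    params={"key": "YOUR_API_KEY", "url": "https://example.com/"},
)

# Total API credits billed for this request (documented header)
print("Credits billed:", response.headers.get("X-Scrapfly-Api-Cost"))

data = response.json()
# Detailed cost breakdown and the format used for bandwidth billing
print("Cost breakdown:", data["context"]["cost"])
print("Response format:", data["result"]["format"])  # TEXT vs BINARY billing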

Scenario                         API Credits Cost
Datacenter Proxies               1
Datacenter Proxies + Browser     1 + 5 = 6
Residential Proxies              25
Residential Proxies + Browser    25 + 5 = 30
Some specific domains carry extra fees. Any extra credit fees are always displayed in the cost metrics (such as the cost tab of the monitoring entry).
  • Data responses (.json, .csv, .xml, .txt, etc.) and large HTML responses that exceed 1MB are considered high-bandwidth requests, and bandwidth use beyond the initial 1MB is billed as BINARY bandwidth.
  • Request bodies (POST, PUT, PATCH) exceeding 100KB sent through the API are billed as BINARY bandwidth.
  • With browser rendering, 3MB of data is included; additional data is billed as BINARY bandwidth. Some websites load very large static files (JSON, CSS, JS, etc.) that increase bandwidth usage. Those assets are cached on our private CDN following the HTTP cache policy defined by the website; once an asset is served from our private CDN, it no longer counts toward bandwidth usage.
    Browser rendering already optimizes bandwidth by blocking image loading and caching static assets.
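
As a rough illustration of the rates in the table above, the sketch below estimates the base credit cost of a request from its feature flags. The helper name is hypothetical, and the estimate ignores domain-specific extra fees and bandwidth surcharges; the authoritative figure is always the one reported in X-Scrapfly-Api-Cost and context.cost.

```python
# Illustrative estimator based on the published per-feature rates above.
def estimate_scrape_credits(residential_proxy: bool = False,
                            browser_rendering: bool = False) -> int:
    """Estimate the base API credits for a scrape (extra fees not included)."""
    credits = 25 if residential_proxy else 1   # proxy network cost
    if browser_rendering:
        credits += 5                           # browser rendering surcharge
    return credits

# Residential proxies + browser rendering -> 25 + 5 = 30 credits
print(estimate_scrape_credits(residential_proxy=True, browser_rendering=True))
```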

For more detail, each scrape request has a billing section in the monitoring dashboard with a detailed breakdown of the API credits used.

Downloads are billed in slices of 100KB; the first megabyte is free of charge. The billed size is available in the cost details of the response's context.cost field.

Network Type             API Credits Cost
Datacenter Proxies       3 per 100KB
Residential Proxies      10 per 100KB
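
A small worked sketch of this rule follows, assuming 1MB = 1,000,000 bytes and that every started 100KB slice is billed (the exact rounding is not specified above, so treat both as assumptions):

```python
import math

# Sketch of the download billing rule: first 1MB free, then each started
# 100KB slice is billed at the per-network rate. Byte sizes and rounding
# behaviour are assumptions for illustration.
RATE_PER_SLICE = {"datacenter": 3, "residential": 10}  # credits per 100KB slice

def estimate_download_credits(size_bytes: int, network: str = "datacenter") -> int:
    billable = max(0, size_bytes - 1_000_000)   # first megabyte is free
    slices = math.ceil(billable / 100_000)      # billed in 100KB slices
    return slices * RATE_PER_SLICE[network]

# e.g. a 2.5MB download over residential proxies:
# (2,500,000 - 1,000,000) / 100,000 = 15 slices -> 15 * 10 = 150 credits
print(estimate_download_credits(2_500_000, network="residential"))
```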

The scrape cost is calculated based on the features used (such as browser rendering and proxy type). Note that the ASP feature can adjust other features (for example, upgrading the proxy type) to bypass anti-bot protection, so the final cost can be higher than the initial configuration suggests.

Manage Spending

We offer a variety of tools to help you manage your spending and stay within your budget. Here are some of the ways you can do that:

  • Projects can be used to define a global limit

    Each Scrapfly project can be restricted with a specific credit budget and concurrency limits, and you can disable extra usage.

  • Throttlers can be used to define limits per scraped website and timeframe

    Using the Throttler's Spending Limit feature, each scrape target can be restricted to a specific credit budget for a given period. For example, you can set a budget of 10,000 credits per day for website A and 100,000 credits per month for website B.

  • API calls can be defined with a per-call cost budget

    You can use the cost_budget parameter to set a maximum credit budget for your web scraping requests (see the request sketch after this list).

    • It's important to set a minimum budget high enough for your target to ensure that you can pass through any blocks and cover the cost of any blocked results.
    • The budget only applies to deterministic configuration; costs related to bandwidth usage cannot be known in advance.
    • Regardless of the status code, if the scrape is interrupted because the cost budget has been reached after a scrape attempt has been made, the call is billed based on the settings of that attempt.
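
A minimal request sketch using cost_budget, assuming the https://api.scrapfly.io/scrape endpoint and key/url parameters for illustration (the budget value of 30 credits is only an example):

```python
import requests

# Sketch: cap a single scrape call at 30 API credits with cost_budget.
# Endpoint and key/url parameters are illustrative assumptions.
response = requests.get(
    "https://api.scrapfly.io/scrape",  # assumed endpoint
    params={
        "key": "YOUR_API_KEY",
        "url": "https://example.com/",
        "cost_budget": 30,  # example cap; bandwidth-related costs are not covered
    },
)
print("Credits billed:", response.headers.get("X-Scrapfly-Api-Cost"))
```
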
By default, all accounts have a hard limit on extra usage to avoid any major issues: extra usage cannot exceed 125% of your quota. That means for a quota of 1M API credits, you can use up to 1.25M API credits in extra usage, for a total of 2.25M API credits.

If you reach this limit, the account is suspended and an account manager will reach out to you to resolve the situation.

By using these features, you can better manage your spending and ensure that you stay within your budget when using our web scraping API.

Scrape Failed Protection and Fairness Policy

Scrapfly's Scrape Failed Protection and Fairness Policy ensures that failed scrapes are not billed to our customers. To prevent abuse of this protection, a fairness policy also applies.

Under this policy, if more than 30% of an account's traffic fails with eligible status codes (status codes greater than or equal to 400 and not excluded, see below) over a minimum one-hour period, the fairness policy is disabled and that usage is billed. Additionally, if an account deliberately scrapes a protected website without success and without using our Anti Scraping Protection (ASP), the account may be suspended at the discretion of our account managers.

The following status codes are eligible for our Scrape Failed Protection and Fairness Policy: status codes greater than or equal to 400 and not excluded (see below).

Excluded status codes: 400, 401, 404, 405, 406, 407, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 422, 424, 426, 428, and 456.
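
For reference, a small helper like the sketch below (the function name is hypothetical) captures the eligibility rule: a failed scrape falls under the policy when its status code is 400 or above and not in the excluded list.

```python
# Sketch of the eligibility rule for Scrape Failed Protection.
EXCLUDED_STATUS_CODES = {
    400, 401, 404, 405, 406, 407, 409, 410, 411, 412, 413,
    414, 415, 416, 417, 418, 422, 424, 426, 428, 456,
}

def is_eligible_for_failed_protection(status_code: int) -> bool:
    """Return True if a failed response is covered by the policy."""
    return status_code >= 400 and status_code not in EXCLUDED_STATUS_CODES

print(is_eligible_for_failed_protection(503))  # True: eligible, not billed
print(is_eligible_for_failed_protection(404))  # False: excluded, billed
```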

Tracking Credit Use

Scrapfly paid plans do not have immediate hard limits that would stop critical scraping tasks from operating. Every Scrapfly user can go over their quota, at which point extra pricing is applied for each batch of 10,000 extra credits.

To keep an eye on extra credit use, API responses contain an X-Scrapfly-Remaining-Api-Credit header that indicates the remaining credit count, where 0 means the account is in extra usage mode. You can also retrieve account information (quota, concurrency, and so on) via our Account API.
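
A minimal sketch of checking the remaining credit after a request, again assuming the scrape endpoint and key/url parameters used for illustration above:

```python
import requests

# Sketch: track remaining plan credit via the documented response header.
response = requests.get(
    "https://api.scrapfly.io/scrape",  # assumed endpoint
    params={"key": "YOUR_API_KEY", "url": "https://example.com/"},
)

remaining = int(response.headers["X-Scrapfly-Remaining-Api-Credit"])
if remaining == 0:
    # 0 means the plan quota is exhausted and further use is billed as extra usage
    print("Plan quota exhausted: account is now in extra usage mode.")
else:
    print(f"{remaining} API credits remaining on the current plan.")
```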

Summary