Crawler API Billing
Crawler API billing is simple and transparent: the crawler cost is the sum of the costs of all Web Scraping API calls made during the crawl.
How It Works
Total Crawler Cost Calculation
Each page crawled is billed as a Web Scraping API request based on your enabled features.
Total = Pages × Cost/Page
| Feature | API Credits Cost |
|---|---|
| Web Scraping (see Web Scraping API Billing) | |
| Base Request Cost (choose one proxy pool) | |
| └ Datacenter proxy (default) | 1 credit |
| └ Residential proxy (`proxy_pool=public_residential_pool`) | 25 credits |
| Additional Features (added to base cost) | |
| └ Browser rendering (`render_js=true`) | +5 credits |
| Content Extraction (see Extraction API Billing) | |
| └ Extraction Template | +1 credit |
| └ Extraction Prompt (AI-powered) | +5 credits |
| └ Extraction Model (AI-powered) | +5 credits |
Note: Proxy pool costs are mutually exclusive - choose either datacenter (1 credit) or residential (25 credits) as your base cost. ASP (Anti-Scraping Protection) may dynamically upgrade the proxy pool to bypass anti-bot protection, which can affect the final cost.
Each feature adds to your total cost per page - enable only what you need to optimize spending.
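To make the arithmetic concrete, here is a minimal Python sketch of the per-page and total cost calculation using the credit values from the table above. The `crawl_cost` function and its argument names are hypothetical and purely illustrative; they are not part of the API.

```python
def crawl_cost(pages: int,
               residential: bool = False,
               render_js: bool = False,
               extraction_template: bool = False,
               extraction_prompt: bool = False,
               extraction_model: bool = False) -> int:
    """Estimate total crawler credits: Total = Pages x Cost/Page."""
    # Base request cost: datacenter (1 credit) or residential (25 credits).
    # The proxy pools are mutually exclusive, so this is one or the other.
    per_page = 25 if residential else 1
    # Additional features stack on top of the base cost.
    if render_js:
        per_page += 5   # browser rendering
    if extraction_template:
        per_page += 1   # extraction template
    if extraction_prompt:
        per_page += 5   # AI-powered extraction prompt
    if extraction_model:
        per_page += 5   # AI-powered extraction model
    return pages * per_page

# 100 pages over datacenter proxies with browser rendering:
print(crawl_cost(100, render_js=True))  # 100 x (1 + 5) = 600 credits
```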
Cost Examples
Here are a few examples showing how crawler costs are calculated. Remember, each page follows the same billing rules as the Web Scraping API.
Example 1: Basic Crawl
Configuration: Datacenter proxy, no browser rendering
Cost per page: 1 credit
Example 2: Crawl with ASP
Configuration: Default datacenter proxy with ASP enabled
Cost per page: 1 credit as the base; if ASP upgrades the proxy pool to bypass anti-bot protection, the per-page cost rises accordingly (up to 25 credits for residential)
Example 3: Residential Proxies
Configuration: Residential proxy, no browser rendering
Cost per page: 25 credits
Example 4: Full Features
Configuration: Datacenter proxy + browser rendering + AI extraction
Cost per page: 1 + 5 + 5 = 11 credits
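Using the hypothetical `crawl_cost` sketch from above, the example configurations work out as follows for an illustrative crawl of 100 pages (the page count is arbitrary; only the per-page credit values come from the pricing table). Example 2 has no fixed figure because ASP decides the proxy pool at runtime.

```python
pages = 100  # illustrative page count

print(crawl_cost(pages))                    # Example 1: 100 x 1  = 100 credits
print(crawl_cost(pages, residential=True))  # Example 3: 100 x 25 = 2500 credits
print(crawl_cost(pages, render_js=True,
                 extraction_prompt=True))   # Example 4: 100 x (1 + 5 + 5) = 1100 credits
```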
Cost Control
Request-Level Limits
Control costs by setting hard limits on your crawl:
- `max_pages` - Limit total pages crawled
- `max_duration` - Limit crawl duration in seconds
- `max_api_credit_cost` - Stop the crawl when the credit limit is reached
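As a sketch only, these limits could be collected in a configuration like the following. The parameter names come from this page, but the dict structure and values are illustrative, and how they are actually passed to the API may differ.

```python
# Hypothetical crawl-limit settings; values are illustrative only.
crawl_limits = {
    "max_pages": 500,             # stop after 500 pages
    "max_duration": 3600,         # stop after one hour (seconds)
    "max_api_credit_cost": 2000,  # stop once 2000 API credits are spent
}
```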
Project Budget Limits
Set crawler-specific budget limits in your project settings to prevent unexpected costs:
- Monthly crawler credit limit - Cap total spending per month
- Per-job credit limit - Prevent runaway jobs
- Automatic alerts - Get notified when approaching limits
Cost Optimization Tips
Since each page is billed like a Web Scraping API call, you can reduce costs by:
- Use path filtering: `include_only_paths` and `exclude_paths` - Only crawl relevant sections of the site
- Set page limits: `max_pages` to cap total pages - Prevent unexpected crawl expansion
- Limit depth: `max_depth` to focus on nearby pages - Avoid deep crawling when not needed
- Set budget limits: `max_api_credit_cost` to stop when the budget is reached - Hard cap on spending per job
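For example, a scoping configuration along these lines keeps the crawl focused and complements the hard limits shown earlier. The dict layout and the path values are hypothetical; only the parameter names appear on this page.

```python
# Hypothetical crawl-scope settings; path values are illustrative only.
crawl_scope = {
    "include_only_paths": ["/products/"],     # only crawl the product section
    "exclude_paths": ["/products/reviews/"],  # skip review pages within it
    "max_depth": 3,                           # stay close to the start URL
    "max_pages": 1000,                        # cap total pages crawled
}
```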
Enable caching to avoid re-scraping unchanged pages. Beyond that, enable costly features only when they are actually needed:
- Browser rendering: Only enable `render_js=true` if the site requires JavaScript - Adds +5 credits per page
- ASP (Anti-Scraping Protection): Only enable if the site has anti-bot protection - May upgrade the proxy pool automatically; see the ASP documentation for details
- Proxy pool: Use datacenter by default (1 credit); switch to residential only when needed - Residential proxies are 25x more expensive (25 credits)
- Content extraction: Use AI-powered extraction sparingly - Template: +1 credit, Prompt/Model: +5 credits each; see Extraction API Billing
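As a final illustration, a minimal per-page feature selection might look like the hypothetical sketch below; only `render_js` and the `public_residential_pool` value are taken from this page, and the dict layout is assumed.

```python
# Hypothetical per-page feature settings with their cost impact noted.
page_features = {
    "render_js": False,  # keep off unless the site needs JavaScript (+5 credits per page)
    # Datacenter proxies are the default (1 credit per page); switching to
    # "proxy_pool": "public_residential_pool" raises the base cost to 25 credits.
}
```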
Billing FAQ
Can I pause a crawl to stop spending credits?
Yes. When you pause a crawler, no new pages are crawled and no new credits are consumed.
Am I billed again if the same URL is discovered multiple times?
No. The crawler automatically deduplicates URLs. Each unique URL is only crawled once per job.
Do robots.txt and sitemap.xml requests consume credits?
Robots.txt and sitemap.xml requests are free and do not consume credits.
What happens when my credit limit is reached?
The crawler automatically stops when max_api_credit_cost is reached. You can resume it by increasing the limit.
Are failed crawls billed?
Failed crawls (system errors) are automatically not billed. For other issues, contact support.
Related Documentation
- Web Scraping API Billing - Detailed cost breakdown for scraping features
- Extraction API Billing - AI extraction costs and options
- Payment policy, invoices, and plan management
- Set spending limits and alerts
- Compare plans and features