Crawler API Billing
Crawler API billing is simple: the total crawler cost is the sum of the costs of all Web Scraping API calls made during the crawl.
How It Works
Each page crawled is billed as a Web Scraping API request based on your enabled features.
Total crawler cost = Number of pages crawled × Cost per page
| Feature | API Credits Cost |
|---|---|
| Web Scraping (see Web Scraping API Billing) | |
| Base Request Cost (choose one proxy pool) | |
| └ Datacenter proxy (default) | 1 |
| └ Residential proxy (`proxy_pool=public_residential_pool`) | 25 |
| Additional Features (added to base cost) | |
| └ Browser rendering (`render_js=true`) | +5 |
| Content Extraction (see Extraction API Billing) | |
| └ Extraction Template | +1 |
| └ Extraction Prompt (AI-powered) | +5 |
| └ Extraction Model (AI-powered) | +5 |
Note: Proxy pool costs are mutually exclusive: choose either datacenter (1 credit) or residential (25 credits) as your base cost. ASP (Anti-Scraping Protection) may dynamically upgrade the proxy pool to bypass anti-bot protection, which can affect the final cost.
For detailed pricing rules and cost breakdown, see the Web Scraping API Billing documentation.
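To make the composition concrete, here is a minimal Python sketch that adds up the credit values from the table above. The `per_page_cost` helper and its argument names are illustrative, not part of the API; refer to the Web Scraping API Billing page for authoritative pricing.

```python
# Illustrative per-page credit calculation using the values from the table.
# The function and its arguments are hypothetical helpers, not API features.
def per_page_cost(residential=False, render_js=False,
                  extraction_template=False, extraction_ai=False):
    cost = 25 if residential else 1   # base cost: proxy pool (mutually exclusive)
    if render_js:
        cost += 5                     # browser rendering
    if extraction_template:
        cost += 1                     # extraction template
    if extraction_ai:
        cost += 5                     # extraction prompt or model (AI-powered)
    return cost

print(per_page_cost())                  # 1  (datacenter only)
print(per_page_cost(render_js=True))    # 6  (datacenter + browser rendering)
print(per_page_cost(residential=True))  # 25 (residential only)
```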
Cost Examples
Here are a few examples showing how crawler costs are calculated. Remember, each page follows the same billing rules as the Web Scraping API.
Example 1: Basic Crawl (100 pages, no ASP)
Cost: 100 pages × base cost per page. With the default datacenter proxy and no extra features, that is 100 × 1 credit = 100 credits.
Example 2: Crawl with ASP (100 pages)
Cost: 100 pages × (base cost + ASP cost per page). ASP costs vary because the proxy pool may be upgraded dynamically; see Web Scraping API pricing.
Example 3: Crawl with Residential Proxies (100 pages)
Cost: 100 pages × residential proxy base cost (25 credits per page) = 2500 credits total
For exact pricing per feature, visit the Web Scraping API Billing page or check the pricing page.
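The arithmetic behind these examples can be checked directly with the credit values from the feature table; Example 2 is omitted below because ASP surcharges vary per site.

```python
pages = 100

# Example 1: default datacenter proxy, no extra features (1 credit per page)
print(pages * 1)     # 100 credits

# Example 3: residential proxy (25 credits per page)
print(pages * 25)    # 2500 credits
```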
Cost Control
Set Budget Limits
Control costs by setting hard limits on your crawl:
- `max_pages` - Limit total pages crawled
- `max_duration` - Limit crawl duration in seconds
- `max_api_credit_cost` - Stop the crawl when the credit limit is reached
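Here is a sketch of how these limits might be passed when starting a crawl job. The endpoint URL, authentication, and request shape are placeholders for illustration; only `max_pages`, `max_duration`, and `max_api_credit_cost` come from this page.

```python
import requests

# Hypothetical crawl-start request; endpoint and auth are placeholders.
response = requests.post(
    "https://api.example.com/crawl",    # placeholder endpoint
    params={"key": "YOUR_API_KEY"},     # placeholder authentication
    json={
        "url": "https://example.com",
        "max_pages": 500,               # stop after 500 pages
        "max_duration": 3600,           # stop after one hour
        "max_api_credit_cost": 1000,    # stop once 1000 credits are spent
    },
)
print(response.json())
```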
Project Budget Limits
Set crawler-specific budget limits in your project settings to prevent unexpected costs:
- Monthly crawler credit limit
- Per-job credit limit
- Automatic alerts when approaching limits
Cost Optimization Tips
Since each page is billed like a Web Scraping API call, you can reduce costs by:
1. Crawl Only What You Need
- Use path filtering: `include_only_paths` and `exclude_paths`
- Set page limits: `max_pages` to cap total pages
- Limit depth: `max_depth` to focus on nearby pages
- Set budget limits: `max_api_credit_cost` to stop when the budget is reached
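A minimal configuration sketch combining these scoping options; the config shape and path patterns are assumptions for illustration, only the parameter names come from the list above.

```python
# Hypothetical crawl configuration focused on a single site section.
crawl_config = {
    "url": "https://example.com/blog",
    "include_only_paths": ["/blog/*"],   # crawl only blog pages (pattern is illustrative)
    "exclude_paths": ["/blog/tag/*"],    # skip tag listing pages
    "max_pages": 200,                    # hard cap on pages crawled
    "max_depth": 3,                      # stay close to the start URL
    "max_api_credit_cost": 500,          # stop once 500 credits are spent
}
```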
2. Use Caching
Enable caching to avoid re-scraping unchanged pages. Cached pages cost 0 credits when hit within the TTL period.
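A minimal sketch of enabling caching on a crawl, assuming hypothetical `cache` and `cache_ttl` parameters; check the API reference for the actual names.

```python
# Hypothetical caching configuration; parameter names are assumptions.
crawl_config = {
    "url": "https://example.com",
    "cache": True,         # serve unchanged pages from cache instead of re-scraping
    "cache_ttl": 86400,    # cache hits within 24 hours are billed 0 credits
}
```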
3. Choose the Right Features
- Browser rendering: Only enable `render_js=true` if the site requires JavaScript (adds +5 credits)
- ASP: Only enable it if the site has anti-bot protection (it may upgrade the proxy pool)
- Proxy pool: Use datacenter by default (1 credit); switch to residential only when needed (25 credits per page, 25x more expensive)
- Content extraction: Use AI-powered extraction sparingly (Template: +1, Prompt/Model: +5 credits each)
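To compare the trade-offs at a glance, the snippet below lists per-page costs for a few feature combinations using the credit values from the table above (ASP surcharges excluded since they vary per site).

```python
# Per-page credit cost for common feature combinations (values from the table).
scenarios = {
    "datacenter only": 1,
    "datacenter + render_js": 1 + 5,
    "datacenter + extraction prompt": 1 + 5,
    "residential only": 25,
    "residential + render_js": 25 + 5,
}
for name, credits in scenarios.items():
    print(f"{name}: {credits} credits/page")
```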
For detailed cost optimization strategies, see: Web Scraping API Cost Optimization
Billing FAQ
Q: Does pausing a crawler stop billing?
Yes. When you pause a crawler, no new pages are crawled and no new credits are consumed.
Q: Are duplicate URLs counted?
No. The crawler automatically deduplicates URLs. Each unique URL is only crawled once per job.
Q: How are robots.txt requests billed?
Robots.txt and sitemap.xml requests are free and do not consume credits.
Q: What happens if I exceed my budget limit?
The crawler automatically stops when `max_api_credit_cost` is reached. You can resume it by increasing the limit.
Q: Can I get a refund for a failed crawl?
Crawls that fail due to system errors are not billed. For other issues, contact support.