Crawler API Billing

Crawler API billing is simple and transparent: the crawler's cost is the sum of the costs of all Web Scraping API calls made during the crawl.

Key Concept: Each page crawled is billed as a Web Scraping API request based on your enabled features. For general billing policy, see Billing Policy & Overview.

How It Works

Total Crawler Cost Calculation

The total cost is the number of pages crawled multiplied by the per-page cost, which is determined by your enabled features:

Total = Pages × Cost/Page
Feature costs per page, in API credits:

Web Scraping (see Web Scraping API Billing)
  Base request cost (choose one proxy pool):
  • Datacenter proxy (default): 1 credit
  • Residential proxy (proxy_pool=public_residential_pool): 25 credits
  Additional features (added to the base cost):
  • Browser rendering (render_js=true): +5 credits

Content Extraction (see Extraction API Billing)
  • Extraction Template: +1 credit
  • Extraction Prompt (AI-powered): +5 credits
  • Extraction Model (AI-powered): +5 credits
Note: Proxy pool costs are mutually exclusive - choose either datacenter (1 credit) or residential (25 credits) as your base cost. ASP (Anti-Scraping Protection) may dynamically upgrade the proxy pool to bypass anti-bot protection, which can affect the final cost.
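To make the arithmetic concrete, here is a minimal sketch of the per-page cost in Python, mirroring the feature list above (ASP upgrades are omitted because their cost is dynamic):

    # Per-page credit cost, mirroring the feature costs above.
    def cost_per_page(residential_proxy=False, render_js=False,
                      extraction_template=False, ai_extraction=False):
        cost = 25 if residential_proxy else 1  # base: proxy pool (choose one)
        if render_js:
            cost += 5                          # browser rendering
        if extraction_template:
            cost += 1                          # extraction template
        if ai_extraction:
            cost += 5                          # extraction prompt or model (AI-powered)
        return cost

    # Total = Pages × Cost/Page
    print(100 * cost_per_page(render_js=True))  # 100 pages × 6 credits = 600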
Crawler API Cost Flow Diagram

Each step adds to your total cost per page - choose only what you need to optimize spending

Learn More: For detailed pricing rules and cost breakdown, see the Web Scraping API Billing documentation.

Cost Examples

Here are a few examples showing how crawler costs are calculated. Remember, each page follows the same billing rules as the Web Scraping API.

Example 1: Basic Crawl

Configuration: default datacenter proxy, no browser rendering.

Cost: 100 pages × 1 credit = 100 credits
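A hypothetical request for this scenario; only max_pages comes from this page, and the url key and start URL are illustrative:

    # Basic crawl: default datacenter proxy, no browser rendering (1 credit/page).
    params = {
        "url": "https://example.com",  # illustrative start URL
        "max_pages": 100,              # 100 pages × 1 credit = 100 credits
    }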
Example 2: Crawl with ASP

Configuration: ASP (Anti-Scraping Protection) enabled on top of the base cost.

Cost: 100 pages × (base cost + ASP cost); see the Web Scraping API pricing for ASP costs.
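A hypothetical configuration for this scenario; the asp flag name is an assumption, as this page does not name the parameter:

    # Crawl with Anti-Scraping Protection: ASP may upgrade the proxy pool,
    # so the per-page cost is dynamic rather than fixed.
    params = {
        "url": "https://example.com",
        "max_pages": 100,
        "asp": True,  # assumed parameter name
    }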
Example 3: Residential Proxies

Configuration: residential proxy pool (proxy_pool=public_residential_pool), no browser rendering.

Cost: 100 pages × 25 credits = 2,500 credits
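A hypothetical configuration using the proxy_pool value documented above:

    # Residential proxies: 25 credits per page.
    params = {
        "url": "https://example.com",
        "max_pages": 100,
        "proxy_pool": "public_residential_pool",  # 100 × 25 = 2,500 credits
    }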
Example 4: Full Features

Configuration: datacenter proxy, browser rendering (render_js=true), and AI-powered extraction.

Cost: 50 pages × (1 + 5 + 5) credits = 550 credits
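A hypothetical configuration; render_js is documented above, while the AI-extraction parameter name is an assumption:

    # Full features: datacenter base (1) + browser rendering (+5) + AI extraction (+5).
    params = {
        "url": "https://example.com",
        "max_pages": 50,
        "render_js": True,                         # +5 credits/page
        "extraction_prompt": "summarize pricing",  # assumed parameter name; +5 credits/page
    }
    # 50 pages × (1 + 5 + 5) credits = 550 credits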
Calculate Your Costs: For exact pricing per feature, visit the Web Scraping API Billing page or check the pricing page.

Cost Control

Request-Level Limits

Control costs by setting hard limits on your crawl:

  • max_pages - Limit total pages crawled
  • max_duration - Limit crawl duration in seconds
  • max_api_credit_cost - Stop crawl when credit limit is reached
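For example, all three limits can be combined on one crawl request; whichever threshold is hit first stops the job (parameter names as listed above, values illustrative):

    # Hard request-level limits: the crawl stops at the first threshold reached.
    params = {
        "url": "https://example.com",
        "max_pages": 500,             # stop after 500 pages
        "max_duration": 3600,         # stop after one hour
        "max_api_credit_cost": 1000,  # stop once 1,000 credits are spent
    }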
Project Budget Limits

Set crawler-specific budget limits in your project settings to prevent unexpected costs:

  • Monthly crawler credit limit
    Cap total spending per month
  • Per-job credit limit
    Prevent runaway jobs
  • Automatic alerts
    Get notified when approaching limits

Cost Optimization Tips

Since each page is billed like a Web Scraping API call, you can reduce costs in several ways (see the sketch after this list):

  • Use path filtering: include_only_paths and exclude_paths
    Only crawl relevant sections of the site
  • Set page limits: max_pages to cap total pages
    Prevent unexpected crawl expansion
  • Limit depth: max_depth to focus on nearby pages
    Avoid deep crawling when not needed
  • Set budget limits: max_api_credit_cost to stop when budget is reached
    Hard cap on spending per job
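Putting these tips together, a tightly scoped crawl might look like the following sketch (path patterns and values are illustrative):

    # Scoped crawl: relevant paths only, shallow depth, hard budget cap.
    params = {
        "url": "https://example.com",
        "include_only_paths": ["/products/*"],  # crawl only relevant sections
        "exclude_paths": ["/blog/*"],           # skip irrelevant sections
        "max_depth": 2,                         # avoid deep crawling
        "max_pages": 200,                       # cap total pages
        "max_api_credit_cost": 500,             # hard spending cap per job
    }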

Enable caching to avoid re-scraping unchanged pages.
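A minimal sketch, assuming cache and cache_ttl as the parameter names (this page does not name them):

    # Cached pages cost 0 credits when re-requested within the TTL.
    params = {
        "url": "https://example.com",
        "cache": True,       # assumed flag name
        "cache_ttl": 86400,  # assumed parameter: reuse cached results for 24 hours
    }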

Savings: Cached pages cost 0 credits when hit within the TTL period.
Learn More: For detailed cost optimization strategies, see Web Scraping API Cost Optimization.

Billing FAQ

Q: Does pausing a crawler stop billing?
Yes. When you pause a crawler, no new pages are crawled and no new credits are consumed.

Q: Are duplicate URLs billed more than once?
No. The crawler automatically deduplicates URLs, so each unique URL is crawled (and billed) only once per job.

Q: Do robots.txt and sitemap.xml requests cost credits?
No. robots.txt and sitemap.xml requests are free and do not consume credits.

Q: What happens when a crawl reaches max_api_credit_cost?
The crawler stops automatically when max_api_credit_cost is reached. You can resume it by increasing the limit.

Q: Are failed crawls billed?
Crawls that fail due to system errors are not billed. For other issues, contact support.

Summary

Crawler cost is the sum of all Web Scraping API calls made during the crawl: each page is billed according to its enabled features (proxy pool, browser rendering, extraction). To keep spending predictable, scope crawls with path filters and depth limits, cap jobs with max_pages, max_duration, and max_api_credit_cost, set project budget limits, and enable caching to avoid paying for unchanged pages.