# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)

##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)

##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [Rust](https://scrapfly.io/docs/sdk/rust)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Crawler API Billing


 

 

Crawler API billing is simple and transparent: **crawler cost = the sum of all Web Scraping API calls made during the crawl**.

> **Key Concept:** Each page crawled is billed as a Web Scraping API request based on your enabled features. For general billing policy, see [Billing Policy & Overview](https://scrapfly.io/docs/billing).

 

## How It Works

##### Total Crawler Cost Calculation

Every page the crawler fetches is billed as one Web Scraping API request, so the total cost of a crawl is:

 

`Total Cost = Pages Crawled × Cost per Page`

 

 

 

 


| Feature | API Credits Cost |
|---|---|
| **Web Scraping** (see [Web Scraping API Billing](https://scrapfly.io/docs/scrape-api/billing)) | |
| **Base Request Cost** (choose one proxy pool) | |
| └ [Datacenter proxy](https://scrapfly.io/docs/scrape-api/proxy) (default) | 1 credit |
| └ [Residential proxy](https://scrapfly.io/docs/scrape-api/proxy) `proxy_pool=public_residential_pool` | 25 credits |
| **Additional Features** (added to base cost) | |
| └ [Browser rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering) `render_js=true` | +5 credits |
| **Content Extraction** (see [Extraction API Billing](https://scrapfly.io/docs/extraction-api/billing)) | |
| └ [Extraction Template](https://scrapfly.io/docs/extraction-api/rules-and-template) | +1 credit |
| └ [Extraction Prompt](https://scrapfly.io/docs/extraction-api/llm-prompt) (AI-powered) | +5 credits |
| └ [Extraction Model](https://scrapfly.io/docs/extraction-api/automatic-ai) (AI-powered) | +5 credits |

> **Note:** Proxy pool costs are mutually exclusive: choose either datacenter (1 credit) or residential (25 credits) as your base cost. ASP (Anti-Scraping Protection) may dynamically upgrade the proxy pool to bypass anti-bot protection, which can affect the final cost.

 



Each feature adds to your total cost per page, so enable only what you need to optimize spending.

 

 

> **Learn More:** For detailed pricing rules and cost breakdowns, see the [Web Scraping API Billing documentation](https://scrapfly.io/docs/scrape-api/billing).
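
To make the arithmetic concrete, here is a minimal Python sketch of the cost model described above. The credit values mirror the pricing table, but the helper names are illustrative (not part of any Scrapfly SDK), and the real charge can differ when ASP dynamically upgrades the proxy pool.

```
# Illustrative cost model based on the pricing table above.
# Not an official Scrapfly API: actual billing is computed server-side
# and can differ when ASP upgrades the proxy pool.

BASE_COST = {"datacenter": 1, "public_residential_pool": 25}
EXTRACTION_COST = {"template": 1, "prompt": 5, "model": 5}

def cost_per_page(proxy_pool="datacenter", render_js=False, extraction=None):
    """Estimate API credits for one crawled page."""
    credits = BASE_COST[proxy_pool]             # base request cost
    if render_js:
        credits += 5                            # browser rendering
    if extraction is not None:
        credits += EXTRACTION_COST[extraction]  # content extraction
    return credits

def crawl_cost(pages, **features):
    """Total = Pages Crawled x Cost per Page."""
    return pages * cost_per_page(**features)
```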

## Cost Examples

Here are a few examples showing how crawler costs are calculated. Remember: each page follows the same billing rules as the Web Scraping API.

##### Example 1: Basic Crawl

 

**Configuration:**

 ```
{
  "url": "https://web-scraping.dev",
  "page_limit": 100,
  "asp": false
}
```

 

   

 

 

> **Cost:** 100 pages × 1 credit = **100 credits** 
> (Datacenter proxy, no browser rendering)

 

 

##### Example 2: Crawl with ASP

 

**Configuration:**

 ```
{
  "url": "https://web-scraping.dev",
  "page_limit": 100,
  "asp": true
}
```

 

   

 

 

> **Cost:** 100 pages × (base cost + ASP cost) 
> See [Web Scraping API pricing](https://scrapfly.io/docs/scrape-api/billing) for ASP costs

 

 

##### Example 3: Residential Proxies

 

**Configuration:**

 ```
{
  "url": "https://web-scraping.dev",
  "page_limit": 100,
  "proxy_pool": "public_residential_pool"
}
```

 

   

 

 

> **Cost:** 100 pages × 25 credits = **2500 credits** 
> (Residential proxy, no browser rendering)

 

 

##### Example 4: Full Features

 

**Configuration:**

 ```
{
  "url": "https://web-scraping.dev",
  "page_limit": 50,
  "render_js": true,
  "extraction_model": "product"
}
```

 

   

 

 

> **Cost:** 50 pages × (1 + 5 + 5) = **550 credits** 
> (Datacenter proxy + browser rendering + AI extraction)

 

 

> **Calculate Your Costs:** For exact pricing per feature, visit the [Web Scraping API Billing page](https://scrapfly.io/docs/scrape-api/billing) or check the [pricing page](https://scrapfly.io/pricing).
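
Plugging the worked examples into the illustrative estimator from the How It Works section reproduces the same figures:

```
# Reproduce the worked examples with the illustrative estimator above.
print(crawl_cost(100))                                        # Example 1 -> 100
print(crawl_cost(100, proxy_pool="public_residential_pool"))  # Example 3 -> 2500
print(crawl_cost(50, render_js=True, extraction="model"))     # Example 4 -> 550
# Example 2 (asp=true) has no fixed figure here: ASP pricing is dynamic.
```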

## Cost Control

##### Request-Level Limits

 

Control costs by setting hard limits on your crawl:

- `page_limit` - Limit total pages crawled
- `max_duration` - Limit crawl duration in seconds
- `max_api_credit` - Stop crawl when credit limit is reached
 
 ```
{
  "url": "https://web-scraping.dev",
  "page_limit": 500,
  "max_duration": 1800,
  "max_api_credit": 3000
}
```
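
For illustration, such a config could be submitted with plain HTTP. Below is a minimal Python sketch assuming a POST endpoint and a `key` query parameter; the endpoint path is a placeholder, not the documented URL, so check [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started) for the actual endpoint and authentication details.

```
import requests

API_KEY = "YOUR_SCRAPFLY_KEY"  # from your Scrapfly dashboard

config = {
    "url": "https://web-scraping.dev",
    "page_limit": 500,       # hard cap on pages crawled
    "max_duration": 1800,    # stop after 30 minutes
    "max_api_credit": 3000,  # stop once 3000 credits are spent
}

# NOTE: placeholder endpoint for illustration only; see the Crawler API
# Getting Started guide for the documented URL and auth scheme.
response = requests.post(
    "https://api.scrapfly.io/crawl",
    params={"key": API_KEY},
    json=config,
    timeout=30,
)
response.raise_for_status()
print(response.json())
```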

 

   

 

 

 

 

 

##### Project Budget Limits

 

Set crawler-specific budget limits in your [project settings](https://scrapfly.io/docs/project) to prevent unexpected costs:

- **Monthly crawler credit limit** - cap total spending per month
- **Per-job credit limit** - prevent runaway jobs
- **Automatic alerts** - get notified when approaching limits
 
 

 

 

 

## Cost Optimization Tips

 Since each page is billed like a Web Scraping API call, you can reduce costs by:

##### **1. Crawl Only What You Need**

 

- **Use path filtering:** `include_only_paths` and `exclude_paths` restrict the crawl to relevant sections of the site
- **Set page limits:** `page_limit` caps total pages and prevents unexpected crawl expansion
- **Limit depth:** `max_depth` keeps the crawl focused on nearby pages and avoids deep crawling when it is not needed
- **Set budget limits:** `max_api_credit` stops the job when the budget is reached, a hard cap on spending per job
 
 

 

 

##### **2. Use Caching**

 

Enable caching to avoid re-scraping unchanged pages:

 ```
{
  "url": "https://web-scraping.dev",
  "cache": true,
  "cache_ttl": 86400
}
```

 

   

 

 

> **Savings:** Cached pages cost **0 credits** when hit within the TTL period.
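
As a rough back-of-the-envelope model (an assumption layered on the rule above, not an official formula): if a share `hit_rate` of pages is served from cache within the TTL, only the remaining share is billed.

```
def crawl_cost_with_cache(pages, cost_per_page, hit_rate):
    """Rough estimate: cache hits within the TTL are billed 0 credits."""
    billed_pages = pages * (1 - hit_rate)
    return billed_pages * cost_per_page

# Re-crawling 1000 pages daily at 1 credit/page, with ~80% unchanged pages:
print(crawl_cost_with_cache(1000, cost_per_page=1, hit_rate=0.8))  # -> 200.0
```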

 

 

 

##### **3. Choose the Right Features**

 

- **[Browser rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering):** only enable `render_js=true` if the site requires JavaScript; it adds +5 credits per page
- **[ASP (Anti-Scraping Protection)](https://scrapfly.io/docs/scrape-api/anti-scraping-protection):** only enable it if the site has anti-bot protection; it may upgrade the proxy pool automatically (see the [ASP documentation](https://scrapfly.io/docs/scrape-api/anti-scraping-protection) for details)
- **[Proxy pool](https://scrapfly.io/docs/scrape-api/proxy):** use datacenter by default (1 credit) and switch to residential only when needed; residential proxies are 25x more expensive (25 credits)
- **[Content extraction](https://scrapfly.io/docs/extraction-api/billing):** use AI-powered extraction sparingly; a template adds +1 credit while prompt or model extraction adds +5 credits each (see [Extraction API billing](https://scrapfly.io/docs/extraction-api/billing)); the sketch after this list compares a few common combinations
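
Here is how a few common feature combinations compare per page, again using the illustrative `cost_per_page()` sketch from the How It Works section:

```
# Per-page cost of common configurations, using the illustrative
# cost_per_page() sketch from the How It Works section.
configs = {
    "datacenter, static HTML":         {},
    "datacenter + render_js":          {"render_js": True},
    "residential, static HTML":        {"proxy_pool": "public_residential_pool"},
    "residential + render_js + model": {"proxy_pool": "public_residential_pool",
                                        "render_js": True, "extraction": "model"},
}
for label, kwargs in configs.items():
    print(f"{label}: {cost_per_page(**kwargs)} credits/page")
# datacenter, static HTML: 1
# datacenter + render_js: 6
# residential, static HTML: 25
# residential + render_js + model: 35
```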
 
 

 

 

 

> **Learn More:** For detailed cost optimization strategies, see [Web Scraping API Cost Optimization](https://scrapfly.io/docs/scrape-api/billing#optimization).

## Billing FAQ

**Does pausing a crawler stop billing?**

**Yes.** When you pause a crawler, no new pages are crawled and no new credits are consumed.

 

 

 

**Are duplicate URLs counted?**

**No.** The crawler automatically deduplicates URLs. Each unique URL is only crawled once per job.

 

 

 

**How are robots.txt requests billed?**

`robots.txt` and `sitemap.xml` requests are **free** and do not consume credits.

 

 

 

**What happens if I exceed my budget limit?**

The crawler automatically stops when `max_api_credit` is reached. You can resume it by increasing the limit.

 

 

 

**Can I get a refund for a failed crawl?**

Crawls that fail due to system errors are automatically not billed. For other issues, contact [support](https://scrapfly.io/docs/support).

 

 

 

 

## Related Documentation

- [**Web Scraping API Billing**](https://scrapfly.io/docs/scrape-api/billing) - detailed cost breakdown for scraping features
- [**Extraction API Billing**](https://scrapfly.io/docs/extraction-api/billing) - AI extraction costs and options
- [**Account Billing & Subscriptions**](https://scrapfly.io/docs/billing) - payment policy, invoices, and plan management
- [**Project Budget Management**](https://scrapfly.io/docs/project) - set spending limits and alerts
- [**Pricing Plans**](https://scrapfly.io/pricing) - compare plans and features