# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)
##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)
- [Native Browser MCP](https://scrapfly.io/docs/cloud-browser-api/mcp)
- [DevTools Protocol](https://scrapfly.io/docs/cloud-browser-api/cdp-reference)
##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [Rust](https://scrapfly.io/docs/sdk/rust)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# LLM Extraction


 Harness the power of natural language processing to seamlessly extract data from any website. Our advanced models simplify the extraction process by handling technical complexities such as chunking large documents, tokenization, and other NLP tasks. This allows you to focus on what truly matters while we take care of the heavy lifting.

A minimal API call is a `POST` request with the `key` and `extraction_prompt` parameters:

```
https://api.scrapfly.io/extraction?key=&extraction_prompt=<prompt>
```
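The same minimal call can be sketched in Ruby with only the standard library. The `extraction_url` helper and the placeholder key below are illustrative, not part of any official SDK:

```ruby
require 'cgi'

# Illustrative helper (not part of any official SDK): builds the
# Extraction API request URL with a URL-encoded prompt.
def extraction_url(key, prompt)
  "https://api.scrapfly.io/extraction?key=#{CGI.escape(key)}" \
    "&extraction_prompt=#{CGI.escape(prompt)}"
end

url = extraction_url('YOUR_API_KEY', 'Extract the product specification in json format')
# POST the document as the request body, e.g.:
#   Net::HTTP.post(URI(url), File.read('product.html'), 'content-type' => 'text/html')
puts url
```

Note that `CGI.escape` encodes spaces as `+`, which is treated the same as `%20` in a query string.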


#### Benefits

- Ease of Use: No need to worry about technical details; our models manage everything for you.
- Versatile Content Types: Supports various text content types including `text, html, xml, markdown, json, rss, csv`. We plan to support additional content types like `application/pdf` in the future.
 
## Usage

1. **Retrieve your content:** When using the Extraction API, you already have the content. For this example, download the page at <https://web-scraping.dev/product/1> and save it as `product.html` in the directory where you will run the curl commands below:

    ```
    curl https://web-scraping.dev/product/1 -o product.html
    ```
2. **Prepare your prompt:**

    ```
    Extract the product specification in json format
    ```
3. **Call the extraction API** with your prompt [URL-encoded](https://scrapfly.io/web-scraping-tools/urlencode) as `extraction_prompt=Extract%20the%20product%20specification%20in%20json%20format`:

    ```
    curl -X POST \
      -H "content-type: text/html" \
      "https://api.scrapfly.io/extraction?key=&url=https%3A%2F%2Fweb-scraping.dev%2Fproduct%2F1&extraction_prompt=Extract%20the%20product%20specification%20in%20json%20format" \
      -d @product.html
    ```
    **Command explanation:**

    - **`curl -X POST`**: 
        - `curl` is a command-line tool for transferring data with URLs.
        - `-X POST` specifies the HTTP method to be used, which is POST in this case.
    - **`-H "content-type: text/html"`**: 
        - `-H` is used to specify an HTTP header for the request.
        - `"content-type: text/html"` sets the Content-Type header to `text/html`, indicating that the data being sent is HTML.
    - **URL**: 
        - The URL of the API endpoint being accessed, including query parameters for authentication and specifying the target URL and extraction prompt.
        - [ `key`: ](https://scrapfly.io/docs/extraction-api/getting-started#api_param_key) An API key for authentication.
        - [ `url`: ](https://scrapfly.io/docs/extraction-api/getting-started#api_param_url) The URL of the web page to be scraped, [URL-Encoded ](https://scrapfly.io/web-scraping-tools/urlencode).
        - [ `extraction_prompt`: ](https://scrapfly.io/docs/extraction-api/getting-started#api_param_extraction_prompt) A prompt specifying what to extract; in this case, "Extract the product specification in json format".
    - **`-d @product.html`**: 
        - `-d` is used to specify the data to be sent in the POST request body.
        - `@product.html` indicates that the data should be read from a file named `product.html`.
4. **The result:**

    ```
    {
        "content_type": "application/json",
        "data": {
            "description": "Indulge your sweet tooth with our Box of Chocolate Candy. Each box contains an assortment of rich, flavorful chocolates with a smooth, creamy filling. Choose from a variety of flavors including zesty orange and sweet cherry. Whether you're looking for the perfect gift or just want to treat yourself, our Box of Chocolate Candy is sure to satisfy.",
            "features": {
                "brand": "ChocoDelight",
                "care instructions": "Store in a cool, dry place",
                "flavors": "Available in Orange and Cherry flavors",
                "material": "Premium quality chocolate",
                "purpose": "Ideal for gifting or self-indulgence",
                "sizes": "Available in small, medium, and large boxes"
            },
            "packs": [
                {
                    "deliveryType": "1 Day shipping",
                    "packageDimension": "100x230 cm",
                    "packageWeight": "1,00 kg",
                    "variants": "6 available",
                    "version": "Pack 1"
                },
                {
                    "deliveryType": "1 Day shipping",
                    "packageDimension": "200x460 cm",
                    "packageWeight": "2,11 kg",
                    "variants": "6 available",
                    "version": "Pack 2"
                },
                {
                    "deliveryType": "1 Day shipping",
                    "packageDimension": "300x690 cm",
                    "packageWeight": "3,22 kg",
                    "variants": "6 available",
                    "version": "Pack 3"
                },
                {
                    "deliveryType": "1 Day shipping",
                    "packageDimension": "400x920 cm",
                    "packageWeight": "4,33 kg",
                    "variants": "6 available",
                    "version": "Pack 4"
                },
                {
                    "deliveryType": "1 Day shipping",
                    "packageDimension": "500x1150 cm",
                    "packageWeight": "5,44 kg",
                    "variants": "6 available",
                    "version": "Pack 5"
                }
            ],
            "price": "9.99 from 12.99",
            "variants": [
                "orange, small",
                "orange, medium",
                "orange, large",
                "cherry, small",
                "cherry, medium",
                "cherry, large"
            ]
        }
    }
    ```
    *You can change the returned data format using the `content_type` parameter. When the content type is set to `application/json`, Scrapfly returns the JSON object directly (rather than re-encoding it as JSON text inside the JSON response) for simplicity of usage.*
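As a minimal sketch of consuming that result in Ruby, where the payload below is a trimmed, hypothetical copy of the response shape shown above:

```ruby
require 'json'

# Trimmed, hypothetical response mirroring the shape shown above.
response_body = <<~JSON
  {
    "content_type": "application/json",
    "data": {
      "features": { "brand": "ChocoDelight" },
      "price": "9.99 from 12.99"
    }
  }
JSON

result = JSON.parse(response_body)
# With content_type application/json, "data" is already a JSON object
# rather than a re-encoded JSON string, so it can be traversed directly.
brand = result.dig('data', 'features', 'brand')
puts brand  # => ChocoDelight
```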
 
> If you receive the error [ERR::EXTRACTION::DATA\_ERROR](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::DATA_ERROR), read the description provided with the error code: when the LLM is unable to extract the data you are asking for, it explains why. Common causes and fixes:
> - The data you are asking for is not present in the document
> - Be more precise, for example by prefixing the prompt with `In this document,`
> - Use correct semantics related to data extraction, for example replace `retrieve` with `extract`

## Web Scraping API

In this example, we extract the data through the Web Scraping API with the following LLM prompt:

 ```
Present me the product like you are a sales person, summarize it and give the top pro and cons from reviews in bullet point list
```


> Combined with the cache feature, we cache the raw data from the website, allowing you to **re-extract the data with multiple extraction passes** at a **much faster speed** and **lower cost**. This applies to the following extraction types:
> - [Extraction Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
> - [Extraction Model](https://scrapfly.io/docs/extraction-api/automatic-ai)
> - [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
>
> ##### Learn more about the cache feature
>
> - [Cache feature](https://scrapfly.io/docs/scrape-api/cache)
> - [API Specification](https://scrapfly.io/docs/scrape-api/getting-started)

**Ruby**

 ```
# gem install httparty

require 'httparty'
require 'json'

# Build query parameters
params = {
  'tags' => "player,project:default",
  'extraction_prompt' => "Present me the product like you are a sales person, summarize it and give the top pro and cons from reviews in bullet point list",
  'cache' => true,
  'asp' => true,
  'render_js' => true,
  'key' => "__API_KEY__",
  'url' => "https://web-scraping.dev/product/1",
}

url = "https://api.scrapfly.io/scrape"

options = {
  query: params,
  timeout: 160,
  open_timeout: 10,
}

begin
  response = HTTParty.get(url, options)

  # Check for HTTP errors
  unless response.success?
    error_data = response.parsed_response
    error_msg = error_data['message'] || error_data['description'] || 'Request failed'
    raise "HTTP error #{response.code}: #{error_msg}"
  end

  data = response.parsed_response
  puts JSON.pretty_generate(data)

  # Access the scrape result
  puts data['result'] if data['result']

rescue HTTParty::Error => e
  STDERR.puts "Request failed: #{e.message}"
  raise
rescue StandardError => e
  STDERR.puts "Error: #{e.message}"
  raise
end

```

**HTTP**
 ```
https://api.scrapfly.io/scrape?tags=player%2Cproject%3Adefault&extraction_prompt=Present+me+the+product+like+you+are+a+sales+person%2C+summarize+it+and+give+the+top+pro+and+cons+from+reviews+in+bullet+point+list&cache=true&asp=true&render_js=true&key=&url=https%3A%2F%2Fweb-scraping.dev%2Fproduct%2F1
```


 The full [Web Scraping API](https://scrapfly.io/docs/scrape-api/getting-started) response structure where the extracted data is available in the `result.extracted_data.data` field:

 ```
{
    "config" : {
        ...
    },
    "context": {
        ...
    },
    "result": {
        ...
        "content": ".... html content ... too long for the example",
        "content_encoding": "utf-8",
        "content_format": "raw",
        "content_type": "text/html; charset=utf-8",
        "duration": 3.7,
        "error": null,
        "extracted_data": {
            "content_type": "text/plain",
            "data": "Indulge your sweet tooth with our Box of Chocolate Candy! This delightful assortment features rich, flavorful chocolates with a smooth, creamy filling. Choose from zesty orange or sweet cherry flavors, and enjoy the perfect gift or treat yourself. \n\n**Pros:**\n\n* Delicious and flavorful chocolates\n* Variety of flavors to choose from\n* High-quality chocolate\n* Perfect for gifting or self-indulgence\n\n**Cons:**\n\n* Can be a bit pricey\n* Some customers may find the flavors too sweet\n"
        },
        "format": "text",
        "reason": "OK",
        "request_headers": [],
        "response_headers": {
            ...
        },
        "status": "DONE",
        "status_code": 200,
        "success": true,
        "url": "https://web-scraping.dev/product/1"
    }
}
```
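A minimal sketch of pulling the LLM output out of that structure in Ruby; the payload below is a trimmed, hypothetical version of the full response:

```ruby
require 'json'

# Trimmed, hypothetical Web Scraping API response; the real payload
# carries many more fields (config, context, headers, ...).
scrape_response = <<~JSON
  {
    "result": {
      "success": true,
      "status_code": 200,
      "extracted_data": {
        "content_type": "text/plain",
        "data": "Indulge your sweet tooth with our Box of Chocolate Candy!"
      }
    }
  }
JSON

parsed = JSON.parse(scrape_response)
# The LLM output lives under result.extracted_data.data
extracted = parsed.dig('result', 'extracted_data', 'data')
puts extracted  # => Indulge your sweet tooth with our Box of Chocolate Candy!
```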


## 🔥 Popular LLM Integrations

Scrapfly integrates directly with well-known tools to simplify LLM data retrieval.

### LlamaIndex

 LlamaIndex, formerly known as GPT Index, is a data framework designed to facilitate the connection between large language models (LLMs) and a wide variety of data sources. It provides tools to effectively ingest, index, and query data within these models.

 [ Integrate Scrapfly with LlamaIndex ](https://docs.llamaindex.ai/en/stable/examples/data_connectors/WebPageDemo/?h=scrap#using-scrapfly)

### Langchain

 LangChain is a robust framework designed for developing applications powered by language models. It focuses on enabling the creation of applications that can leverage the capabilities of large language models (LLMs) for a variety of use cases.

 [ Integrate Scrapfly with Langchain ](https://python.langchain.com/v0.2/docs/integrations/document_loaders/scrapfly/#scrapfly)

## Limitations

- The maximum prompt length is 10,000 characters.
- Under heavy load (>1k req/s) we may not be able to fulfill every request; with GPU shortages and quotas, scaling capacity is limited. Requests are prioritized by plan, and the free plan has the lowest priority. You will get the error code `ERR::EXTRACTION::OUT_OF_CAPACITY` if this happens.
- The maximum prompt execution time is 25 seconds. The biggest factor is the output (response) size: the bigger the output, the longer it takes to process. We observe a TPS (tokens per second) between 120 and 150, and we expect to reach 500 tokens per second by the end of the year.
    - **Web Scraping API**
        - [Check out the timeout documentation](https://scrapfly.io/docs/scrape-api/understand-timeout)
        - [Timeout parameter specification](https://scrapfly.io/docs/scrape-api/getting-started#api_param_timeout)
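When `ERR::EXTRACTION::OUT_OF_CAPACITY` does occur, retrying with backoff is a reasonable client-side response. The sketch below is illustrative; the policy (3 attempts, doubling delay) is an assumption, not an official recommendation:

```ruby
# Illustrative retry helper; the policy (3 attempts, doubling delay)
# is an assumption, not an official Scrapfly recommendation.
def with_capacity_retry(max_attempts: 3, base_delay: 1.0)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue RuntimeError => e
    # Only retry the out-of-capacity error; re-raise everything else.
    raise unless e.message.include?('ERR::EXTRACTION::OUT_OF_CAPACITY')
    raise if attempts >= max_attempts
    sleep(base_delay * (2**(attempts - 1)))
    retry
  end
end

# Usage sketch (call_extraction_api is a placeholder for your API call):
# result = with_capacity_retry { call_extraction_api(prompt) }
```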
 
## Error Handling

 All related errors are listed below. You can see the full description and an example error response in the [Errors section](https://scrapfly.io/docs/extraction-api/errors).

- [ERR::EXTRACTION::CONFIG\_ERROR](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::CONFIG_ERROR "Parameters sent to the API are not valid") - Parameters sent to the API are not valid
- [ERR::EXTRACTION::CONTENT\_TYPE\_NOT\_SUPPORTED](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::CONTENT_TYPE_NOT_SUPPORTED "The content type of the response is not supported for extraction.") - The content type of the response is not supported for extraction.
- [ERR::EXTRACTION::DATA\_ERROR](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::DATA_ERROR "Extracted data is invalid or has an issue") - Extracted data is invalid or has an issue
- [ERR::EXTRACTION::INVALID\_RULE](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::INVALID_RULE "The extraction rule is invalid") - The extraction rule is invalid
- [ERR::EXTRACTION::INVALID\_TEMPLATE](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::INVALID_TEMPLATE "The template used for extraction is invalid") - The template used for extraction is invalid
- [ERR::EXTRACTION::NO\_CONTENT](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::NO_CONTENT "Target response is empty") - Target response is empty
- [ERR::EXTRACTION::OPERATION\_TIMEOUT](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::OPERATION_TIMEOUT "Extraction operation timeout") - Extraction operation timeout
- [ERR::EXTRACTION::OUT\_OF\_CAPACITY](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::OUT_OF_CAPACITY "Not able to extract more data; the backend is out of capacity, retry later.") - Not able to extract more data; the backend is out of capacity, retry later.
- [ERR::EXTRACTION::TEMPLATE\_NOT\_FOUND](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::TEMPLATE_NOT_FOUND "The provided template does not exist") - The provided template does not exist
- [ERR::EXTRACTION::TIMEOUT](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::TIMEOUT "The extraction took too long or did not have enough time to complete") - The extraction took too long or did not have enough time to complete
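On the client side it can help to split these codes into retryable and non-retryable groups. The classification below is an assumption for illustration, not an official one:

```ruby
# Assumed classification: capacity/timeout errors are transient,
# everything else points at the request or template itself.
RETRYABLE_CODES = [
  'ERR::EXTRACTION::OUT_OF_CAPACITY',
  'ERR::EXTRACTION::OPERATION_TIMEOUT',
  'ERR::EXTRACTION::TIMEOUT'
].freeze

def classify_extraction_error(code)
  RETRYABLE_CODES.include?(code) ? :retry : :fix_request
end

puts classify_extraction_error('ERR::EXTRACTION::OUT_OF_CAPACITY')  # => retry
puts classify_extraction_error('ERR::EXTRACTION::CONFIG_ERROR')     # => fix_request
```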
 
## Pricing

 LLM extraction is billed **5 API Credits**.

For more information about pricing, see the [dedicated billing section](https://scrapfly.io/docs/extraction-api/billing).