# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)
##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)
##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Scrapfly Extraction API


The Extraction API extracts structured data from any text content such as HTML, text or Markdown using AI, LLMs, and custom parsing instructions.

With the Extraction API, the data parsing possibilities are essentially endless, but here are the most common use cases:

- Use LLM to ask questions about the content.
- Use LLM to extract structured data like JSON or CSV and apply data formatting or conversions.
- Use predefined extraction models for automatic extraction of product, review, real-estate listing and article data.
- Use custom extraction templates to parse data exactly as specified.
 
> If you need to **combine Web Scraping and Data Extraction**, use our [Web Scraping API](https://scrapfly.io/docs/scrape-api/extraction) which directly integrates the extraction API.

The minimal API call is a `POST` request with the `key` parameter and one of the extraction options: `extraction_template`, `extraction_prompt` or `extraction_model`.

 ```
https://api.scrapfly.io/extraction?key=
```
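As a sketch, the minimal call above could be assembled in Python with only the standard library; the API key and document body here are placeholders, and the real request is left commented out:

```python
import urllib.parse
import urllib.request

API_KEY = "scp-live-xxx"  # placeholder: use your own API key

# Build the minimal query string: key plus one extraction option.
# urlencode() handles the required URL encoding of the prompt.
params = urllib.parse.urlencode({
    "key": API_KEY,
    "extraction_prompt": "Extract the price and currency of the product in json format",
})
url = "https://api.scrapfly.io/extraction?" + params

# POST the document body with its content type.
html = b"<html><body><span class='price'>$9.99</span></body></html>"
request = urllib.request.Request(
    url, data=html, headers={"content-type": "text/html"}, method="POST"
)
# response = urllib.request.urlopen(request)  # performs the real API call
```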


See the dedicated feature docs for each extraction type:

 [  Auto Model ](https://scrapfly.io/docs/extraction-api/automatic-ai#usage) [  LLM Prompt ](https://scrapfly.io/docs/extraction-api/llm-prompt#usage) [  Template Extraction ](https://scrapfly.io/docs/extraction-api/rules-and-template#usage) 


## On Steroids 

- Three different **extraction methods** available: 
    - [Custom Structured Extraction Templates](https://scrapfly.io/docs/extraction-api/rules-and-template)
    - [LLM Prompting](https://scrapfly.io/docs/extraction-api/llm-prompt)
    - [Automatic Extraction AI Models](https://scrapfly.io/docs/extraction-api/automatic-ai)
- **Automatically** prepare content for extraction.
- **Data quality metrics** are available (for the predefined AI extraction models).
- **[Automatic decompression](#automatic_decompression)**
 
## Quality of Life 

- Multi project/env support through [Project Management](https://scrapfly.io/docs/project)
- Server-side cache for repeated extraction requests through the `cache` parameter.
- [Status page](https://scrapfly.statuspage.io/) with a notification subscription.
- Full API transparency through useful meta headers: 
    - **X-Scrapfly-Api-Cost** API Cost billed
    - **X-Scrapfly-Remaining-Api-Credit** Remaining API Credit; if 0, the request is billed as extra credit
    - **X-Scrapfly-Account-Concurrent-Usage** Current concurrency usage of your account
    - **X-Scrapfly-Account-Remaining-Concurrent-Usage** Remaining concurrency allowed for the account
    - **X-Scrapfly-Project-Concurrent-Usage** Concurrency usage of the project
    - **X-Scrapfly-Project-Remaining-Concurrent-Usage** Remaining project concurrency if a limit is set on the project; otherwise equal to the account concurrency
     
     Concurrency is based on the subscription tier
 
## Billing 

 Scrapfly uses a credit system to bill Extraction API requests.

 All extraction methods have a fixed cost of  **5 API Credits**.

 For details, see [Extraction API Billing](https://scrapfly.io/docs/extraction-api/billing).

## Errors 

 Scrapfly uses conventional HTTP response codes to indicate the success or failure of an API request.

 **Codes in the 2xx** range indicate success.

 **Codes in the 4xx** range indicate a request that failed given the information provided (e.g., a required parameter was omitted, the action was not permitted, max concurrency was reached, etc.).

 **Codes in the 5xx** range indicate an error with Scrapfly's servers.

---

 **HTTP 422 - Request Failed** responses provide extra headers to help as much as possible:

- **X-Scrapfly-Reject-Code:** Error Code
- **X-Scrapfly-Reject-Description:** URL to the related documentation
- **X-Scrapfly-Reject-Retryable:** Indicate if the request is retryable
 
> It is important to handle HTTP client errors properly in order to access the error headers and body. These details contain valuable information for troubleshooting, resolving the issue, or reaching out to support.

### HTTP Status Code Summary

| 200 - OK | Everything worked as expected. |
|---|---|
| 400 - Bad Request | The request was unacceptable, often due to a missing required parameter, a bad value, or a bad format. |
| 401 - Unauthorized | No valid API key provided. |
| 402 - Payment Required | A payment issue occurred and needs to be resolved. |
| 403 - Forbidden | The API key doesn't have permission to perform the request. |
| 422 - Request Failed | The parameters were valid but the request failed. |
| 429 - Too Many Requests | All free quota used, max allowed concurrency reached, or domain throttled. |
| 500, 502, 503 - Server Errors | Something went wrong on Scrapfly's end. |
| 504 - Timeout | The request timed out. |

You can check out the [full error list](https://scrapfly.io/docs/extraction-api/errors) to learn more.
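A minimal Python sketch of acting on these rejection headers; `should_retry` is a hypothetical helper for illustration, not part of any Scrapfly SDK, and the header values shown are examples:

```python
def should_retry(status: int, headers: dict) -> bool:
    """Decide whether a failed request is worth retrying."""
    if status in (500, 502, 503, 504):
        return True  # server-side errors are generally safe to retry
    if status == 422:
        # X-Scrapfly-Reject-Retryable indicates if the request is retryable
        return headers.get("X-Scrapfly-Reject-Retryable", "no").lower() in ("yes", "true", "1")
    return False  # other 4xx errors need a fix on the caller's side

# Example rejection headers as documented above (values are illustrative).
headers = {
    "X-Scrapfly-Reject-Code": "ERR::EXTRACTION::OUT_OF_CAPACITY",
    "X-Scrapfly-Reject-Retryable": "yes",
}
```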


---

## Specification

 Scrapfly has loads of features and the best way to discover them is through the specification docs below. **For this example, you need to have the file `test.html` in your current directory.** In our example, `test.html` contains the content of the page <https://web-scraping.dev/product/1>.

We will use prompt extraction and ask to extract the product price in JSON format:

 ```
curl -X POST \
-H "content-type: text/html" \
"https://api.scrapfly.io/extraction?key=&extraction_prompt=Extract+the+price+and+currency+of+the+product+in+json+format" \
-d @test.html
```

The API returns the extracted data as JSON:

 ```
{"content_type":"application/json","data":{"currency":"$","price":"9.99"}}
```
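The response envelope carries a `content_type` field describing the payload and a `data` field with the extraction result. A minimal Python sketch consuming the example response above:

```python
import json

# The example response from the prompt extraction above.
raw = '{"content_type":"application/json","data":{"currency":"$","price":"9.99"}}'
result = json.loads(raw)

if result["content_type"] == "application/json":
    product = result["data"]
    price = float(product["price"])   # "9.99" -> 9.99
    currency = product["currency"]    # "$"
```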


> The following data formats are supported: `html`, `markdown`, `text`, `xml`. Unsupported formats cause [ERR::EXTRACTION::CONTENT\_TYPE\_NOT\_SUPPORTED](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::CONTENT_TYPE_NOT_SUPPORTED).

[`key`](#api_param_key) (required)

API Key to authenticate the call. Find it on your [dashboard](https://scrapfly.io/dashboard/webhook).

Example: `scp-live-xxx...`

[`body`](#api_param_body) (required)

Request body containing the content to extract data from. The format is specified by the `content-type` header or the `content_type` parameter.

Example: `-d @page.html`

[`content_type`](#api_param_content_type) (required)

Content type of the document in the body. Use this parameter or the `content-type` header; the parameter takes priority over the header.

Example values: `text/html` `text/markdown` `text/plain`

**Supported formats:**

- `text/html` - HTML documents
- `text/markdown` - Markdown content
- `text/plain` - Plain text
- `text/xml` - XML documents

**How to set:**

- **Parameter:** `content_type=text/html` (takes priority)
- **Header:** `Content-Type: text/html`

An incorrect content type causes [ERR::EXTRACTION::CONTENT\_TYPE\_NOT\_SUPPORTED](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::CONTENT_TYPE_NOT_SUPPORTED).

### Extraction Methods

[`extraction_template`](#api_param_extraction_template) (default: null) - [Docs](https://scrapfly.io/docs/extraction-api/rules-and-template)

Define an extraction template for structured data. Use an ephemeral (on-the-fly) template or a stored template by name.

Example: `ephemeral:base64(json_template)`

**Template types:**

- **Ephemeral:** `ephemeral:base64(json_template)` - Define template inline
- **Stored:** `my-template-name` - Reference a template saved in the dashboard

**Use cases:**

- Consistent structured data extraction across pages
- Reusable extraction logic for similar page structures
- Complex multi-field extraction with selectors

Cost: 5 API Credits per extraction request.
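As a sketch, an ephemeral template value can be assembled in Python. The `ephemeral:` prefix and base64 encoding follow the format above; the template fields themselves (selectors, field names) are illustrative only, so check the Template Extraction docs for the real schema:

```python
import base64
import json

# Illustrative template body; the real schema is documented in the
# Template Extraction docs.
template = {
    "source": "html",
    "selectors": [
        {"name": "price", "query": ".product-price", "type": "css"},
    ],
}

# The documented parameter format: "ephemeral:" + base64 of the JSON template.
# Note the value may still need URL encoding when placed in the query string.
encoded = base64.b64encode(json.dumps(template).encode()).decode()
extraction_template = f"ephemeral:{encoded}"
```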

[`extraction_prompt`](#api_param_extraction_prompt) (popular, default: null) - [Docs](https://scrapfly.io/docs/extraction-api/llm-prompt)

LLM instruction to extract data or ask questions. [Must be URL encoded](https://scrapfly.io/web-scraping-tools/urlencode).

Example: `Summarize this document`

**Use cases:**

- Question answering about page content
- Data extraction in JSON/CSV format
- Content summarization and analysis
- Data formatting and conversions

**Examples:**

- `Extract the product price and name in JSON`
- `Summarize this article in 3 bullet points`
- `List all links on the page`

Cost: 5 API Credits per extraction. The prompt must be URL encoded.
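Since the prompt must be URL encoded, here is a short Python sketch of preparing it; the API key is a placeholder:

```python
from urllib.parse import quote

prompt = "Extract the product price and name in JSON"
# quote() percent-encodes spaces and special characters so the prompt
# can be passed safely as the extraction_prompt query parameter.
encoded_prompt = quote(prompt)
url = f"https://api.scrapfly.io/extraction?key=YOUR_KEY&extraction_prompt={encoded_prompt}"
```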

[`extraction_model`](#api_param_extraction_model) (popular, default: null) - [Docs](https://scrapfly.io/docs/extraction-api/automatic-ai)

AI auto-extraction of structured data using predefined models.

Example values: `product` `article` `review_list`

**Available models:**

- `product` - E-commerce product data
- `article` - News/blog article content
- `review_list` - User reviews and ratings
- `real_estate_listing` - Property listings

**Features:**

- Pre-trained AI models for common data types
- Automatic content preparation
- Data quality metrics included in the response

Cost: 5 API Credits per extraction. Models return structured data with quality scores.

### Document Options

[`url`](#api_param_url) (default: null)

Base URL used to transform relative URLs in the document to absolute URLs. [Must be URL encoded](https://scrapfly.io/web-scraping-tools/urlencode).

Example: `https://example.com/page`

[`charset`](#api_param_charset) (default: auto)

Charset of the document. Use `auto` for detection. An incorrect charset causes display issues with accents and special characters.

Example values: `utf-8` `ascii` `auto`

[`timeout`](#api_param_timeout) (default: 60)

Maximum time in seconds for extraction processing. Useful for complex extractions on large documents.

Example values: `60` `120` `155`

**Configuration:**

- **Minimum:** 60 seconds
- **Maximum:** 155 seconds
- **Default:** 60 seconds

**Use cases:**

- Large documents requiring more processing time
- Complex extraction prompts with multiple fields
- Documents with extensive nested data structures

The extraction cost is only charged upon successful completion. If a timeout occurs, no charge is applied.

### Webhook

[`webhook_name`](#api_param_webhook_name) (popular, default: null) - [Docs](https://scrapfly.io/docs/extraction-api/webhook)

Queue the request and send the response to a webhook endpoint. Create webhooks in your [dashboard](https://scrapfly.io/dashboard/webhook).

Example: `my-webhook-name`

## Automatic Decompression

You can send a compressed document; `gzip`, `zstd` and `deflate` are supported. When sending a compressed document, you must announce it via the `Content-Encoding` header, for example: `Content-Encoding: gzip`

**Example of usage with curl:**

1. **Download a document to parse**

    ```
    curl https://web-scraping.dev/product/1 > product.html
    ```

2. **Compress your document**

    ```
    gzip -k product.html
    ```

    *This command will create `product.html.gz`*

3. **Send the compressed document to the API**

    ```
    curl -X POST \
    -H "content-type: text/html" \
    -H "content-encoding: gzip" \
    "https://api.scrapfly.io/extraction?key=&url=https%3A%2F%2Fweb-scraping.dev&extraction_prompt=Extract%20the%20product%20specification%20in%20json%20format" \
    --data-binary @product.html.gz
    ```
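The same steps can be done in-process rather than shelling out to `gzip`; a Python sketch with a placeholder key and document, where the real API call is left commented out:

```python
import gzip
import urllib.request

# Compress the document in-process (equivalent to `gzip -k product.html`).
html = b"<html><body><h1>Product 1</h1></body></html>"  # placeholder document
compressed = gzip.compress(html)

# The content-encoding header announces the compression to the API.
request = urllib.request.Request(
    "https://api.scrapfly.io/extraction?key=YOUR_KEY"
    "&extraction_prompt=Summarize%20this%20document",
    data=compressed,
    headers={"content-type": "text/html", "content-encoding": "gzip"},
    method="POST",
)
# urllib.request.urlopen(request)  # performs the real API call
```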
 
 ## Related Errors

 All related errors are listed below. You can see the full description and an example of error response on the [Errors section](https://scrapfly.io/docs/extraction-api/errors).

- [ERR::EXTRACTION::CONFIG\_ERROR](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::CONFIG_ERROR "Parameters sent to the API are not valid") - Parameters sent to the API are not valid
- [ERR::EXTRACTION::CONTENT\_TYPE\_NOT\_SUPPORTED](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::CONTENT_TYPE_NOT_SUPPORTED "The content type of the response is not supported for extraction") - The content type of the response is not supported for extraction
- [ERR::EXTRACTION::DATA\_ERROR](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::DATA_ERROR "Extracted data is invalid or has an issue") - Extracted data is invalid or has an issue
- [ERR::EXTRACTION::INVALID\_RULE](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::INVALID_RULE "The extraction rule is invalid") - The extraction rule is invalid
- [ERR::EXTRACTION::INVALID\_TEMPLATE](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::INVALID_TEMPLATE "The template used for extraction is invalid") - The template used for extraction is invalid
- [ERR::EXTRACTION::NO\_CONTENT](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::NO_CONTENT "Target response is empty") - Target response is empty
- [ERR::EXTRACTION::OPERATION\_TIMEOUT](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::OPERATION_TIMEOUT "Extraction operation timeout") - Extraction operation timeout
- [ERR::EXTRACTION::OUT\_OF\_CAPACITY](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::OUT_OF_CAPACITY "Not able to extract more data; the backend is out of capacity, retry later") - Not able to extract more data; the backend is out of capacity, retry later
- [ERR::EXTRACTION::TEMPLATE\_NOT\_FOUND](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::TEMPLATE_NOT_FOUND "The provided template does not exist") - The provided template does not exist
- [ERR::EXTRACTION::TIMEOUT](https://scrapfly.io/docs/extraction-api/error/ERR::EXTRACTION::TIMEOUT "The extraction took too long or did not have enough time to complete") - The extraction took too long or did not have enough time to complete