# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)

##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Batch (Multi-URL Scraping)](https://scrapfly.io/docs/scrape-api/batch)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)
- [Native Browser MCP](https://scrapfly.io/docs/cloud-browser-api/mcp)
- [DevTools Protocol](https://scrapfly.io/docs/cloud-browser-api/cdp-reference)

##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [Rust](https://scrapfly.io/docs/sdk/rust)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Scrapy - Web Scraping Framework


## Introduction

**[Scrapy](https://scrapy.org/)** is a well-known web scraping framework written in Python, with wide community adoption. This integration replaces Scrapy's networking layer so that all requests go through our API. The official **Scrapy** documentation is available [here](https://docs.scrapy.org/en/latest/).

The Scrapy integration is part of our [Python SDK](https://scrapfly.io/docs/sdk/python). The source code is available on [GitHub](https://github.com/scrapfly/python-scrapfly), and the **scrapfly-sdk** package is available through [PyPI](https://pypi.org).

 ```
pip install 'scrapfly-sdk[scrapy]'
```


## What's Changed?

The [Python API reference](https://scrapfly.github.io/python-scrapfly/scrapfly/scrapy/index.html) documents each of these objects in detail.

### Objects

- `scrapy.http.Request` -> [`scrapfly.scrapy.request.ScrapflyScrapyRequest`](https://scrapfly.github.io/python-scrapfly/scrapfly/scrapy/request.html)
- `scrapy.http.Response` -> [`scrapfly.scrapy.response.ScrapflyScrapyResponse`](https://scrapfly.github.io/python-scrapfly/scrapfly/scrapy/response.html)
- `scrapy.spiders.Spider` -> [`scrapfly.scrapy.spider.ScrapflySpider`](https://scrapfly.github.io/python-scrapfly/scrapfly/scrapy/spider.html)
 
### Middlewares

The following middlewares are disabled because they are not relevant when using **Scrapfly**:

- `scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware`
- `scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware`
- `scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware`
- `scrapy.downloadermiddlewares.useragent.UserAgentMiddleware`
- `scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware`
- `scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware`
- `scrapy.downloadermiddlewares.redirect.RedirectMiddleware`
- `scrapy.downloadermiddlewares.cookies.CookiesMiddleware`
- `scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware`
 
The internal HTTP/HTTPS download handler is replaced:

- `scrapy.core.downloader.handlers.http11.HTTP11DownloadHandler` -> `scrapfly.scrapy.downloader.ScrapflyHTTPDownloader`
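
In Scrapy terms, swapping a download handler corresponds to a `DOWNLOAD_HANDLERS` settings override. A conceptual sketch of the equivalent configuration (the SDK takes care of this when you use `ScrapflySpider`; you normally do not set this yourself):

```python
# Conceptual settings equivalent of the handler swap described above.
# Shown only to illustrate Scrapy's DOWNLOAD_HANDLERS mechanism,
# not as configuration you need to add.
DOWNLOAD_HANDLERS = {
    "http": "scrapfly.scrapy.downloader.ScrapflyHTTPDownloader",
    "https": "scrapfly.scrapy.downloader.ScrapflyHTTPDownloader",
}
```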
 
## Stats Collector

All **Scrapfly** metrics are prefixed with `Scrapfly`. The following Scrapfly metric is available:

- **Scrapfly/api\_call\_cost** - (int) *Sum of billed API Credits against your quota*
 
> Complete documentation about stats collector is available here: <https://docs.scrapy.org/en/latest/topics/stats.html>
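
Since these values are exposed through Scrapy's standard stats collector, you can read the credit cost when the spider closes, e.g. via `self.crawler.stats.get_value("Scrapfly/api_call_cost")`. A minimal runnable sketch (the helper name is illustrative, and a plain dict stands in for the stats collector's `get_stats()` output):

```python
# Illustrative sketch: reading the billed-credit metric from crawl stats.
# In a real spider you would call
#   self.crawler.stats.get_value("Scrapfly/api_call_cost")
# in the spider's closed() callback; a plain dict stands in here so the
# logic runs standalone.

def api_call_cost(stats: dict) -> int:
    """Return the billed API credits recorded during the crawl."""
    return stats.get("Scrapfly/api_call_cost", 0)


print(api_call_cost({"Scrapfly/api_call_cost": 25, "item_scraped_count": 5}))  # 25
print(api_call_cost({}))  # 0 when no Scrapfly request was made
```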

## Settings Configuration

 ```
SCRAPFLY_API_KEY = '{{ YOUR_API_KEY }}'
CONCURRENT_REQUESTS = 2  # Adjust according to your plan's concurrency limit and your needs
```


## How do I use the equivalent of API parameters?

Check out [this section](https://scrapfly.io/docs/sdk/python#parameters) of the Python SDK documentation to see how to configure your calls.

## Troubleshooting

#### Scrapy Checkup

Run Scrapy's built-in checks to verify that your spiders load and their contracts pass:

 ```
scrapy check
```


#### Check API Key setting

Confirm that Scrapy resolves your API key from the project settings:

 ```
scrapy settings --get SCRAPFLY_API_KEY
```


#### tls\_process\_server\_certificate - certificate verify failed

If HTTPS requests fail with this certificate verification error, update your local CA bundle:

 ```
pip install --upgrade certifi
```


## Example: Scrapy Spider Demo

> The full example is available in our [ github repository ](https://github.com/scrapfly/python-scrapfly/tree/master/examples/scrapy/demo)

 ```
from scrapy import Item, Field
from scrapy.exceptions import CloseSpider
from scrapy.spidermiddlewares.httperror import HttpError
from twisted.python.failure import Failure

from scrapfly import ScrapeConfig
from scrapfly.errors import ScraperAPIError, ApiHttpServerError
from scrapfly.scrapy import ScrapflyScrapyRequest, ScrapflySpider, ScrapflyScrapyResponse


class Product(Item):

    name = Field()
    price = Field()
    description = Field()

    # scrapy.pipelines.images.ImagesPipeline
    image_urls = Field()
    images = Field()


class Demo(ScrapflySpider):
    name = "demo"

    allowed_domains = ["web-scraping.dev", "httpbin.dev"]
    start_urls = [
        ScrapeConfig("https://web-scraping.dev/product/1", render_js=True),
        ScrapeConfig("https://web-scraping.dev/product/2"),
        ScrapeConfig("https://web-scraping.dev/product/3"),
        ScrapeConfig("https://web-scraping.dev/product/4"),
        ScrapeConfig("https://web-scraping.dev/product/5", render_js=True),
        ScrapeConfig("https://httpbin.dev/status/403", asp=True, retry=False),  # fails on purpose
        ScrapeConfig("https://httpbin.dev/status/400"),  # fails on purpose - handled via scrapy.spidermiddlewares.httperror.HttpError
        ScrapeConfig("https://httpbin.dev/status/404"),  # fails on purpose - handled via scrapy.spidermiddlewares.httperror.HttpError
    ]

    def start_requests(self):
        for scrape_config in self.start_urls:
            yield ScrapflyScrapyRequest(scrape_config, callback=self.parse, errback=self.error_handler, dont_filter=True)

    def error_handler(self, failure: Failure):
        if failure.check(ScraperAPIError):  # the scrape itself failed
            error_code = failure.value.code # https://scrapfly.io/docs/scrape-api/errors#web_scraping_api_error

            if error_code == "ERR::ASP::SHIELD_PROTECTION_FAILED":
                self.logger.warning("The url %s must be retried" % failure.request.url)
        elif failure.check(HttpError):  # the scrape succeeded but the target server returned a non-success HTTP code (>= 400)
            response: ScrapflyScrapyResponse = failure.value.response

            if response.status == 404:
                self.logger.warning("The url %s returned a 404 http code - Page not found" % response.url)
            elif response.status == 500:
                raise CloseSpider(reason="The target server returned a 500 http code - Website down")

        elif failure.check(ApiHttpServerError): # Generic API error, config error, quota reached, etc
            self.logger.error(failure)
        else:
            self.logger.error(failure)

    def parse(self, response: ScrapflyScrapyResponse, **kwargs):
        item = Product()

        if response.status == 200:
            # make sure the url is absolute
            item['image_urls'] = [response.urljoin(response.css('img.product-img::attr(src)').get())]

        item['name'] = response.css('h3.product-title').get()
        item['price'] = response.css('span.product-price::text').get()
        item['description'] = response.css('p.product-description').get()

        yield item

```


Run the spider and export the scraped items to CSV:

 ```
scrapy crawl demo -o product.csv
```