# How to pass data from start_requests to parse callbacks in scrapy?

 by [Bernardas Alisauskas](https://scrapfly.io/blog/author/bernardas) Apr 20, 2023 1 min read [\#scrapy](https://scrapfly.io/blog/tag/scrapy) 


 

 

Scrapy is a callback-driven web scraping framework, which can make it difficult to pass data from the initial `start_requests()` method to the `parse()` callback and any callbacks that follow.

To start, to transfer data from the initial `start_requests()` method to the `parse()` callback, the `Request.meta` attribute can be used:

```python
import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'

    def start_requests(self):
        urls = [...]
        for index, url in enumerate(urls):
            yield scrapy.Request(url, meta={'index': index})

    def parse(self, response):
        print(response.url)
        print(response.meta['index'])
```



In the example above we use the `Request.meta` parameter to pass along the index of each URL that has been scheduled for scraping.

We can continue with this `Request.meta` pipeline and pass data between callbacks indefinitely until we reach the final callback where we can return the final item:

```python
import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'

    def start_requests(self):
        urls = [...]
        for index, url in enumerate(urls):
            yield scrapy.Request(url, meta={'item': {'index': index}})

    def parse(self, response):
        item = response.meta['item']
        item['price'] = 100
        yield scrapy.Request(".../reviews", meta={"item": item}, callback=self.parse_reviews)

    def parse_reviews(self, response):
        item = response.meta['item']
        item['reviews'] = ['awesome']
        yield item
```



In the example above we've extended our chain to generate a single item from two requests.

Note that when chaining callbacks toward a single result item, we should be diligent about handling failures with the `errback` parameter, because the item can be lost at any step along the way.

Additionally, it's best to pass immutable or low-reference data between callbacks to avoid unexpected behavior and potential memory leaks.

