# How to add headers to every or some scrapy requests?

 by [Bernardas Alisauskas](https://scrapfly.io/blog/author/bernardas) Apr 23, 2023 1 min read [\#http](https://scrapfly.io/blog/tag/http) [\#scrapy](https://scrapfly.io/blog/tag/scrapy) 


There are several ways to add headers to scrapy requests. They can be set on each request manually:

```python
class MySpider(scrapy.Spider):
    def parse(self, response):
        yield scrapy.Request(..., headers={"x-token": "123"})
```



However, to automatically add headers to every outgoing scrapy request, the `DEFAULT_REQUEST_HEADERS` setting can be used:

```python
# settings.py
DEFAULT_REQUEST_HEADERS = {
    "User-Agent": "my awesome scrapy robot",
}
```
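These defaults apply project-wide. If only one spider should use different headers, Scrapy's `custom_settings` class attribute can override the project settings per spider. A sketch (the spider name and header values here are illustrative):

```python
# myspider.py
import scrapy


class ApiSpider(scrapy.Spider):
    name = "api"
    # overrides the project-wide settings.py values for this spider only
    custom_settings = {
        "DEFAULT_REQUEST_HEADERS": {
            "User-Agent": "my awesome scrapy robot",
            "Accept": "application/json",
        },
    }
```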



When more complex logic is needed, such as adding headers only to some requests or rotating a random `User-Agent` header, a downloader middleware is the best option:

```python
# middlewares.py
import random


class RandomUserAgentMiddleware:
    def __init__(self, user_agents):
        self.user_agents = user_agents

    @classmethod
    def from_crawler(cls, crawler):
        """retrieve user agent list from settings.USER_AGENTS"""
        user_agents = crawler.settings.get('USER_AGENTS', [])
        if not user_agents:
            raise ValueError('No user agents found in settings. Please provide a list of user agents in the USER_AGENTS setting.')
        return cls(user_agents)

    def process_request(self, request, spider):
        """attach random user agent to every outgoing request"""
        user_agent = random.choice(self.user_agents)
        request.headers.setdefault('User-Agent', user_agent)
        spider.logger.debug(f'Using User-Agent: {user_agent}')

# settings.py
DOWNLOADER_MIDDLEWARES = {
    # ...
    'myproject.middlewares.RandomUserAgentMiddleware': 760,
    # ...
}

USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:54.0) Gecko/20100101 Firefox/54.0',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36',
    # ...
]
```
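Because the middleware uses `setdefault`, a `User-Agent` set explicitly on a request is left untouched; only requests without one receive a rotated value. That precedence can be sketched with plain dicts (Scrapy's `request.headers` exposes a similar `setdefault`; the helper name and agent strings below are made up for illustration):

```python
import random

USER_AGENTS = ["agent-a/1.0", "agent-b/1.0"]


def pick_user_agent(headers, user_agents):
    """Mimic the middleware: only set User-Agent when it's missing."""
    headers.setdefault("User-Agent", random.choice(user_agents))
    return headers


# a request that already set its own User-Agent keeps it
print(pick_user_agent({"User-Agent": "custom/2.0"}, USER_AGENTS)["User-Agent"])  # custom/2.0

# a request without one receives a random agent from the pool
print(pick_user_agent({}, USER_AGENTS)["User-Agent"] in USER_AGENTS)  # True
```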



Note that if you're using [Scrapfly's scrapy SDK](https://scrapfly.io/docs/sdk/scrapy), some headers like the User-Agent string are managed automatically by the smart anti-blocking API.



 

    






## Related Articles

- [Web Scraping With Scrapy: The Complete Guide in 2026](https://scrapfly.io/blog/posts/web-scraping-with-scrapy)
- [How to Use cURL GET Requests](https://scrapfly.io/blog/posts/how-to-use-curl-get-requests)
- [How to Use cURL For Web Scraping](https://scrapfly.io/blog/posts/how-to-use-curl-for-web-scraping)

## Related Questions

- [ Q How to pass custom parameters to scrapy spiders? ](https://scrapfly.io/blog/answers/how-to-pass-parameters-to-scrapy-spiders-cli)
- [ Q How to rotate proxies in scrapy spiders? ](https://scrapfly.io/blog/answers/how-to-rotate-proxies-in-scrapy-spiders)
- [ Q How to use headless browsers with scrapy? ](https://scrapfly.io/blog/answers/how-to-use-headless-browsers-with-scrapy)
- [ Q How to pass data between scrapy callbacks in Scrapy? ](https://scrapfly.io/blog/answers/how-to-pass-data-between-scrapy-callbacks)
 
  



   


