# How to configure Python requests to use a proxy?

 by [Bernardas Alisauskas](https://scrapfly.io/blog/author/bernardas) Dec 19, 2022 1 min read [\#requests](https://scrapfly.io/blog/tag/requests) 

Python's [requests](https://pypi.org/project/requests/) package supports both HTTP and SOCKS5 proxies (SOCKS support requires the `requests[socks]` extra), which can be set per request or for the whole script:

```python
import requests

# proxy pattern is:
# scheme://username:password@IP:PORT
# For example:
# no auth HTTP proxy:
my_proxy = "http://160.11.12.13:1020"
# or SOCKS5 (requires `pip install requests[socks]`):
my_proxy = "socks5://160.11.12.13:1020"
# proxy with authentication
my_proxy = "http://my_username:my_password@160.11.12.13:1020"
# note: username and password must be URL-quoted if they contain special characters like "@":
from urllib.parse import quote
my_proxy = f"http://{quote('foo@bar.com')}:{quote('password@123')}@160.11.12.13:1020"


proxies = {
    # this proxy will be applied to all http:// urls
    'http': 'http://160.11.12.13:1020',
    # this proxy will be applied to all https:// urls (note the S)
    'https': 'http://160.11.12.13:1020',
    # we can also use proxy only for specific pages
    'https://httpbin.dev': 'http://160.11.12.13:1020',
}
requests.get("https://httpbin.dev/ip", proxies=proxies)
```
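To apply the same proxies to every request without passing the `proxies=` argument each time, they can also be set on a `requests.Session` object (the IP address below is a placeholder, as in the examples above):

```python
import requests

session = requests.Session()
# proxies set on the session apply to every request it makes
session.proxies = {
    "http": "http://160.11.12.13:1020",
    "https": "http://160.11.12.13:1020",
}
# session.get("https://httpbin.dev/ip")  # would be routed through the proxy
```

A session also reuses connections and persists cookies, which is generally what we want when scraping many pages through the same proxy.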



Note that proxies can also be set through the standard `*_PROXY` environment variables:

```shell
$ export HTTP_PROXY="http://160.11.12.13:1020"
$ export HTTPS_PROXY="http://160.11.12.13:1020"
$ export ALL_PROXY="socks://160.11.12.13:1020"
$ python
>>> import requests
>>> # this will use the proxies set above
>>> requests.get("https://httpbin.dev/ip")
```
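Conversely, if the environment defines `*_PROXY` variables we don't want to use, requests can be told to ignore them by disabling the session's `trust_env` flag (a small sketch):

```python
import requests

session = requests.Session()
# ignore HTTP_PROXY/HTTPS_PROXY/ALL_PROXY environment variables
# (and other environment settings such as .netrc)
session.trust_env = False
# session.get("https://httpbin.dev/ip")  # would connect directly, no proxy
```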



Finally, when web scraping with proxies, we should rotate proxies between requests. See our [how to rotate proxies](https://scrapfly.io/blog/posts/how-to-rotate-proxies-in-web-scraping) guide for more details. For a broader overview of proxies, see our [introduction to proxies in web scraping](https://scrapfly.io/blog/posts/introduction-to-proxies-in-web-scraping).
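As a minimal sketch of the rotation idea, a random proxy can be picked from a pool for each request (the pool addresses and the `get_with_random_proxy` helper below are hypothetical placeholders):

```python
import random
import requests

# placeholder proxy pool - replace with real proxy endpoints
proxy_pool = [
    "http://160.11.12.13:1020",
    "http://160.11.12.14:1020",
    "http://160.11.12.15:1020",
]

def get_with_random_proxy(url):
    # pick a different proxy for each request
    proxy = random.choice(proxy_pool)
    return requests.get(url, proxies={"http": proxy, "https": proxy})

# get_with_random_proxy("https://httpbin.dev/ip")
```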



 

    



