# How to Rate Limit Async Requests in Python

 by [Bernardas Alisauskas](https://scrapfly.io/blog/author/bernardas) Apr 10, 2026 3 min read [\#blocking](https://scrapfly.io/blog/tag/blocking) [\#python](https://scrapfly.io/blog/tag/python) 


When web scraping, we're often limited by the website's technical capabilities. We can't scrape too fast without being blocked or overwhelming smaller websites.

Asynchronous HTTP clients like [httpx](https://pypi.org/project/httpx/) allow us to easily make hundreds of requests per second but don't provide ways to fine-tune our scraping speed. So, in this quick tutorial, we'll be taking a look at how to rate-limit asynchronous HTTP connections to slow down our scrapers.

## Key Takeaways

This article covers the core techniques for rate limiting asynchronous Python requests, from connection pooling to precise request throttling:

- Implement aiometer library for precise request rate limiting with concurrent connection control
- Configure httpx.Limits for connection pooling and concurrent request management
- Use asyncio.Semaphore for advanced throttling with custom rate limiting algorithms
- Implement request spacing and delay mechanisms to avoid overwhelming target servers
- Configure specialized tools like ScrapFly for automated rate limiting with anti-blocking features
- Monitor request patterns and implement adaptive throttling for optimal scraping performance


## Python httpx

HTTPX is the most popular asynchronous HTTP client in Python. It can be installed using the `pip install` terminal command:

```shell
$ pip install httpx
```



HTTPX supports both synchronous and asynchronous HTTP clients. It's unlikely that we need to throttle synchronous connections as they are slow to begin with, so let's take a look at the throttling options we have for the asynchronous client.

To limit the httpx client, we can use the `httpx.Limits` object:

```python
import httpx
session = httpx.AsyncClient(
    limits=httpx.Limits(
        max_connections=5  # we can change max connection count here
    )
)
```



However, limiting scraping by connection count is very inaccurate - on slower websites 5 connections might only manage a few requests per minute, while on fast ones it could reach hundreds of requests per minute.

To limit httpx-powered scrapers we need an additional layer that tracks requests themselves rather than connections. Let's take a look at the most popular throttling library - `aiometer`.

## Python aiometer

The most popular way to limit asynchronous tasks in Python is [aiometer](https://pypi.org/project/aiometer/), which can be installed using the `pip install` terminal command:

```shell
$ pip install aiometer
```



Then we can schedule all of our requests to run through the aiometer limiter:

```python
import asyncio
from time import time

import aiometer
import httpx

session = httpx.AsyncClient()


async def scrape(url):
    response = await session.get(url)
    return response


async def run():
    _start = time()
    urls = ["https://httpbin.dev/html" for i in range(10)]
    # note: run_on_each doesn't return results - use aiometer.run_all
    # or aiometer.amap instead when you need the responses back
    await aiometer.run_on_each(
        scrape,
        urls,
        max_per_second=1,  # here we can set max rate per second
    )
    print(f"finished {len(urls)} requests in {time() - _start:.2f} seconds")


if __name__ == "__main__":
    asyncio.run(run())

# will print:
# finished 10 requests in 9.54 seconds
```



In our small example scraper we used the `aiometer.run_on_each` function to throttle 10 scrape requests to 1 request per second. With this one command we can limit our scraper to an exact requests-per-second speed!
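If you'd rather not add a dependency, the same request spacing can be hand-rolled with `asyncio.Semaphore` for a concurrency cap plus staggered `asyncio.sleep` delays for the per-second rate. This is an illustrative sketch, not aiometer's implementation; the `fake_scrape` coroutine stands in for a real HTTP call:

```python
import asyncio
from time import time


async def rate_limited_gather(coros, rate=1, max_concurrent=5):
    # at most `rate` task starts per second, `max_concurrent` tasks in flight
    semaphore = asyncio.Semaphore(max_concurrent)
    interval = 1 / rate

    async def throttled(coro, delay):
        await asyncio.sleep(delay)  # stagger start times to space out requests
        async with semaphore:  # cap how many run at once
            return await coro

    return await asyncio.gather(
        *(throttled(coro, i * interval) for i, coro in enumerate(coros))
    )


async def fake_scrape(url):
    await asyncio.sleep(0.1)  # stand-in for an HTTP request
    return url


async def main():
    _start = time()
    urls = [f"https://httpbin.dev/html?page={i}" for i in range(5)]
    results = await rate_limited_gather(
        (fake_scrape(url) for url in urls), rate=10
    )
    print(f"finished {len(results)} tasks in {time() - _start:.2f} seconds")
    return results


results = asyncio.run(main())
```

Here `rate=10` starts one task every 0.1 seconds, so 5 tasks finish in roughly half a second while never exceeding 5 concurrent requests.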

## Scraping Faster Without Rate Limiting?

Some websites impose extremely low speed limits on web scrapers, making large-scale data collection impractical. To handle this, [web scraping APIs](https://scrapfly.io/web-scraping-api) like ScrapFly can be used to scrape faster and avoid blocking.

ScrapFly offers dozens of different features that can help with scraper scaling, like:

- [Anti Scraping Protection Bypass](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [190M Pool of Residential or Mobile Proxies](https://scrapfly.io/docs/scrape-api/proxy)

To use ScrapFly in Python, install the [ScrapFly-sdk](https://scrapfly.io/docs/sdk/python) package using the `pip install scrapfly-sdk` terminal command. Then targets can be scraped without blocking:

```python
from scrapfly import ScrapflyClient, ScrapeConfig
client = ScrapflyClient(
    key="YOUR SCRAPFLY KEY",
    max_concurrency=10,  # we can limit concurrent requests if needed
)
result = client.scrape(ScrapeConfig(
    url="https://httpbin.dev/ip",
    # optional features like:
    # - we can select specific proxy country
    country="GB",
    # - and enable anti scraping protection bypass:
    asp=True,
    # see https://scrapfly.io/docs/scrape-api/getting-started for more
))
```


