How to scrape images from a website?

To scrape images from a website, we can use Python with an HTML parsing library like BeautifulSoup to select all <img> elements and save the files they point to.
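The selection step on its own can be sketched synchronously (a minimal example against a hard-coded HTML snippet, before the full async version below):

```python
from bs4 import BeautifulSoup

html = """
<html><body>
  <img src="/static/logo.png">
  <img src="https://example.com/banner.jpg">
  <img alt="no source">
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
# collect the src attribute of every <img> that actually has one
sources = [img["src"] for img in soup.find_all("img") if img.get("src")]
print(sources)  # ['/static/logo.png', 'https://example.com/banner.jpg']
```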

Here's an example using httpx and BeautifulSoup (install with pip install httpx beautifulsoup4):

import asyncio
import httpx
from bs4 import BeautifulSoup
from pathlib import Path

async def download_image(url, filepath, client):
    response = await client.get(url)
    filepath.write_bytes(response.content)  # save the image bytes to disk
    print(f"Downloaded {url} to {filepath}")

async def scrape_images(url):
    download_dir = Path('images')
    download_dir.mkdir(parents=True, exist_ok=True)

    async with httpx.AsyncClient() as client:
        response = await client.get(url)
        soup = BeautifulSoup(response.text, "html.parser")
        download_tasks = []
        for img_tag in soup.find_all("img"):
            img_url = img_tag.get("src")  # get image url
            if img_url:
                img_url = response.url.join(img_url)  # resolve relative URLs against the page URL
                img_filename = download_dir / Path(str(img_url)).name
                download_tasks.append(download_image(img_url, img_filename, client))
        await asyncio.gather(*download_tasks)

# example - scrape all scrapfly blog images:
url = ""
asyncio.run(scrape_images(url))
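A detail worth noting: src values are often relative (e.g. /assets/logo.png), which is why the code resolves them against the page URL before downloading. The standard-library equivalent of httpx's URL joining is urllib.parse.urljoin (the page URL below is a hypothetical example):

```python
from urllib.parse import urljoin

page_url = "https://example.com/blog/post"  # hypothetical page URL

# root-relative paths resolve against the site root
print(urljoin(page_url, "/assets/logo.png"))  # https://example.com/assets/logo.png
# plain relative paths resolve against the page's directory
print(urljoin(page_url, "images/photo.jpg"))  # https://example.com/blog/images/photo.jpg
# already-absolute URLs pass through unchanged
print(urljoin(page_url, "https://cdn.example.com/x.png"))  # https://cdn.example.com/x.png
```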

Above, we use httpx.AsyncClient to first retrieve the target page's HTML. Then we extract the src attribute of every <img> element and resolve it to an absolute URL. Finally, we download all images concurrently and save them to the ./images directory.
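One caveat: many sites lazy-load images, so the real URL sits in an attribute like data-src while src holds a placeholder. The attribute names vary by site, so the ones below are assumptions to verify against the target page; a sketch of a fallback lookup:

```python
from bs4 import BeautifulSoup

# hypothetical lazy-loaded image markup
html = '<img data-src="https://example.com/real.jpg" src="placeholder.gif">'
soup = BeautifulSoup(html, "html.parser")

img = soup.find("img")
# prefer common lazy-loading attributes, falling back to plain src
img_url = img.get("data-src") or img.get("data-lazy-src") or img.get("src")
print(img_url)  # https://example.com/real.jpg
```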

Question tagged: Python
