# How to scrape tables with BeautifulSoup?

 by [Bernardas Alisauskas](https://scrapfly.io/blog/author/bernardas) Apr 18, 2026 2 min read [\#beautifulsoup](https://scrapfly.io/blog/tag/beautifulsoup) [\#data-parsing](https://scrapfly.io/blog/tag/data-parsing) 


 

 

HTML tables are common across web pages, presenting structured data much like a data frame. In this guide, we'll explain how to scrape an HTML table with BeautifulSoup as the parsing library through a real-life example. Let's get started!



## Setup

Before we start, let's ensure the required libraries are installed. First, let's install the BeautifulSoup package, along with the `lxml` parser it will use, through the `pip` terminal command:

```shell
pip install beautifulsoup4 lxml
```



As for the HTTP client, we'll be using the built-in `requests` Python library. However, it can be replaced with any other client, such as [httpx](https://scrapfly.io/blog/posts/web-scraping-with-python-httpx/).
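Note that the examples below pass `"lxml"` as the parser. If installing `lxml` isn't an option, Python's built-in `html.parser` works as a drop-in replacement. A minimal check:

```python
from bs4 import BeautifulSoup

# "html.parser" ships with Python, so no extra install is needed
soup = BeautifulSoup("<table><tr><td>cell</td></tr></table>", "html.parser")
print(soup.find("td").text)  # cell
```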



## Retrieve Table Data

To start, let's have a look at our target table on [web-scraping.dev/product/1](https://web-scraping.dev/product/1):



We'll request the above page to retrieve the table data available in the HTML:

```python
from bs4 import BeautifulSoup
import requests 

response = requests.get("https://web-scraping.dev/product/1")
html = response.text

# Create the soup object
soup = BeautifulSoup(html, "lxml")
```



Above, we start by requesting the target webpage to retrieve the HTML tables. Then, we use BeautifulSoup to create a parser object.



## Parse HTML Tables

The BeautifulSoup package supports [CSS](https://scrapfly.io/blog/posts/parsing-html-with-css/) selectors alongside its find methods for selecting HTML elements. Here, we'll target the table by its class and then iterate over its rows:

```python
from bs4 import BeautifulSoup
import requests 

response = requests.get("https://web-scraping.dev/product/1")
html = response.text

soup = BeautifulSoup(html, "lxml")

# First, select the desired table element (the 2nd one on the page)
table = soup.find_all('table', {'class': 'table-product'})[1]

headers = []
rows = []
for i, row in enumerate(table.find_all('tr')):
    if i == 0:
        headers = [el.text.strip() for el in row.find_all('th')]
    else:
        rows.append([el.text.strip() for el in row.find_all('td')])
```



Above, we first use the `find_all` method to find all table elements and select the second table on the page. Then, we iterate over the table's rows, extracting each cell's text content. The `i == 0` condition separates the header cells, as they appear in the table's first row.
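The same rows can also be matched with BeautifulSoup's `select` method using CSS selector syntax. Here's a minimal sketch on an inline HTML snippet (hypothetical markup mirroring the product table's structure):

```python
from bs4 import BeautifulSoup

# Hypothetical markup mirroring the structure of the product table
html = """
<table class="table-product">
  <tr><th>Version</th><th>Package Weight</th></tr>
  <tr><td>Pack 1</td><td>1,00 kg</td></tr>
  <tr><td>Pack 2</td><td>2,11 kg</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")

# "table.table-product tr" matches every row of the table
rows = soup.select("table.table-product tr")
headers = [th.text for th in rows[0].select("th")]
data = [[td.text for td in row.select("td")] for row in rows[1:]]
print(headers)  # ['Version', 'Package Weight']
print(data)     # [['Pack 1', '1,00 kg'], ['Pack 2', '2,11 kg']]
```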

Here's what the extracted results should look like:

```python
print(headers)
['Version', 'Package Weight', 'Package Dimension', 'Variants', 'Delivery Type']

for row in rows:
    print(row)
['Pack 1', '1,00 kg', '100x230 cm', '6 available', '1 Day shipping']
['Pack 2', '2,11 kg', '200x460 cm', '6 available', '1 Day shipping']
['Pack 3', '3,22 kg', '300x690 cm', '6 available', '1 Day shipping']
['Pack 4', '4,33 kg', '400x920 cm', '6 available', '1 Day shipping']
['Pack 5', '5,44 kg', '500x1150 cm', '6 available', '1 Day shipping']
```
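With headers and rows separated, each row can be zipped into a dictionary for easier downstream use. A quick sketch, using the values shown above as literals:

```python
headers = ['Version', 'Package Weight', 'Package Dimension', 'Variants', 'Delivery Type']
rows = [
    ['Pack 1', '1,00 kg', '100x230 cm', '6 available', '1 Day shipping'],
    ['Pack 2', '2,11 kg', '200x460 cm', '6 available', '1 Day shipping'],
]

# Pair each header with its cell value, producing one dict per table row
records = [dict(zip(headers, row)) for row in rows]
print(records[0]["Package Weight"])  # 1,00 kg
```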



For more details on parsing with BeautifulSoup, refer to our dedicated guide.

[How to Parse Web Data with Python and Beautifulsoup: Beautifulsoup is one of the most popular libraries in web scraping. In this tutorial, we'll take a hands-on overview of how to use it, what it is good for, and explore a real-life web scraping example.](https://scrapfly.io/blog/posts/web-scraping-with-python-beautifulsoup)


