# How to use XPath selectors in Python?

 by [Bernardas Alisauskas](https://scrapfly.io/blog/author/bernardas) Oct 31, 2022 1 min read [\#data-parsing](https://scrapfly.io/blog/tag/data-parsing) [\#python](https://scrapfly.io/blog/tag/python) [\#xpath](https://scrapfly.io/blog/tag/xpath) 


The most popular Python package implementing XPath selectors is [lxml](https://pypi.org/project/lxml/). Its `xpath()` method returns all matching elements:

```python
from lxml import etree

tree = etree.fromstring("""
<div>
    <a>link 1</a>
    <a>link 2</a>
</div>
""")
for result in tree.xpath("//a"):
    print(result.text)
# link 1
# link 2
```
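XPath expressions that end in `text()` or `@attribute` return plain strings rather than element objects, which saves a loop when you only need values. A minimal sketch (the `href` values here are made up for illustration):

```python
from lxml import etree

tree = etree.fromstring("""
<div>
    <a href="/page-1">link 1</a>
    <a href="/page-2">link 2</a>
</div>
""")
# expressions ending in @attribute or text() return strings directly
print(tree.xpath("//a/@href"))   # ['/page-1', '/page-2']
print(tree.xpath("//a/text()"))  # ['link 1', 'link 2']
```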



However, for web scraping the **recommended** approach is the [parsel](https://pypi.org/project/parsel/) package. It's built on top of `lxml` and provides more consistent behavior when working with HTML content:

```python
from parsel import Selector

selector = Selector("""
<div>
    <a>link 1</a>
    <a>link 2</a>
</div>
""")

selector.xpath("//a").getall()
# ['<a>link 1</a>', '<a>link 2</a>']
```





 

    



