# How to Scrape Google SEO Keyword Data and Rankings

 by [Mazen Ramadan](https://scrapfly.io/blog/author/mazen) Apr 10, 2026 10 min read [\#project](https://scrapfly.io/blog/tag/project) [\#python](https://scrapfly.io/blog/tag/python) [\#seo](https://scrapfly.io/blog/tag/seo) 


SEO keywords are an essential part of Search Engine Optimization for ranking higher on search results. However, identifying the right SEO keywords can be quite challenging, involving lots of research and analysis. All of which can be made easier by applying web scraping for SEO.

In this article, we'll take a look at SEO web scraping - what it is and how to use it for better SEO keyword optimization. We'll also create an SEO keyword scraper that scrapes Google search rankings and suggested keywords. Let's dive in!

## Key Takeaways

This guide covers scraping Google SEO keyword data with Python, from search ranking analysis to keyword research automation.

- Implement Google search scraping with Python for keyword research and ranking analysis
- Configure SEO keyword extraction including short-tail, long-tail, and LSI keywords
- Implement search ranking monitoring and competitor analysis for SEO optimization
- Configure proxy rotation and fingerprint management to avoid detection and rate limiting
- Use specialized tools like ScrapFly for automated Google SEO scraping with anti-blocking features
- Implement data analysis and visualization for comprehensive SEO keyword research workflows

[**View Source Code:** github.com/scrapfly/scrapfly-scrapers/tree/main/google-scraper](https://github.com/scrapfly/scrapfly-scrapers/tree/main/google-scraper)


## What are SEO keywords?

Search engine optimization (SEO) keywords are words or phrases that users type into search engines to get results. These keywords are generally categorized into three types:

- Short-tail keywords are broad, generic popular terms that have high search volume.
- Long-tail keywords are more specific, less popular terms that have low search volume.
- Latent Semantic Indexing (LSI) keywords are semantically related to the main keyword. For example, "web scraping" and "web scraper" are LSI keywords that are related to the main "web scraping google" short-tail keyword.

Each keyword can be a single word, a multi-word term or even a long phrase. However, all keywords are processed to remove stopwords and punctuation. For example, the keyword "web scraping using typescript" will be processed as "web scraping typescript", as the word "using" is a low-value word known as a stopword.
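To make this concrete, here is a minimal sketch of this kind of keyword normalization: strip punctuation and drop low-value stopwords. The stopword set below is a tiny illustrative sample, not Google's actual list:

```python
import re

# A small illustrative stopword set - real search engines use much larger lists
STOPWORDS = {"using", "the", "a", "an", "for", "with", "of", "to", "in"}

def normalize_keyword(keyword: str) -> str:
    # Lowercase, strip punctuation, then filter out stopwords
    words = re.findall(r"[a-z0-9]+", keyword.lower())
    return " ".join(w for w in words if w not in STOPWORDS)

print(normalize_keyword("web scraping using typescript"))  # web scraping typescript
```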

Search engines use SEO keywords to rank results on the search engine results pages (SERPs). Thus, SEO keyword-optimized websites get more traffic and revenue than other websites as they rank higher in the search results.

So, how exactly do search engines use SEO keywords to rank results?



## How Does SEO Affect Search Results?

Search engine result pages (SERPs) are what users see when they search for keywords using search engines. These search results are ranked by the search engine based on several factors, including:

- **Relevance** of the search keyword to the topic.
- **Content quality** on the web page, including readability and information accuracy.
- **Geographic distance** between users and local business locations.
- **Compatibility** and responsive design.
- **Domain authority**, including the number of quality backlinks, domain age and the overall reputation of the website.
- **Updated content** as search engines may set higher ranks for newly updated content for certain topics like news or trends.

Search engine results are also personalized based on the user's history and location, expressed through HTTP cookies and the client's IP address.
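Because of this personalization, scrapers usually pin the language and location explicitly in the search URL. Here is a minimal sketch using the `hl` (interface language) and `gl` (country) query parameters; note that `gl` is an extra assumption here, as this article's scraper only sets `hl`:

```python
from urllib.parse import urlencode

def build_search_url(query: str, language: str = "en", country: str = "us", page: int = 1) -> str:
    # hl pins the interface language; gl pins the country edition of results
    params = {"q": query, "hl": language, "gl": country}
    if page > 1:
        params["start"] = 10 * (page - 1)  # Google paginates in steps of 10
    return "https://www.google.com/search?" + urlencode(params)

print(build_search_url("web scraping typescript", page=2))
# https://www.google.com/search?q=web+scraping+typescript&hl=en&gl=us&start=10
```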

This means that by scraping SEO keyword rankings, we can better understand how search engines rank results and boost our own rankings.

In the following sections, we'll scrape Google search results for keyword rankings along with SEO metadata. But before that, let's take a look at the tools we'll use.



## Setup

In this SEO web scraping guide, we'll scrape SEO keywords related to the [Typescript web scraping](https://scrapfly.io/blog/posts/ultimate-intro-to-web-scraping-with-typescript) topic. We'll also scrape SEO metadata, such as suggestions and related questions.

To do that, we'll use [httpx](https://pypi.org/project/httpx/) for sending HTTP requests and [parsel](https://pypi.org/project/parsel/) for [parsing HTML using XPath](https://scrapfly.io/blog/posts/parsing-html-with-xpath). These Python packages can be installed using the `pip` terminal command:

```shell
pip install httpx parsel
```





## Scrape SEO Keyword Rankings

To scrape SEO keyword rankings, we'll search for SEO keywords to get the search results for each keyword. Then, we'll [scrape Google search](https://scrapfly.io/blog/posts/how-to-scrape-google) page to get the rank of each result box. With this scraping tool, we'll be able to monitor competitors and gain insights to select SEO keywords effectively.

For comprehensive competitor analysis beyond keyword rankings, tools like [SimilarWeb scraping](https://scrapfly.io/blog/posts/how-to-scrape-similarweb) provide website traffic analytics, audience insights, and market intelligence to complement your SEO strategy.

Let's start by creating an `httpx` client and a CSV file for storing the scraping results:

```python
import httpx
import csv

client = httpx.Client(
    headers={
        "Accept": "*/*",
        "Accept-Language": "en-US,en;q=0.9",
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36"
    },
)

filecsv = open("seo_ranks.csv", "w", encoding="utf8")
csv_columns = ["keyword", "title", "text", "date", "domain", "url", "position"]
writer = csv.DictWriter(filecsv, fieldnames=csv_columns)
writer.writeheader()
```



Above, we create a CSV file named `seo_ranks.csv` and add the fields we'll scrape as column names. For our `httpx.Client` we use basic headers like `Accept` and [User-Agent string](https://scrapfly.io/blog/posts/user-agent-header-in-web-scraping) to avoid being instantly blocked.

Next, we'll iterate over SEO keywords and scrape ranking results for each keyword:

```python
from parsel import Selector
from urllib.parse import quote
from typing import List


def scrape_seo_ranks(keywords: List[str], max_pages: int):
    for keyword in keywords:
        position = 0
        # Iterate over Google search pages
        for page in range(1, max_pages + 1):
            print(f"scraping keyword {keyword} at page number {page}")

            url = f"https://www.google.com/search?hl=en&q={quote(keyword)}" + (
                f"&start={10*(page-1)}" if page > 1 else ""
            )
            response = client.get(url=url)
            selector = Selector(text=response.text)
            for result_box in selector.xpath(
                "//h1[contains(text(),'Search Results')]/following-sibling::div[1]/div"
            ):
                # Scrape search result boxes only
                try:
                    title = result_box.xpath(".//h3/text()").get()
                    text = "".join(
                        result_box.xpath(".//div[@data-sncf]//text()").getall()
                    )
                    date = text.split("—")[0] if len(text.split("—")) > 1 else "None"
                    link = result_box.xpath(".//h3/../@href").get()
                    domain = link.split("/")[2].replace("www.", "")
                    position += 1
                    writer.writerow(
                        {
                            "keyword": keyword,
                            "title": title,
                            "text": text,
                            "date": date,
                            "domain": domain,
                            "url": link,
                            "position": position,
                        }
                    )
                except (AttributeError, IndexError):
                    # Non-result boxes (ads, widgets) lack these elements, so skip them
                    continue

# Example use:
scrape_seo_ranks(["Web Scraping using Typescript", "Typescript web scraper"], 1)
```



Here, we use `httpx` to send a request to the first Google search page of each SEO keyword. Then, since Google's HTML structure is complex and unstable, we rely on [text-based XPath parsing](https://scrapfly.io/blog/answers/how-to-select-elements-by-text-in-xpath) with `parsel`. Finally, we save the scraped data into the CSV file we created earlier:



*Example CSV output*

Cool! We got the search result rankings for each SEO keyword along with the title, text, date, domain and URL.
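Beyond eyeballing the CSV, we can aggregate the rankings. Below is a self-contained sketch that computes the average position per domain from rows shaped like our `seo_ranks.csv` columns (the sample rows are inlined here so the snippet runs on its own; they are illustrative, not real results):

```python
import csv
import io
from collections import defaultdict

# Illustrative rows shaped like the seo_ranks.csv columns from the scraper above
sample_csv = """keyword,title,text,date,domain,url,position
typescript web scraper,A,..,None,scrapfly.io,https://scrapfly.io/a,1
typescript web scraper,B,..,None,example.com,https://example.com/b,2
typescript web scraper,C,..,None,scrapfly.io,https://scrapfly.io/c,3
"""

# Group ranking positions by domain
positions = defaultdict(list)
for row in csv.DictReader(io.StringIO(sample_csv)):
    positions[row["domain"]].append(int(row["position"]))

# Average ranking position per domain - lower means better visibility
averages = {domain: sum(p) / len(p) for domain, p in positions.items()}
print(averages)  # {'scrapfly.io': 2.0, 'example.com': 2.0}
```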

Now that our SEO keyword scraper can scrape search result ranks, let's extend it to scrape SEO keyword suggestions as well.



## Scrape SEO Keyword Suggestions

Google search suggestions include topics related to the search keyword, surfaced in sections like "People also ask" and "Related searches":



On the results page, Google suggests additional keywords and related questions. We can use this data to extend our own keyword pool and include related questions in our content to rank higher in the search results.

Let's extend our SEO keyword scraper to scrape these fields:

```python
import httpx
from parsel import Selector
import json
from urllib.parse import quote
from collections import defaultdict

client = httpx.Client(
    headers={
        "Accept": "*/*",
        "Accept-Language": "en-US,en;q=0.9",
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36",
    },
)

def scrape_keyword_metadata(keyword: str, page: int):
    related_search = []
    # Create a dictionary container for the results
    results = defaultdict(list)

    url = f"https://www.google.com/search?hl=en&q={quote(keyword)}" + (
        f"&start={10*(page-1)}" if page > 1 else ""
    )
    request = client.get(url=url)
    selector = Selector(text=request.text)
    # Get all questions in a list
    people_ask_for = selector.xpath(
        ".//div[contains(@class, 'related-question-pair')]//text()"
    ).getall()
    # Iterate over keywords suggestions
    for suggestion in selector.xpath(
        "//div[div/div/span[contains(text(), 'Related searches')]]/following-sibling::div//a"
    ):
        # Extract all suggestions text and append to the list
        related_search.append("".join(suggestion.xpath(".//text()").getall()))

    # Add each field result to result dictionary
    results["people_ask_for"].extend(people_ask_for)
    results["related_search"].extend(related_search)

    # Print the result in a JSON format
    print(json.dumps(dict(results), indent=2))

scrape_keyword_metadata("Web Scraping using Typescript", 1)
```



Above, we use XPath selectors to extract text in the "people ask for" and "related search" sections. Then, we append the text we got to the results dictionary and print it in JSON format. Here is the result we got:

```json
{
  "people_ask_for": [
    "Which language is best for web scraping?",
    "Can I web scrape with JavaScript?",
    "Do hackers use web scraping?",
    "Is web scraping AI legal?"
  ],
  "related_search": [
    "Web scraping using typescript python",
    "Web scraping using typescript node js",
    "Web scraping using typescript javascript",
    "Web scraping using typescript github",
    "typescript web scraper github",
    "web scraping in angular",
    "cheerio",
    "web scraping using node js"
  ]
}
```
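These related searches and questions can feed straight back into our keyword research. Here is a minimal sketch of merging suggestions into a deduplicated keyword pool (the `extend_keyword_pool` helper is our own illustration, not part of the scraper above):

```python
def extend_keyword_pool(pool: list, suggestions: list) -> list:
    # Case-insensitive dedup while preserving the original order and casing
    seen = {k.lower() for k in pool}
    for suggestion in suggestions:
        if suggestion.lower() not in seen:
            pool.append(suggestion)
            seen.add(suggestion.lower())
    return pool

pool = ["Web Scraping using Typescript"]
extend_keyword_pool(pool, ["web scraping using typescript", "typescript web scraper github"])
print(pool)  # ['Web Scraping using Typescript', 'typescript web scraper github']
```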



We have seen how to scrape SEO keyword data to track search ranks and optimize SEO. However, our SEO web scraping code isn't very scalable. And since Google is notorious for blocking web scrapers, let's see how to scrape SEO keywords at scale next!



## Powering Up with ScrapFly

Web scraping Google isn't particularly difficult, but scaling up such scrapers can be a challenge, and this is where Scrapfly can lend a hand!



For example, to use our SEO web scraping code with [ScrapFly SDK](https://scrapfly.io/docs/sdk/python), we can easily scrape SEO keywords in a specific location without the risk of getting blocked:

```python
from scrapfly import ScrapeConfig, ScrapflyClient
from urllib.parse import quote
from collections import defaultdict
import json

scrapfly = ScrapflyClient(key="Your API key")

keyword = "Web Scraping using Typescript"
page = 1

related_search = []
results = defaultdict(list)
url = f"https://www.google.com/search?hl=en&q={quote(keyword)}" + (
    f"&start={10*(page-1)}" if page > 1 else ""
)
result = scrapfly.scrape(
    scrape_config=ScrapeConfig(
        url=url,
        headers={
            "Accept": "*/*",
            "Accept-Language": "en-US,en;q=0.9",
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36",
        },
        # Enable the anti-scraping protection bypass
        asp=True,
        # Set proxies to a specific country
        country="US",
    )
)

# Parse HTML using the pre-added selector
people_ask_for = result.selector.xpath(
    ".//div[contains(@class, 'related-question-pair')]//text()"
).getall()
for suggestion in result.selector.xpath(
    "//div[div/div/span[contains(text(), 'Related searches')]]/following-sibling::div//a"
):
    related_search.append("".join(suggestion.xpath(".//text()").getall()))

results["people_ask_for"].extend(people_ask_for)
results["related_search"].extend(related_search)

print(json.dumps(dict(results), indent=2))
```





## FAQ

### What is web scraping for SEO?

SEO web scraping means scraping Google search results, including search ranks and suggested searches, and using this data for SEO optimization insights.







### How to scrape Google search results?

Google search results can be scraped using [XPath](https://scrapfly.io/blog/posts/parsing-html-with-xpath#what-is-xpath) and [CSS selectors](https://scrapfly.io/blog/posts/parsing-html-with-css#what-are-css-selectors) like we did in the [SEO rankings scraper](#scrape-seo-keyword-rankings "go to SEO ranking scraper"). However, Google is known for its complicated HTML structure and blocking scrapers. For that, see our dedicated articles on [scraping google search](https://scrapfly.io/blog/posts/how-to-scrape-google) and [scraping without getting blocked](https://scrapfly.io/blog/posts/how-to-bypass-anti-bot-protection-when-web-scraping).







### How can I scale Google SEO keyword scraping without getting blocked?

Google aggressively blocks automated requests, so scaling SEO scrapers requires proxy rotation, realistic headers, and request throttling. Using [Scrapfly's web scraping API](https://scrapfly.io/web-scraping-api) with its [anti-scraping protection bypass](https://scrapfly.io/docs/scrape-api/anti-scraping-protection) lets you scrape Google search results at scale from any target country without worrying about blocking.
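The throttling part can be sketched in plain Python with exponential backoff. The `fetch` callable below is a stand-in for any HTTP call (e.g. `client.get`), and the retry policy is an illustrative assumption, not Google-specific guidance:

```python
import time
import random

def fetch_with_backoff(fetch, url, max_retries=4, base_delay=1.0):
    """Retry a fetch callable with exponential backoff and jitter.
    `fetch` is a stand-in for any HTTP call (e.g. httpx.Client.get)."""
    for attempt in range(max_retries):
        try:
            return fetch(url)
        except Exception:
            if attempt == max_retries - 1:
                raise
            # Wait base, 2x base, 4x base, ... plus random jitter before retrying
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.3))

# Demo with a flaky stand-in that fails twice, then succeeds
calls = {"n": 0}
def flaky(url):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("blocked")
    return "ok"

print(fetch_with_backoff(flaky, "https://www.google.com/search?q=x", base_delay=0.01))  # ok
```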










## Scraping SEO Keywords - Summary

SEO keywords are the search terms users type into search engines to get results. Search engines rank results for these keywords based on several factors:

- Keyword relevance.
- Content quality.
- Geographic distance.
- Responsive design.
- Domain authority.
- Updated content.

We've explored how to apply web scraping for SEO use by scraping Google search ranks and suggested searches. For this, we wrote a small SEO scraper using Python, `httpx` and `parsel` with XPath.

Web scraping keywords allows for SEO optimization, leading to higher search ranking. That being said, scraping Google can have challenges like complex HTML parsing and scraper blocking.



 






