How to Scrape Google Trends using Python


Google Trends is a popular tool for understanding current web trends. It shows the current popular search topics and provides keyword analysis and insights. Needless to say, this is a valuable data source for marketers and SEO experts.

In this article, we'll take a look at Google Trends scraping: what makes it such a popular target in web scraping, and how to scrape this data using nothing but a bit of free Python code.

We'll be using Python and scraping the Google Trends API directly, so we'll also cover a bit of reverse engineering using the browser's Developer Tools. Let's dive in!

Google Trends is a free service built by Google that analyzes all search queries going through Google's search engine. It's a powerful tool for understanding world markets and it provides the following powerful features:

  • Comprehensive analysis of search keyword use with explanatory graphs.
  • Search volumes, related queries and topics on search keywords.
  • Result grouping based on a specific timeframe and geographic location.
  • Real-time and historical searching for trending search topics.

Google Trends allows us to understand the current search patterns and trends which helps with market research, decision-making and ranking higher on the search pages.

In this guide, we'll scrape Google Trends to get keyword insights and trending data. But before that, let's look at the tools we'll use.


Since we'll use Google Trends API to scrape data directly, we won't use any parsing libraries. Instead, we'll only use httpx for sending requests and pandas for saving the data as CSV. These libraries can be installed using this pip terminal command:

pip install httpx pandas

We'll scrape Google Trends directly from their secret backend API. However, it's possible to scrape Google Trends using the traditional HTML scraping approach. For similar examples, see our previous articles on scraping Google search and scraping Google SEO keywords.

🙋‍ Note that Google hasn't provided a public API for Google Trends yet. This is a private API the website uses to get the data in JSON and render it into the HTML page.
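One practical consequence of using this private API: its responses prefix the JSON payload with a few guard characters (an anti-JSON-hijacking measure) that must be stripped before parsing. Here is a minimal sketch using a made-up payload to illustrate the technique:

```python
import json

# Google's private APIs prepend the guard string ")]}'," to JSON responses.
# Strip it before parsing (the payload below is a made-up example).
raw = ")]}',\n{\"default\": {\"rankedList\": []}}"
data = json.loads(raw.replace(")]}',", ""))
print(data["default"])  # {'rankedList': []}
```

We'll apply the same stripping step to the real API responses later in this guide.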

The Google Trends website consists of two main sections:

  • Explore: A tool used to analyze and search for keywords and queries.
  • Trending now: A page that includes data about the current popular search topics.

We'll scrape both sections into JSON and CSV using Python and the httpx library. Let's start with the Explore section.

The Explore section provides trending search keywords and queries while also allowing for exploring keyword statistics and insights.

To scrape this tool from the backend API, we need to find the API responsible for fetching the JSON data. We can see this by opening up Browser Developer Tools using the F12 key while browsing any Google Trends page.
Then, head over to the Network tab and select the Fetch/XHR filter, which tracks all background data requests. If we reload the page, this view should display all the API requests the browser sent to the server:

Inspect the Google Trends keywords analysis page

The above view shows the API requests the browser sent to the server while reloading the page. These API requests often include the JSON data we want to scrape.

We'll search for the requests that represent the related queries and topics sections. Once we find these requests, right-click on each of them and select "copy link address":

How to get the Google Trends API from the browser developer tools

We'll use these links with Python's httpx to send requests and get the data directly as a JSON dataset:

import httpx
import json
import pandas as pd

# Set the geographical location to the United States
geo_location = "US"

# Add the API URLs copied from the browser developer tools. The base URL is
# the relatedsearches widgetdata endpoint; the req and token parameters are
# session-bound, so copy fresh ones from your own browser if requests fail
queries_url = f"https://trends.google.com/trends/api/widgetdata/relatedsearches?hl=en-US&geo={geo_location}&tz=-180&req=%7B%22restriction%22:%7B%22geo%22:%7B%22country%22:%22US%22%7D,%22time%22:%222022-09-22+2023-09-22%22,%22originalTimeRangeForExploreUrl%22:%22today+12-m%22,%22complexKeywordsRestriction%22:%7B%22keyword%22:%5B%7B%22type%22:%22BROAD%22,%22value%22:%22Stocks%22%7D%5D%7D%7D,%22keywordType%22:%22QUERY%22,%22metric%22:%5B%22TOP%22,%22RISING%22%5D,%22trendinessSettings%22:%7B%22compareTime%22:%222021-09-21+2022-09-21%22%7D,%22requestOptions%22:%7B%22property%22:%22%22,%22backend%22:%22IZG%22,%22category%22:0%7D,%22language%22:%22en%22,%22userCountryCode%22:%22EG%22,%22userConfig%22:%7B%22userType%22:%22USER_TYPE_LEGIT_USER%22%7D%7D&token=APP6_UEAAAAAZQ74BjkfhNeif16RtzujoCo4WDMvTJrM"
topics_url = f"https://trends.google.com/trends/api/widgetdata/relatedsearches?hl=en-US&geo={geo_location}&tz=-180&req=%7B%22restriction%22:%7B%22geo%22:%7B%22country%22:%22US%22%7D,%22time%22:%222022-09-22+2023-09-22%22,%22originalTimeRangeForExploreUrl%22:%22today+12-m%22,%22complexKeywordsRestriction%22:%7B%22keyword%22:%5B%7B%22type%22:%22BROAD%22,%22value%22:%22Stocks%22%7D%5D%7D%7D,%22keywordType%22:%22ENTITY%22,%22metric%22:%5B%22TOP%22,%22RISING%22%5D,%22trendinessSettings%22:%7B%22compareTime%22:%222021-09-21+2022-09-21%22%7D,%22requestOptions%22:%7B%22property%22:%22%22,%22backend%22:%22IZG%22,%22category%22:0%7D,%22language%22:%22en%22,%22userCountryCode%22:%22EG%22,%22userConfig%22:%7B%22userType%22:%22USER_TYPE_LEGIT_USER%22%7D%7D&token=APP6_UEAAAAAZQ74BlNutPu6eM-2GC3K6RzCWCS0_H5I"

# Get the data from the API URLs
topics_response = httpx.get(url=topics_url)
queries_response = httpx.get(url=queries_url)

# Strip the )]}', guard prefix and parse the responses into JSON objects
topics_data = json.loads(topics_response.text.replace(")]}',", ""))
queries_data = json.loads(queries_response.text.replace(")]}',", ""))

result = []

# Parse the topics data and append it to the result list
for topic in topics_data["default"]["rankedList"][1]["rankedKeyword"]:
    topic_object = {
        "Title": topic["topic"]["title"],
        "Search Volume": topic["value"],
        "Link": "https://trends.google.com" + topic["link"],
        "Geo Location": geo_location,
        "Type": "search_topic",
    }
    result.append(topic_object)

# Parse the queries data and append it to the result list
for query in queries_data["default"]["rankedList"][1]["rankedKeyword"]:
    query_object = {
        "Title": query["query"],
        "Search Volume": query["value"],
        "Link": "https://trends.google.com" + query["link"],
        "Geo Location": geo_location,
        "Type": "search_query",
    }
    result.append(query_object)


# Create a Pandas dataframe and save the data into CSV
df = pd.DataFrame(result)
df.to_csv("keywords.csv", index=False)

Here, we create a geo_location variable to set the web scraping locality to the US and interpolate it into the API URLs we got earlier. Then, we use httpx to send requests and load the responses into JSON objects.

Next, we select the data we want from the JSON using dictionary indexing and append it to the result list. Finally, we save the result to a CSV file using pandas. Here is the result we got:

Google Trends scraper result

Cool! We scraped Google Trends keywords in JSON and CSV without parsing the HTML. Let's do the same for the trending topics section.
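For reference, each row in the saved dataset follows the schema we built above. Here is a quick sketch of the CSV round trip with illustrative values (the titles, volumes and links below are made up, not real scraped data):

```python
import pandas as pd

# Illustrative rows matching the scraper's output schema (values are made up)
result = [
    {"Title": "stock market", "Search Volume": 100,
     "Link": "https://trends.google.com/trends/explore?q=stock+market",
     "Geo Location": "US", "Type": "search_query"},
    {"Title": "Stocks", "Search Volume": 45,
     "Link": "https://trends.google.com/trends/explore?q=Stocks",
     "Geo Location": "US", "Type": "search_topic"},
]
df = pd.DataFrame(result)
df.to_csv("keywords.csv", index=False)
print(pd.read_csv("keywords.csv").shape)  # (2, 5)
```

Keeping the schema flat like this makes the CSV easy to load back into pandas for further analysis.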

Another important section of the Google Trends website is Trending Now, which represents the popular topics that users are searching for:

Google Trends trending searches page

First, we'll start by getting the API request responsible for fetching this data. Open the developer tools, select the Network tab and, instead of reloading the page, scroll down to load more data. This will reveal the API responsible for fetching historical trends data:

How to get the Google Trends API from the browser developer tools

This API uses a numeric date parameter to get trending search topics for a specific day. We'll use it to scrape Google Trends on specific days:

import httpx
import json
import pandas as pd

result = []
geo_location = "US"

# Decrement the date parameter to get trends data of previous days
for day in range(20230921, 20230919, -1):
    # The base URL is the dailytrends endpoint found in the developer tools
    url = f"https://trends.google.com/trends/api/dailytrends?geo={geo_location}&tz=-180&ed={day}&hl=en-US&ns=15"

    response = httpx.get(url=url)
    data = json.loads(response.text.replace(")]}',", ""))
    # Extract the formatted date from the JSON data
    date = data["default"]["trendingSearchesDays"][0]["formattedDate"]

    for trend in data["default"]["trendingSearchesDays"][0]["trendingSearches"]:
        trend_object = {
            "Title": trend["title"]["query"],
            "Traffic volume": trend["formattedTraffic"],
            "Link": "https://trends.google.com" + trend["title"]["exploreLink"],
            "Type": "Trend_topic",
            "Date": date,
            "Geo Location": geo_location,
        }
        result.append(trend_object)


df = pd.DataFrame(result)
df.to_csv("trends.csv", index=False)

The above code is very similar to the Google Trends scraper we wrote earlier. We only added a loop that iterates over a set of days to get the trending search topics data of a specific day. Here is what we got:

Google Trends scraping result
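One caveat with the loop above: decrementing the raw integer (20230921, 20230920, ...) only works within a single month, since for example 20231001 - 1 yields 20231000, which is not a valid date. A safer sketch generates the numeric parameter from real dates, assuming the YYYYMMDD format seen in the captured URL:

```python
from datetime import date, timedelta

# Generate valid YYYYMMDD values for the date parameter by stepping real
# dates instead of decrementing the raw integer, which breaks across months
def date_params(start: date, days: int) -> list[int]:
    return [int((start - timedelta(days=i)).strftime("%Y%m%d")) for i in range(days)]

print(date_params(date(2023, 10, 1), 3))  # [20231001, 20230930, 20230929]
```

The `date_params` helper is a hypothetical name for illustration; any equivalent date arithmetic works.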


To wrap up this guide, let's take a look at some frequently asked questions about scraping Google Trends.

No, Google Trends hasn't provided a public API yet. However, you can find the private API in the browser developer tools and get the data in JSON, as demonstrated in this guide.

Yes, web scraping publicly available Google Trends data is generally considered legal as long as it's done at respectful rates that don't cause any harm to the website.

Yes, by changing the date parameter in the Google Trends API, you can scrape trending data from a specific date.

Yes, you can scrape Google keyword suggestions to get related keywords, topics and queries about a specific search keyword.

In this article, we explained how to scrape Google Trends using Python. Google Trends is a valuable data tool that provides keyword analysis and trending search topics by analyzing the search queries on the search engine.

Although a public Google Trends API isn't available yet, we can scrape Google Trends by sending requests to the private backend API, which can be extracted from the browser developer tools, and getting the data in JSON.

Related Posts

How to Scrape Forms

Learn how to scrape forms through a step-by-step guide using HTTP clients and headless browsers.

How to Build a Minimum Advertised Price (MAP) Monitoring Tool

Learn what minimum advertised price monitoring is and how to apply its concept using Python web scraping.

How to Scrape Reddit Posts, Subreddits and Profiles

In this article, we'll explore how to scrape Reddit. We'll extract various social data types from subreddits, posts, and user pages. All of which through plain HTTP requests without headless browser usage.