How to Scrape Indeed.com (2025 Update)

In this web scraping tutorial, we'll take a look at how to scrape job listing data from Indeed.com. It's one of the most popular job listing websites, and it's straightforward to scrape!

In this tutorial, we'll build our scraper with just a few lines of Python code. We'll take a look at how Indeed's search works so we can replicate it in our scraper and extract job data from embedded JavaScript variables. Let's dive in!

Latest Indeed.com Scraper Code

https://github.com/scrapfly/scrapfly-scrapers/

Why Scrape Indeed.com?

The job market is dynamic, with new opportunities and updates appearing daily. Scraping Indeed provides real-time visibility into job postings across different industries and locations.

Scraping Indeed also enables studying job market trends. Aggregating the job data reveals patterns such as in-demand skills and common job requirements.

Moreover, manually exploring thousands of job listings on the website is time-consuming, while a scraper can collect the same listings quickly or power custom job posting alerts.

Project Setup

For this web scraper, we'll only need an HTTP client library such as httpx, which can be installed with the pip console command:

$ pip install httpx

There are many HTTP clients in Python, such as requests, httpx, and aiohttp. However, we recommend httpx as it's the least likely to be blocked thanks to its HTTP/2 support. httpx also allows us to execute our web scraping code asynchronously, greatly increasing web scraping speed.
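Note that HTTP/2 support is opt-in in httpx and requires an optional extra. A minimal sketch of enabling it (the URL here is just for illustration):

import httpx

# HTTP/2 is off by default and needs the optional extra: pip install "httpx[http2]"
client = httpx.Client(http2=True)
response = client.get("https://www.indeed.com")
print(response.http_version)  # "HTTP/2" when the server negotiates it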

For ScrapFly users, we'll also be providing code versions using scrapfly-sdk.

Web Scraping with Python

Introduction tutorial to web scraping with Python. How to collect and parse public data. Challenges, best practices and an example project.

Finding Indeed Jobs

To start, let's take a look at how we can find job listings on Indeed.com.
Go to the website homepage, submit a search, and you will be redirected to a search URL with a few key parameters:

https://www.indeed.com/jobs?q=python&l=Texas

So, for example, to find Python jobs in Texas, all we have to do is send a request with the q=python and l=Texas URL parameters:

Python
import httpx
HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36",
    "Accept-Encoding": "gzip, deflate, br",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
    "Connection": "keep-alive",
    "Accept-Language": "en-US,en;q=0.9,lt;q=0.8,et;q=0.7,de;q=0.6",
}

response = httpx.get("https://www.indeed.com/jobs?q=python&l=Texas", headers=HEADERS)
print(response)

ScrapFly

from scrapfly import ScrapflyClient, ScrapeConfig

scrapfly = ScrapflyClient(key="YOUR SCRAPFLY KEY")
result = scrapfly.scrape(ScrapeConfig(
    url="https://www.indeed.com/jobs?q=python&l=Texas",
    asp=True,
))
print(result.selector.xpath('//h1').get())

Note: if you receive a 403 response status code here, you are likely being blocked. Use the ScrapFly code tab to avoid blocking.
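To fail fast instead of feeding a block page into the parser, it can help to check the status code before parsing. A minimal sketch, reusing the HEADERS defined above:

import httpx

response = httpx.get("https://www.indeed.com/jobs?q=python&l=Texas", headers=HEADERS)
# raises httpx.HTTPStatusError on 4xx/5xx responses, such as a 403 block page
response.raise_for_status()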

We got a single page that contains 15 job listings! Before we collect the remaining pages, let's see how we can parse job listing data from this response.

We could parse the HTML document using CSS or XPath selectors, but there's an easier way: we can find all of the job listing data hidden away deep in the HTML as a JSON document:

page source of indeed.com search page with embedded data

This type of data is commonly known as hidden web data. It is the same data present on the web page, but captured before it gets rendered into HTML.

So, let's parse this data using a simple regular expression pattern:

Python
import httpx
import re
import json

HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36",
    "Accept-Encoding": "gzip, deflate, br",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
    "Connection": "keep-alive",
    "Accept-Language": "en-US,en;q=0.9,lt;q=0.8,et;q=0.7,de;q=0.6",
}


def parse_search_page(html: str):
    data = re.findall(r'window.mosaic.providerData\["mosaic-provider-jobcards"\]=(\{.+?\});', html)
    data = json.loads(data[0])
    return {
        "results": data["metaData"]["mosaicProviderJobCardsModel"]["results"],
        "meta": data["metaData"]["mosaicProviderJobCardsModel"]["tierSummaries"],
    }


response = httpx.get("https://www.indeed.com/jobs?q=python&l=Texas", headers=HEADERS)
print(parse_search_page(response.text))

ScrapFly

import re
import json
from scrapfly import ScrapflyClient, ScrapeConfig

scrapfly = ScrapflyClient(key="YOUR SCRAPFLY KEY")


def parse_search_page(html: str):
    data = re.findall(r'window.mosaic.providerData\["mosaic-provider-jobcards"\]=(\{.+?\});', html)
    data = json.loads(data[0])
    return {
        "results": data["metaData"]["mosaicProviderJobCardsModel"]["results"],
        "meta": data["metaData"]["mosaicProviderJobCardsModel"]["tierSummaries"],
    }


result = scrapfly.scrape(
    ScrapeConfig(
        url="https://www.indeed.com/jobs?q=python&l=Texas",
        asp=True,
    )
)
print(parse_search_page(result.content))

In the code above, we use a regular expression pattern to select the mosaic-provider-jobcards variable's value, load it as a Python dictionary, and parse out the results and paging metadata.

Now that we have the first page of results and the total result count, we can retrieve the remaining pages:

Python
import asyncio
import httpx
import json
import re
from urllib.parse import urlencode


def parse_search_page(html: str):
    data = re.findall(r'window.mosaic.providerData\["mosaic-provider-jobcards"\]=(\{.+?\});', html)
    data = json.loads(data[0])
    return {
        "results": data["metaData"]["mosaicProviderJobCardsModel"]["results"],
        "meta": data["metaData"]["mosaicProviderJobCardsModel"]["tierSummaries"],
    }


async def scrape_search(client: httpx.AsyncClient, query: str, location: str, max_results: int = 50):
    def make_page_url(offset):
        parameters = {"q": query, "l": location, "filter": 0, "start": offset}
        return "https://www.indeed.com/jobs?" + urlencode(parameters)

    print(f"scraping first page of search: {query=}, {location=}")
    response_first_page = await client.get(make_page_url(0))
    data_first_page = parse_search_page(response_first_page.text)

    results = data_first_page["results"]
    total_results = sum(category["jobCount"] for category in data_first_page["meta"])
    # there's a page limit on indeed.com of 1000 results per search
    if total_results > max_results:
        total_results = max_results
    print(f"scraping remaining {total_results - 10 / 10} pages")
    other_pages = [make_page_url(offset) for offset in range(10, total_results + 10, 10)]
    for response in await asyncio.gather(*[client.get(url=url) for url in other_pages]):
        results.extend(parse_search_page(response.text)["results"])
    return results

ScrapFly

import json
import re
from urllib.parse import urlencode
from scrapfly import ScrapflyClient, ScrapeConfig

scrapfly = ScrapflyClient(key="YOUR SCRAPFLY KEY")


def parse_search_page(html: str):
    data = re.findall(r'window.mosaic.providerData\["mosaic-provider-jobcards"\]=(\{.+?\});', html)
    data = json.loads(data[0])
    return {
        "results": data["metaData"]["mosaicProviderJobCardsModel"]["results"],
        "meta": data["metaData"]["mosaicProviderJobCardsModel"]["tierSummaries"],
    }


async def scrape_search(query: str, location: str, max_results: int = 50):
    def make_page_url(offset):
        parameters = {"q": query, "l": location, "filter": 0, "start": offset}
        return "https://www.indeed.com/jobs?" + urlencode(parameters)

    print(f"scraping first page of search: {query=}, {location=}")
    result_first_page = await scrapfly.async_scrape(ScrapeConfig(make_page_url(0), asp=True))
    data_first_page = parse_search_page(result_first_page.content)

    results = data_first_page["results"]
    total_results = sum(category["jobCount"] for category in data_first_page["meta"])
    # there's a page limit on indeed.com of 1000 results per search
    if total_results > max_results:
        total_results = max_results
    print(f"scraping remaining {total_results - 10 / 10} pages")
    other_pages = [
        ScrapeConfig(make_page_url(offset), asp=True) 
        for offset in range(10, total_results + 10, 10)
    ]
    async for result in scrapfly.concurrent_scrape(other_pages):
        results.extend(parse_search_page(result.content)["results"])
    return results

# example run
import asyncio
asyncio.run(scrape_search(query="python", location="texas"))
Run Code & Example Output
async def main():
    # we need to use browser-like headers to avoid being blocked instantly:
    HEADERS = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36",
        "Accept-Encoding": "gzip, deflate, br",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
        "Accept-Language": "en-US,en;q=0.9,lt;q=0.8,et;q=0.7,de;q=0.6",
    }
    async with httpx.AsyncClient(headers=HEADERS) as client:
        search_data = await scrape_search(client, query="python", location="texas")
        print(json.dumps(search_data, indent=2))

asyncio.run(main())

The search result data is similar to the following:

[
    {
        "company": "Apple",
        "companyBrandingAttributes": {
            "headerImageUrl": "https://d2q79iu7y748jz.cloudfront.net/s/_headerimage/1960x400/ecdb4796986d27b654fe959e2fdac201",
            "logoUrl": "https://d2q79iu7y748jz.cloudfront.net/s/_squarelogo/256x256/86583e966849b2f081928769a6abdb09"
        },
        "companyIdEncrypted": "c1099851e9794854",
        "companyOverviewLink": "/cmp/Apple",
        "companyOverviewLinkCampaignId": "serp-linkcompanyname",
        "companyRating": 4.1,
        "companyReviewCount": 11193,
        "companyReviewLink": "/cmp/Apple/reviews",
        "companyReviewLinkCampaignId": "cmplinktst2",
        "displayTitle": "Software Quality Engineer, Apple Pay",
        "employerAssistEnabled": false,
        "employerResponsive": false,
        "encryptedFccompanyId": "6e7b40121fbb5e2f",
        "encryptedResultData": "VwIPTVJ1cTn5AN7Q-tSqGRXGNe2wB2UYx73qSczFnGU",
        "expired": false,
        "extractTrackingUrls": "https://jsv3.recruitics.com/partner/a51b8de1-f7bf-11e7-9edd-d951492604d9.gif?client=3427&rx_c=&rx_campaign=indeed16&rx_group=130795&rx_source=Indeed&job=200336736-2&rx_r=none&rx_ts=20220831T001748Z&rx_pre=1&indeed=sp",
        "extractedEntities": [],
        "fccompanyId": -1,
        "featuredCompanyAttributes": {},
        "featuredEmployer": false,
        "featuredEmployerCandidate": false,
        "feedId": 2772,
        "formattedLocation": "Austin, TX",
        "formattedRelativeTime": "Today",
        "hideMetaData": false,
        "hideSave": false,
        "highVolumeHiringModel": {
            "highVolumeHiring": false
        },
        "highlyRatedEmployer": false,
        "hiringEventJob": false,
        "indeedApplyEnabled": false,
        "indeedApplyable": false,
        "isJobSpotterJob": false,
        "isJobVisited": false,
        "isMobileThirdPartyApplyable": true,
        "isNoResumeJob": false,
        "isSubsidiaryJob": false,
        "jobCardRequirementsModel": {
            "additionalRequirementsCount": 0,
            "requirementsHeaderShown": false
        },
        "jobLocationCity": "Austin",
        "jobLocationState": "TX",
        "jobTypes": [],
        "jobkey": "5b47456ae8554711",
        "jsiEnabled": false,
        "locationCount": 0,
        "mobtk": "1gbpe4pcikib6800",
        "moreLocUrl": "",
        "newJob": true,
        "normTitle": "Software Quality Engineer",
        "openInterviewsInterviewsOnTheSpot": false,
        "openInterviewsJob": false,
        "openInterviewsOffersOnTheSpot": false,
        "openInterviewsPhoneJob": false,
        "overrideIndeedApplyText": true,
        "preciseLocationModel": {
            "obfuscateLocation": false,
            "overrideJCMPreciseLocationModel": true
        },
        "pubDate": 1661835600000,
        "redirectToThirdPartySite": false,
        "remoteLocation": false,
        "resumeMatch": false,
        "salarySnippet": {
            "salaryTextFormatted": false
        },
        "saved": false,
        "savedApplication": false,
        "showCommutePromo": false,
        "showEarlyApply": false,
        "showJobType": false,
        "showRelativeDate": true,
        "showSponsoredLabel": false,
        "showStrongerAppliedLabel": false,
        "smartFillEnabled": false,
        "snippet": "<ul style=\"list-style-type:circle;margin-top: 0px;margin-bottom: 0px;padding-left:20px;\"> \n <li style=\"margin-bottom:0px;\">At Apple, new ideas become extraordinary products, services, and customer experiences.</li>\n <li>We have the rare and rewarding opportunity to shape upcoming products\u2026</li>\n</ul>",
        "sourceId": 2700,
        "sponsored": true,
        "taxoAttributes": [],
        "taxoAttributesDisplayLimit": 5,
        "taxoLogAttributes": [],
        "taxonomyAttributes": [ { "attributes": [], "label": "job-types" }, "..."],
        "tier": {
            "matchedPreferences": {
                "longMatchedPreferences": [],
                "stringMatchedPreferences": []
            },
            "type": "DEFAULT"
        },
        "title": "Software Quality Engineer, Apple Pay",
        "translatedAttributes": [],
        "translatedCmiJobTags": [],
        "truncatedCompany": "Apple",
        "urgentlyHiring": false,
        "viewJobLink": "...",
        "vjFeaturedEmployerCandidate": false
    },
]
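Each result object carries dozens of fields. Here's a short, illustrative sketch that trims a result down to a few commonly used values (the field names are taken from the example output above):

def parse_job_result(result: dict) -> dict:
    """keep only a few commonly used fields of a search result"""
    return {
        "title": result["title"],
        "company": result["company"],
        "location": result["formattedLocation"],
        "job_key": result["jobkey"],
        "published": result["pubDate"],  # millisecond unix timestamp
    }

# e.g. summaries = [parse_job_result(result) for result in results]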

We've successfully scraped mountains of data with very few lines of Python code! Next, let's scrape the job pages themselves to obtain the remaining details of each listing, such as the full description.

Scraping Indeed Jobs

Our search results contain almost all job listing data except for a few details, such as the complete job description. To scrape those, we need the job id, which is found in the jobkey field of our search results:

{
  "jobkey": "a82cf0bd2092efa3",
}
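For example, the keys can be collected straight from the search results scraped earlier (search_results here is an illustrative variable holding those result objects):

# collect the job keys from previously scraped search results
job_keys = [result["jobkey"] for result in search_results]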

Using the jobkey, we can request the full job details page and, just like with the search, parse the hidden data instead of the HTML:

page source of indeed.com job page with embedded data

We can see that all of the job and page information is hidden in the _initialData variable. It can be extracted with a simple regular expression pattern:

Python
import re
import json
import httpx
import asyncio
from typing import List


def parse_job_page(html):
    """parse job data from job listing page"""
    data = re.findall(r"_initialData=(\{.+?\});", html)
    data = json.loads(data[0])
    return data["jobInfoWrapperModel"]["jobInfoModel"]


async def scrape_jobs(client: httpx.AsyncClient, job_keys: List[str]):
    """scrape job details from job page for given job keys"""
    urls = [f"https://www.indeed.com/m/basecamp/viewjob?viewtype=embedded&jk={job_key}" for job_key in job_keys]
    scraped = []
    for response in await asyncio.gather(*[client.get(url=url) for url in urls]):
        scraped.append(parse_job_page(response.text))
    return scraped

ScrapFly

import re
import json
from typing import List
from scrapfly import ScrapeConfig, ScrapflyClient

scrapfly = ScrapflyClient(key="YOUR SCRAPFLY KEY")


def parse_job_page(html):
    """parse job data from job listing page"""
    data = re.findall(r"_initialData=(\{.+?\});", html)
    data = json.loads(data[0])
    return data["jobInfoWrapperModel"]["jobInfoModel"]


async def scrape_jobs(job_keys: List[str]):
    """scrape job details from job page for given job keys"""
    urls = [f"https://www.indeed.com/m/basecamp/viewjob?viewtype=embedded&jk={job_key}" for job_key in job_keys]
    to_scrape = [ScrapeConfig(url=url, asp=True) for url in urls]
    scraped = []
    async for result in scrapfly.concurrent_scrape(to_scrape):
        scraped.append(parse_job_page(result.content))
    return scraped
Run Code & Example Output
async def main():
    HEADERS = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36",
        "Accept-Encoding": "gzip, deflate, br",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
        "Connection": "keep-alive",
        "Accept-Language": "en-US,en;q=0.9,lt;q=0.8,et;q=0.7,de;q=0.6",
    }
    async with httpx.AsyncClient(headers=HEADERS) as client:
        job_data = await scrape_jobs(client, ["9100493864fe1d6e", "5361f22542fe4a95"])
        print(job_data[0]['sanitizedJobDescription']['content'])
        print(job_data)

asyncio.run(main())

This will scrape results similar to:

[
    {
        "jobInfoHeaderModel": {
            "...",
            "companyName": "ExxonMobil",
            "companyOverviewLink": "https://www.indeed.com/cmp/Exxonmobil?campaignid=mobvjcmp&from=mobviewjob&tk=1gbpekba3is92800&fromjk=9dacdef3068a1d25",
            "companyReviewLink": "https://www.indeed.com/cmp/Exxonmobil/reviews?campaignid=mobvjcmp&cmpratingc=mobviewjob&from=mobviewjob&tk=1gbpekba3is92800&fromjk=9dacdef3068a1d25&jt=Geoscience+Technician",
            "companyReviewModel": {
                "companyName": "ExxonMobil",
                "desktopCompanyLink": "https://www.indeed.com/cmp/Exxonmobil/reviews?campaignid=viewjob&cmpratingc=mobviewjob&from=viewjob&tk=1gbpekba3is92800&fromjk=9dacdef3068a1d25&jt=Geoscience+Technician",
                "mobileCompanyLink": "https://www.indeed.com/cmp/Exxonmobil/reviews?campaignid=mobvjcmp&cmpratingc=mobviewjob&from=mobviewjob&tk=1gbpekba3is92800&fromjk=9dacdef3068a1d25&jt=Geoscience+Technician",
                "ratingsModel": {
                    "ariaContent": "3.9 out of 5 stars from 4,649 employee ratings",
                    "count": 4649,
                    "countContent": "4,649 reviews",
                    "descriptionContent": "Read what people are saying about working here.",
                    "rating": 3.9,
                    "showCount": true,
                    "showDescription": true,
                    "size": null
                }
            },
            "disableAcmeLink": false,
            "employerActivity": null,
            "employerResponsiveCardModel": null,
            "formattedLocation": "Spring, TX 77389",
            "hideRating": false,
            "isDesktopApplyButtonSticky": false,
            "isSimplifiedHeader": false,
            "jobTitle": "Geoscience Technician",
            "openCompanyLinksInNewTab": false,
            "parentCompanyName": null,
            "preciseLocationModel": null,
            "ratingsModel": null,
            "remoteWorkModel": null,
            "subtitle": "ExxonMobil - Spring, TX 77389",
            "tagModels": null,
            "viewJobDisplay": "DESKTOP_EMBEDDED"
        },
        "sanitizedJobDescription": {
            "content": "<p></p>\n<div>\n <div>\n  <div>\n   <div>\n    <h2 class='\"jobSectionHeader\"'><b>Education and Related Experience</b></h2>\n   </div>\n   <div>\n  ...",
            "contentKind": "HTML"
        },
        "viewJobDisplay": "DESKTOP_EMBEDDED"
    }
]

If we run this scraper, we should see the full job description printed out.
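The description is returned as HTML (note the contentKind field above). If plain text is preferred, here's a quick sketch using only Python's standard library:

from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """collect text nodes, dropping all HTML tags"""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def html_to_text(html: str) -> str:
    extractor = TextExtractor()
    extractor.feed(html)
    return "".join(extractor.parts).strip()

# e.g. html_to_text(job_data[0]["sanitizedJobDescription"]["content"])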


With this last feature, our Indeed scraper is ready to go! However, it's very likely to get blocked when running at scale, so let's take a look at how to integrate ScrapFly to avoid being blocked.

Bypass Indeed Blocking with ScrapFly

Indeed.com uses anti-scraping protection to block web scraper traffic. To get around this, we can use the ScrapFly web scraping API, which will help us scale up!

ScrapFly provides web scraping, screenshot, and extraction APIs for data collection at scale.

For our Indeed scraper, we'll be using the Anti Scraping Protection bypass feature via scrapfly-sdk, which can be installed with the pip console command:

$ pip install scrapfly-sdk

Now, we can enable the Anti Scraping Protection bypass via the asp=True flag:

from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_API_KEY")
result = client.scrape(ScrapeConfig(
    url="https://www.indeed.com/jobs?q=python&l=Texas",
    asp=True,
    # ^ enable Anti Scraping Protection
))

html = result.content  # get the page HTML
selector = result.selector # use the built-in parsel selector

FAQ

Is it legal to scrape Indeed.com?

Yes. The job data on Indeed.com is publicly available, so it's perfectly legal to scrape. Note that some of the scraped material, such as images, can be protected by copyright.

Can Indeed be scraped using headless browsers such as Playwright?

Yes, but as covered in this article, it's not necessary. Indeed pages are powered by hidden JSON data that can be scraped directly, which reduces the resource requirements for both the scraper and Indeed.com's public data servers.

Is there a public API for Indeed.com?

No. As of the time of writing, there is no public API for Indeed.com job data. However, as this article demonstrates, Indeed.com can easily be scraped using Python!

Indeed Scraping Summary

In this short web scraping tutorial, we looked at scraping Indeed.com's job listing search.
We built a search URL from custom search parameters and parsed job data from the embedded JSON using regular expressions. As a bonus, we also scraped full job listing descriptions and saw how to avoid blocking with the Scrapfly SDK.
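For reference, here's a sketch of how the two httpx-based functions from this article fit together into a single run (HEADERS as defined earlier):

import asyncio
import json
import httpx

async def main():
    async with httpx.AsyncClient(headers=HEADERS) as client:
        # find job listings matching the search query
        search_results = await scrape_search(client, query="python", location="texas")
        # then scrape the full details of the first few listings
        job_keys = [result["jobkey"] for result in search_results[:5]]
        jobs = await scrape_jobs(client, job_keys)
        print(json.dumps(jobs, indent=2))

asyncio.run(main())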

Related Articles

How to Scrape YouTube in 2025

Learn how to scrape YouTube, channel, video, and comment data using Python directly in JSON.

How to Scrape Reddit Posts, Subreddits and Profiles

In this article, we'll explore how to scrape Reddit. We'll extract various social data types from subreddits, posts, and user pages. All of which through plain HTTP requests without headless browser usage.

How to Scrape LinkedIn in 2025

In this scrape guide we'll be taking a look at one of the most popular web scraping targets - LinkedIn.com. We'll be scraping people profiles, company profiles as well as job listings and search.

How to Scrape SimilarWeb Website Traffic Analytics

In this guide, we'll explain how to scrape SimilarWeb through a step-by-step guide. We'll scrape comprehensive website traffic insights, websites comparing data, sitemaps, and trending industry domains.

How to Scrape BestBuy Product, Offer and Review Data

Learn how to scrape BestBuy, one of the most popular retail stores for electronic stores in the United States. We'll scrape different data types from product, search, review, and sitemap pages using different web scraping techniques.

How To Scrape TikTok in 2025

In this tutorial, we'll explain how to scrape TikTok. We'll extract data from various TikTok sources, such as posts, comments, profiles and search pages. Moreover, we'll scrape these data through hidden TikTok APIs or hidden JSON datasets.
