How to wait for page to load in Selenium?

When scraping dynamic web pages with Selenium, we need to wait for the page to fully load before retrieving the page source. Using Selenium's WebDriverWait, we can wait for a specific element to appear on the page, which indicates that the web page has fully loaded, and then grab the page source:

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://httpbin.dev/")
_timeout = 10  # ⚠ don't forget to set a reasonable timeout
WebDriverWait(driver, _timeout).until(
    expected_conditions.presence_of_element_located(
        # we can wait by any selector type like element id:
        (By.ID, "operations-tag-Auth")
        # or by class name
        # (By.CLASS_NAME, "price")  # note: CLASS_NAME takes the bare class name, no leading dot
        # or by xpath
        # (By.XPATH, "//h1[@class='price']")
        # or by CSS selector
        # (By.CSS_SELECTOR, "h1.price")
    )
)
print(driver.page_source)
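
If the element never appears within the timeout, WebDriverWait raises a TimeoutException. Here is a minimal sketch (reusing the same driver, timeout and selector as above) of how you might catch it so slow or failed loads don't crash the scraper:

from selenium.common.exceptions import TimeoutException

try:
    WebDriverWait(driver, _timeout).until(
        expected_conditions.presence_of_element_located((By.ID, "operations-tag-Auth"))
    )
    print(driver.page_source)
except TimeoutException:
    # the element did not appear within _timeout seconds - the page likely didn't finish loading
    print("page did not finish loading in time")
finally:
    driver.quit()
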
Question tagged: Selenium, Data Parsing, Headless Browsers

Related Posts

How to Use Chrome Extensions with Playwright, Puppeteer and Selenium

In this article, we'll explore different useful Chrome extensions for web scraping. We'll also explain how to install Chrome extensions with various headless browser libraries, such as Selenium, Playwright and Puppeteer.

Intro to Web Scraping using Selenium Grid

In this guide, you will learn about installing and configuring Selenium Grid with Docker and how to use it for web scraping at scale.

How to Scrape Google Maps

We'll take a look at how to find businesses through Google Maps' search system and how to scrape their details using either Selenium, Playwright or ScrapFly's JavaScript rendering feature - all of that in Python.