How to click on cookie popups and modal alerts in Playwright?

Modal pop-ups are most commonly encountered as cookie consent popups or login request popups. They are created using custom JavaScript that hides the page content on load and shows some sort of message like this one:

(screenshot: cookie consent popup on the web-scraping.dev/login page)

There are multiple ways to handle modal pop-ups:

  1. We can click on one of the values like "OK" or "Yes"
  2. We can delete the modal element from the DOM

For example, let's take a look at the web-scraping.dev/login page, which shows a cookie pop-up on page load:

from playwright.sync_api import sync_playwright, TimeoutError

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    page = browser.new_page()
    page.goto("https://web-scraping.dev/login")

    # Option #1 - use page.click() to click on the button
    try:
        page.click("#cookie-ok", timeout=2_000)
    except TimeoutError:
        print("no cookie popup")
    
    # Option #2 - delete the popup HTML
    #   remove pop up
    cookie_modal = page.query_selector("#cookieModal")
    if cookie_modal:
        cookie_modal.evaluate("el => el.remove()")
    #   remove the grey backdrop which covers the screen
    modal_backdrop = page.query_selector(".modal-backdrop")
    if modal_backdrop:
        modal_backdrop.evaluate("el => el.remove()")

    browser.close()

Above, we explore two ways to handle modal pop-ups: clicking a button that dismisses the modal, and hard-removing it from the DOM. Generally, the first approach is more reliable, as the real button click can have functionality attached to it, such as setting a cookie so the pop-up doesn't appear again. For cases where the modal is a login requirement or an advertisement with no dismiss button, the second approach is better suited.

Question tagged: Playwright
