How to take a screenshot with Playwright?

by scrapecrow Dec 05, 2022

When web scraping, we might want to collect page screenshots or peek into what our headless browsers are seeing for debugging. In Playwright, a screenshot can be taken using the screenshot() method of either a page or a locator (element) object:

from pathlib import Path
from playwright.sync_api import sync_playwright

with sync_playwright() as pw:
    browser = pw.chromium.launch(headless=False)

    # create a browser context with a fixed viewport size:
    context = browser.new_context(viewport={"width": 1920, "height": 1080})
    page = context.new_page()
    page.goto('https://httpbin.dev/html')
    image_bytes = page.screenshot(
        full_page=True,   # this will try to scroll to capture full page
        path='screenshot.png',  # this will save the screenshot directly to a file
        clip={"x": 0, "y": 0, "width": 100, "height": 100},  # this will clip the screenshot to a specific region
    )
    # or we can save it manually
    Path("screenshot.png").write_bytes(image_bytes)

    # we can also take a screenshot of a single element
    element = page.locator('p')
    image_bytes = element.screenshot(path='element.png')  # saved to a separate file to avoid overwriting the page screenshot
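
Screenshots default to PNG. If smaller files are preferred, the same screenshot() method also accepts type and quality arguments for JPEG output - a minimal sketch (the file name here is just an example):

    # JPEG output with a compression quality setting (quality only applies to jpeg)
    page.screenshot(path='screenshot.jpg', type='jpeg', quality=80)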

⚠ Note that when scraping dynamic web pages, screenshots can be captured before the page has fully loaded. For more, see How to wait for page to load in Playwright?
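
For example, one simple approach is to wait for network activity to settle, or for a known element to appear, before taking the screenshot - a minimal sketch (the URL and selector are placeholders):

from playwright.sync_api import sync_playwright

with sync_playwright() as pw:
    browser = pw.chromium.launch(headless=True)
    page = browser.new_page()
    # wait until network activity settles before considering the page loaded
    page.goto('https://httpbin.dev/html', wait_until='networkidle')
    # or wait explicitly for a specific element to appear
    page.wait_for_selector('p')
    page.screenshot(path='loaded.png', full_page=True)
    browser.close()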

Related Articles

Playwright Examples for Web Scraping and Automation
Learn Playwright with Python and JavaScript examples for automating browsers like Chromium, WebKit, and Firefox.

How to Scrape With Headless Firefox
Discover how to use headless Firefox with Selenium, Playwright, and Puppeteer for web scraping, including practical examples for each library.

Web Scraping Dynamic Websites With Scrapy Playwright
Learn about Scrapy Playwright, a Scrapy integration that allows web scraping dynamic web pages with Scrapy. We'll explain web scraping with Scrapy Playwright through an example project and how to use it for common scraping use cases, such as clicking elements, scrolling, and waiting for elements.

How to use Headless Chrome Extensions for Web Scraping
In this article, we'll explore different useful Chrome extensions for web scraping. We'll also explain how to install Chrome extensions with various headless browser libraries, such as Selenium, Playwright, and Puppeteer.

How to Scrape Google Maps
We'll take a look at how to find businesses through the Google Maps search system and how to scrape their details using either Selenium, Playwright, or ScrapFly's JavaScript rendering feature - all in Python.

How to Scrape Dynamic Websites Using Headless Web Browsers
Introduction to using web automation tools such as Puppeteer, Playwright, Selenium, and ScrapFly to render dynamic websites for web scraping.