How to save and load cookies in Playwright?

When web scraping, we might want to pause a scraping session by saving its cookies and resume it later. In Playwright, cookies are managed through the browser context object, which provides the cookies() and add_cookies() methods:

import json
from pathlib import Path

from playwright.sync_api import sync_playwright

with sync_playwright() as pw:
    browser = pw.chromium.launch(headless=False)

    # To save cookies to a file, first extract them from the browser context:
    context = browser.new_context(viewport={"width": 1920, "height": 1080})
    page = context.new_page()
    # ... scraping actions that set cookies would go here ...
    cookies = context.cookies()
    Path("cookies.json").write_text(json.dumps(cookies))

    # Then, we can restore cookies from the file into a fresh context:
    context = browser.new_context(viewport={"width": 1920, "height": 1080})
    context.add_cookies(json.loads(Path("cookies.json").read_text()))
    page = context.new_page()
    print(context.cookies())  # we can test whether they were set correctly
    # will print something like:
    # [{
    #     "sameSite": "Lax",
    #     "name": "mycookie",
    #     "value": "myvalue",
    #     "domain": "",
    #     "path": "/",
    #     "expires": -1,
    #     "httpOnly": False,
    #     "secure": False,
    # }]
