How to save and load cookies in Puppeteer?

by scrapecrow Oct 28, 2022

When web scraping, we often need to save the browser session state, such as cookies, and resume it later. In Puppeteer, we can save and load cookies with the page.cookies() and page.setCookie() methods:

const puppeteer = require('puppeteer');
const fs = require('fs').promises;

async function run() {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    // get some cookies:
    await page.goto("https://httpbin.dev/cookies/set/mycookie/myvalue");
    // then we can save them to a JSON file:
    const cookies = await page.cookies();
    await fs.writeFile('cookies.json', JSON.stringify(cookies));

    // then later, we load the cookies from the file
    // (renamed to avoid re-declaring the cookies constant above):
    const savedCookies = JSON.parse(await fs.readFile('./cookies.json'));
    await page.setCookie(...savedCookies);
    await page.goto("https://httpbin.dev/cookies");

    console.log(await page.content());
    await browser.close();
}

run();
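
In practice, saving and loading usually happen in separate runs of the scraper rather than in the same session. A minimal sketch of that pattern is shown below (the saveCookies and loadCookies helper names and the default file path are illustrative, not part of Puppeteer's API):

const puppeteer = require('puppeteer');
const fs = require('fs').promises;

// save the current page cookies to a JSON file
async function saveCookies(page, path = 'cookies.json') {
    const cookies = await page.cookies();
    await fs.writeFile(path, JSON.stringify(cookies, null, 2));
}

// load cookies from a JSON file into the page before navigating
async function loadCookies(page, path = 'cookies.json') {
    try {
        const cookies = JSON.parse(await fs.readFile(path));
        await page.setCookie(...cookies);
    } catch {
        // no saved cookies yet - start with a fresh session
    }
}

async function run() {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    await loadCookies(page);  // restore a previously saved session, if any
    await page.goto("https://httpbin.dev/cookies");
    console.log(await page.content());

    await saveCookies(page);  // persist the session for the next run
    await browser.close();
}

run();

This keeps the cookie file on disk between runs, so each new browser launch can pick up where the previous session left off.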

Related Articles

What is a Headless Browser? Top 5 Headless Browser Tools
Quick overview of new emerging tech of browser automation - what exactly are these tools and how are they used in web scraping?

How to Scrape With Headless Firefox
Discover how to use headless Firefox with Selenium, Playwright, and Puppeteer for web scraping, including practical examples for each library.

How to Web Scrape with Puppeteer and NodeJS in 2025
Introduction to using Puppeteer in NodeJS for web scraping dynamic web pages and web apps. Tips and tricks, best practices and example project.

How to Scrape Dynamic Websites Using Headless Web Browsers
Introduction to using web automation tools such as Puppeteer, Playwright, Selenium and ScrapFly to render dynamic websites for web scraping.

Bypass Proxy Detection with Browser Fingerprint Impersonation
Stop proxy blocks with browser fingerprint impersonation using this guide for Playwright, Selenium, curl-impersonate & Scrapfly.

Web Scraping with Playwright and JavaScript
Learn about Playwright - a browser automation toolkit for server-side JavaScript like NodeJS, Deno or Bun.