How to Scrape With Headless Firefox
Discover how to use headless Firefox with Selenium, Playwright, and Puppeteer for web scraping, including practical examples for each library.
When web scraping, we often need to save connection state such as browser cookies and resume it later. In Puppeteer, cookies can be saved with the page.cookies() method and restored with page.setCookie():
const puppeteer = require('puppeteer');
const fs = require('fs').promises;

async function run() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // get some cookies:
  await page.goto("https://httpbin.dev/cookies/set/mycookie/myvalue");

  // save them to a JSON file:
  const cookies = await page.cookies();
  await fs.writeFile('cookies.json', JSON.stringify(cookies));

  // then later, load the cookies back from the file:
  const savedCookies = JSON.parse(await fs.readFile('./cookies.json'));
  await page.setCookie(...savedCookies);
  await page.goto("https://httpbin.dev/cookies");
  console.log(await page.content());

  await browser.close();
}
run();
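One wrinkle when restoring saved cookies across runs is that some may have expired in the meantime. A small helper can filter those out before they are passed to page.setCookie(). This is a sketch, assuming the cookie objects have the shape Puppeteer's page.cookies() returns, where expires is a Unix timestamp in seconds and -1 marks session cookies:

```javascript
// Drop cookies that have already expired before restoring them.
// Assumes Puppeteer-style cookie objects: `expires` is a Unix
// timestamp in seconds, with -1 for session cookies (kept as-is).
function freshCookies(cookies, nowSeconds = Date.now() / 1000) {
  return cookies.filter(c => c.expires === -1 || c.expires > nowSeconds);
}

// usage before restoring, in place of the raw JSON.parse result:
// const savedCookies = freshCookies(JSON.parse(await fs.readFile('./cookies.json')));
// await page.setCookie(...savedCookies);
```

Filtering is optional in the sense that setting an already-expired cookie is simply a no-op, but keeping the saved file pruned makes it easier to tell whether a later scraping failure is due to a lost session.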