Web Scraping With a Headless Browser: Puppeteer
Introduction to using Puppeteer in Node.js for web scraping dynamic web pages and web apps. Tips and tricks, best practices, and an example project.
When web scraping, we often need to save connection state, such as browser cookies, and resume it later. In Puppeteer, we can save and load cookies using the page.cookies() and page.setCookie() methods:
const puppeteer = require('puppeteer');
const fs = require('fs').promises;

async function run() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // get some cookies:
  await page.goto("https://httpbin.dev/cookies/set/mycookie/myvalue");

  // then we can save them to a JSON file:
  const cookies = await page.cookies();
  await fs.writeFile('cookies.json', JSON.stringify(cookies));

  // later, we load the cookies back from the file
  // (note: a distinct variable name, since `cookies` is already declared above):
  const savedCookies = JSON.parse(await fs.readFile('./cookies.json'));
  await page.setCookie(...savedCookies);
  await page.goto("https://httpbin.dev/cookies");
  console.log(await page.content());

  await browser.close();
}
run();
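Note that cookies don't capture everything a page may store: localStorage and sessionStorage live outside the cookie jar. If we need the whole browser state to survive between runs, a heavier-handed alternative is Puppeteer's userDataDir launch option, which persists the entire Chrome profile to disk. Here's a minimal sketch; the "./profile-data" directory name is just an illustrative choice:

const puppeteer = require('puppeteer');

async function runWithProfile() {
  // reuse a persistent profile directory so cookies, localStorage, and
  // other browser state carry over between script runs:
  const browser = await puppeteer.launch({ userDataDir: './profile-data' });
  const page = await browser.newPage();

  // any state set here is written to ./profile-data and will still be
  // present the next time we launch with the same userDataDir:
  await page.goto("https://httpbin.dev/cookies");
  console.log(await page.content());

  await browser.close();
}
runWithProfile();

The trade-off is granularity: a profile directory persists everything and can't be easily inspected or edited, while the JSON approach above keeps cookies in a portable, human-readable file.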