Web Scraping With a Headless Browser: Puppeteer
Introduction to using Puppeteer in Node.js for web scraping dynamic web pages and web apps. Tips, tricks, best practices, and an example project.
CSS selectors are one of the most popular ways to parse HTML pages when web scraping. In Node.js and Puppeteer, CSS selectors can be used through the page.$ and page.$$ methods:
const puppeteer = require('puppeteer');

async function run() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto("https://httpbin.dev/html");

  // to get the first matching element:
  await page.$("p");
  // to get ALL matching elements:
  await page.$$("p");

  // we can also evaluate the matched elements immediately:
  // get the text value:
  await page.$eval("p", element => element.innerText);
  // get an attribute value:
  await page.$eval("a", element => element.href);
  // same with multiple elements, e.g. count total appearances:
  await page.$$eval("p", elements => elements.length);

  await browser.close();
}
run();
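Note that page.$ and page.$$ return ElementHandle objects that can be queried further, while page.$eval and page.$$eval run the provided callback inside the browser context and return its serialized result back to Node.js.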
⚠ On dynamic JavaScript pages, these commands may run before the target elements have been rendered and come up empty. For more, see How to wait for a page to load in Puppeteer?
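As a rough illustration, here is a minimal sketch that guards the query with page.waitForSelector before extracting anything (the selector and the 5-second timeout are illustrative assumptions, not values from the article):

const puppeteer = require('puppeteer');

async function runSafely() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto("https://httpbin.dev/html");

  // wait until at least one <p> element is attached to the DOM
  // before querying it (the 5 second timeout is an assumed value):
  await page.waitForSelector("p", { timeout: 5000 });

  // now the selector query is safe to run:
  const text = await page.$eval("p", element => element.innerText);
  console.log(text);

  await browser.close();
}
runSafely();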