How to download a file with Puppeteer?

To download files with Puppeteer we can either use the browser's fetch() feature - which downloads the file into a JavaScript variable - or find and click the page's download button, which saves the file to the browser's download directory:

// start puppeteer
const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch({ headless: false });
    const page = await browser.newPage();

    // go to url
    await page.goto("https://httpbin.dev/");

    // download file to a javascript variable:
    const csvFile = await page.evaluate(() => {
        // find the url:
        const url = document.querySelector('.download-button').getAttribute('href');
        // download it using the javascript fetch api:
        return fetch(url, {
            method: 'GET',
            credentials: 'include',
        }).then(r => r.text());
    });

    await browser.close();
})();

Alternatively, we can click the download button using the page.click() method:

// start puppeteer
const puppeteer = require('puppeteer');
const path = require('path');

(async () => {
    const browser = await puppeteer.launch({ headless: false });
    const page = await browser.newPage();

    // set default download directory through the Chrome DevTools Protocol:
    const client = await page.createCDPSession();
    await client.send('Page.setDownloadBehavior', {
        behavior: 'allow',
        downloadPath: path.resolve('./downloads'),
    });

    // go to url
    await page.goto("https://httpbin.dev/");
    // click on download link
    await page.click('.download-button');
})();
Question tagged: Puppeteer
