# How to Scrape With Headless Firefox

 by [Mazen Ramadan](https://scrapfly.io/blog/author/mazen) Apr 18, 2026 10 min read [\#headless-browser](https://scrapfly.io/blog/tag/headless-browser) [\#nodejs](https://scrapfly.io/blog/tag/nodejs) [\#playwright](https://scrapfly.io/blog/tag/playwright) [\#puppeteer](https://scrapfly.io/blog/tag/puppeteer) [\#python](https://scrapfly.io/blog/tag/python) [\#selenium](https://scrapfly.io/blog/tag/selenium) 


In this guide, we'll explain how to install and use headless Firefox with Selenium, Playwright, and Puppeteer. Additionally, we'll go over a practical example of automating each of these libraries for common tasks when scraping web pages.

## Key Takeaways

This guide covers headless Firefox web scraping with modern browser automation tools, JavaScript rendering, and anti-detection techniques.

- Configure headless Firefox with Selenium, Playwright, and Puppeteer for JavaScript-rendered content scraping
- Implement browser automation with proper user agent rotation and fingerprint management
- Handle dynamic content loading and JavaScript challenges that block traditional HTTP scrapers
- Configure browser settings and extensions for advanced web scraping scenarios
- Implement stealth mode configurations to avoid detection and bypass anti-bot measures
- Implement proper error handling and retry logic for browser automation workflows
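
The retry logic mentioned above is library-agnostic. Here is a minimal sketch (the `with_retries` helper and the flaky action are hypothetical names, not part of Selenium, Playwright, or Puppeteer):

```python
import time

def with_retries(action, attempts=3, delay=0.1, backoff=2.0, exceptions=(Exception,)):
    """Run `action`, retrying with exponential backoff on failure."""
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except exceptions:
            if attempt == attempts:
                raise  # out of retries: surface the last error
            time.sleep(delay)
            delay *= backoff

# Example: an action that fails twice before succeeding,
# standing in for a flaky page load or selector lookup.
calls = {"n": 0}

def flaky_action():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("element not ready")
    return "page loaded"

print(with_retries(flaky_action, attempts=5, delay=0.01))  # page loaded
```

In practice, `action` would wrap a page load or selector lookup, and `exceptions` would be narrowed to the automation library's own timeout errors.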


## Headless Firefox With Selenium

Let's start our guide by exploring Selenium headless Firefox. First, we'll have to install [Selenium](https://pypi.org/project/selenium/) using the following `pip` command:

```shell
pip install selenium
```



The above command installs Selenium 4, which can download the WebDriver binaries automatically for either Chrome or Firefox:

```python
from selenium import webdriver 
from selenium.webdriver import FirefoxOptions

# selenium firefox browser options
options = FirefoxOptions()
options.add_argument("-headless")

# initiating the browser and download the webdriver 
with webdriver.Firefox(options=options) as driver: 
    # go to the target web page
    driver.get("https://httpbin.dev/user-agent")

    print(driver.page_source)
    # "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:124.0) Gecko/20100101 Firefox/124.0"
```



In the above code, we start by defining basic browser configuration using the `FirefoxOptions` class. Then, we use the `webdriver.Firefox` constructor to create a Selenium Firefox instance, which also downloads the Firefox WebDriver binaries automatically. Finally, we request the target web page and return the HTML content.

The above code uses the `-headless` argument to run Selenium Firefox headless (without a graphical user interface). To run it in the **headful mode**, we can simply remove the argument and add an optional browser **viewport size**:

```python
from selenium import webdriver 
from selenium.webdriver import FirefoxOptions

# selenium firefox browser options
options = FirefoxOptions()
# browser viewport size
options.add_argument("--width=1920")
options.add_argument("--height=1080")

# initiating the browser and download the webdriver 
with webdriver.Firefox(options=options) as driver: 
    # ...
```



Now that we can spin up a headless Firefox browser, we can automate it with the regular Selenium API.
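
One common automation task is user agent rotation: picking a fresh user-agent string per session makes scrapers harder to profile. A minimal sketch (the helper and the UA pool are ours; only `set_preference` is Selenium's API):

```python
import random

# a small, illustrative pool of Firefox user-agent strings (examples only)
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:124.0) Gecko/20100101 Firefox/124.0",
    "Mozilla/5.0 (X11; Linux x86_64; rv:124.0) Gecko/20100101 Firefox/124.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:124.0) Gecko/20100101 Firefox/124.0",
]

def pick_user_agent():
    """Pick a random user-agent string for the next browser session."""
    return random.choice(USER_AGENTS)

# With Selenium Firefox, the chosen string can be applied as a profile
# preference before launching the driver:
#   options = FirefoxOptions()
#   options.set_preference("general.useragent.override", pick_user_agent())
```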



### Basic Selenium Firefox Navigation

In this example, we'll create a headless Firefox scraping script to automate the login process on [web-scraping.dev/login](https://web-scraping.dev/login). We'll request the target page URL, accept the cookie policy, fill in the login credentials, and click the login button:

```python
from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

with webdriver.Firefox() as driver:
    # go to the target web page
    driver.get("https://web-scraping.dev/login?cookies=")

    # define a timeout
    wait = WebDriverWait(driver, timeout=5)

    # accept the cookie policy
    wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button#cookie-ok")))
    driver.find_element(By.CSS_SELECTOR, "button#cookie-ok").click()

    # wait for the login form
    wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button[type='submit']")))

    # fill in the login credentials
    username_input = driver.find_element(By.CSS_SELECTOR, "input[name='username']")
    username_input.clear()
    username_input.send_keys("user123")

    password_input = driver.find_element(By.CSS_SELECTOR, "input[name='password']")
    password_input.clear()
    password_input.send_keys("password")

    # click the login submit button
    driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()

    # wait for an element on the login redirected page
    wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "div#secret-message")))

    secret_message = driver.find_element(By.CSS_SELECTOR, "div#secret-message").text
    print(f"The secret message is: {secret_message}")
    # "The secret message is: 🤫"
```



Here, we define timeouts to wait for specific elements to appear using Selenium's [expected conditions](https://selenium-python.readthedocs.io/waits.html#explicit-waits). Then, we use the [find\_element](https://selenium-python.readthedocs.io/locating-elements.html) method to locate the elements and interact with them.
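
Conceptually, `WebDriverWait` just polls a condition until it returns a truthy value or the timeout expires. Here is a simplified sketch of that loop (a hypothetical helper, not Selenium's actual implementation):

```python
import time

def wait_until(condition, timeout=5.0, poll_interval=0.05):
    """Poll `condition` until it returns a truthy value or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError("condition not met within timeout")

# Example: a condition that becomes truthy on the third poll,
# standing in for element_to_be_clickable.
state = {"polls": 0}

def element_ready():
    state["polls"] += 1
    return "element" if state["polls"] >= 3 else None

print(wait_until(element_ready, timeout=1.0, poll_interval=0.01))  # element
```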

For further details on using Selenium for web scraping, refer to our dedicated guide.

[Web Scraping with Selenium and PythonIntroduction to web scraping dynamic javascript powered websites and web apps using Selenium browser automation library and Python.](https://scrapfly.io/blog/posts/web-scraping-with-selenium-and-python)



## Headless Firefox With Playwright

Let's explore headless Firefox scraping with [Playwright](https://playwright.dev/docs/intro), a popular web browser automation tool with straightforward APIs.

We'll cover using Playwright headless Firefox in both Python and Node.js APIs. First, install Playwright using the following command:

**Python:**

```shell
pip install playwright
```

**Node.js:**

```shell
npm install playwright
```







Next, install the Firefox browser binaries using the following command:

**Python:**

```shell
playwright install firefox
```

**Node.js:**

```shell
npx playwright install firefox
```







To start headless Firefox with Playwright, we have to explicitly select the browser type:

**Python:**

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as playwright:
    # launch playwright firefox browser
    browser = playwright.firefox.launch(headless=True)

    # new browser session with the default settings
    context = browser.new_context()

    # new browser tab
    page = context.new_page()

    # request the target page url
    page.goto("https://httpbin.dev/user-agent")

    # get the page HTML
    print(page.content())
    # "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:123.0) Gecko/20100101 Firefox/123.0"
```

**Node.js:**

```javascript
const { firefox } = require('playwright');

(async () => {
    // launch playwright firefox browser
    const browser = await firefox.launch({ headless: true });

    // new browser session with the default settings
    const context = await browser.newContext();

    // new browser tab
    const page = await context.newPage();

    // request the target page url
    await page.goto('https://httpbin.dev/user-agent');

    // get the page HTML
    const content = await page.content();
    console.log(content);
    // "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:124.0) Gecko/20100101 Firefox/124.0"

    // close the browser
    await browser.close();
})();
```







Here, we start a Playwright headless Firefox browser and create a new [browser context](https://playwright.dev/docs/api/class-browsercontext) with the default settings, such as request headers and localization. See our guides on [how headers are used to block web scrapers](https://scrapfly.io/blog/posts/how-to-avoid-web-scraping-blocking-headers) and [how to scrape in another language, currency, or location](https://scrapfly.io/blog/posts/how-to-scrape-in-another-language-or-currency) for details. Then, we open a [Playwright page](https://playwright.dev/docs/pages) and request the target page URL.

The above code runs the browser instance in the headless mode. To use the headful mode, we can disable the `headless` option and define the browser viewport:

**Python:**

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as playwright:
    # disable the headless mode
    browser = playwright.firefox.launch(headless=False)

    # define the browser viewport
    context = browser.new_context(
        viewport={"width": 1280, "height": 1024}
    )
```

**Node.js:**

```javascript
const { firefox } = require('playwright');

(async () => {
    // disable the headless mode
    const browser = await firefox.launch({ headless: false });

    // define the browser viewport
    const context = await browser.newContext({
        viewport: { width: 1920, height: 1080 }
    });

    // ...
})();
```
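
Beyond the viewport, `new_context` accepts other session-level settings useful for scraping, such as the user agent, locale, and timezone. For example (a fragment assuming the `browser` object from the launch code above; the values are illustrative, not recommendations):

```python
# fragment: assumes `browser` was created with playwright.firefox.launch()
context = browser.new_context(
    # identify as a specific Firefox build (illustrative value)
    user_agent="Mozilla/5.0 (X11; Linux x86_64; rv:124.0) Gecko/20100101 Firefox/124.0",
    locale="de-DE",               # affects Accept-Language and locale-aware APIs
    timezone_id="Europe/Berlin",  # timezone reported to JavaScript
    viewport={"width": 1280, "height": 720},
)
```

All four options are part of Playwright's `new_context` API and apply to every page opened in the context.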







Next, let's explore automating the Playwright Firefox browser for scraping.



### Basic Playwright Firefox Navigation

Let's automate the previous `web-scraping.dev/login` example using Playwright:

**Python:**

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as playwright:
    browser = playwright.firefox.launch(headless=True)
    context = browser.new_context()
    page = context.new_page()

    # request the target web page
    page.goto("https://web-scraping.dev/login?cookies=")

    # accept the cookie policy
    page.click("button#cookie-ok")

    # wait for the login form
    page.wait_for_selector("button[type='submit']")

    # wait for the page to fully load
    page.wait_for_load_state("networkidle")

    # fill in the login credentials
    page.fill("input[name='username']", "user123")
    page.fill("input[name='password']", "password")

    # click the login submit button
    page.click("button[type='submit']")

    # wait for an element on the login redirected page
    page.wait_for_selector("div#secret-message")

    secret_message = page.inner_text("div#secret-message")
    print(f"The secret message is {secret_message}")
    # "The secret message is 🤫"
```

**Node.js:**

```javascript
const { firefox } = require('playwright');

(async () => {
  const browser = await firefox.launch({ headless: true });
  const context = await browser.newContext();
  const page = await context.newPage();

  // request the target web page
  await page.goto('https://web-scraping.dev/login?cookies=');

  // wait for the page to fully load
  await page.waitForLoadState('networkidle');

  // accept the cookie policy
  await page.click("button#cookie-ok");

  // wait for the login form
  await page.waitForSelector("button[type='submit']");

  // fill in the login credentials
  await page.fill("input[name='username']", "user123");
  await page.fill("input[name='password']", "password");

  // click the login submit button
  await page.click("button[type='submit']");

  // wait for an element on the login redirected page
  await page.waitForSelector("div#secret-message");

  const secretMessage = await page.innerText("div#secret-message");
  console.log(`The secret message is ${secretMessage}`);
  // "The secret message is 🤫"

  // close the browser
  await browser.close();
})();
```







Let's break down the above Playwright Firefox scraping code. We start by initiating a Firefox browser and navigating to the target page URL. Next, we use a combination of [Playwright page methods](https://playwright.dev/docs/api/class-page) to:

- Wait for specific selectors, as well as the load state.
- Select, fill, and click elements.

Check our dedicated guide for more details on web scraping with Playwright.

[Web Scraping with Playwright and PythonPlaywright is the new, big browser automation toolkit - can it be used for web scraping? In this introduction article, we'll take a look how can we use Playwright and Python to scrape dynamic websites.](https://scrapfly.io/blog/posts/web-scraping-with-playwright-and-python)



## Headless Firefox With Puppeteer

Finally, let's explore using [Puppeteer](https://www.npmjs.com/package/puppeteer) for headless Firefox. First, install the Puppeteer package using `npm`:

```shell
npm install puppeteer
```



Next, install the Firefox browser binaries:

```shell
npx puppeteer browsers install firefox
```



To use headless Firefox with Puppeteer, we can specify `firefox` as the product:

```javascript
const puppeteer = require('puppeteer');

(async () => {
    // launch the puppeteer browser 
    const browser = await puppeteer.launch({
        // use firefox as the browser name
        product: 'firefox',
        // run in the headless mode
        headless: true
    })

    // start a browser page
    const page = await browser.newPage();

    // goto the target web page
    await page.goto('https://httpbin.dev/user-agent');

    // get the page HTML
    console.log(await page.content());
    // "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:126.0) Gecko/20100101 Firefox/126.0"

    // close the browser
    await browser.close();
})();
```



The above code runs Firefox in headless mode. To run it in headful mode, we can disable the `headless` parameter and define the browser viewport:

```javascript
const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch({
        product: 'firefox',
        headless: false
    })
    const page = await browser.newPage();
    await page.setViewport({width: 1920, height: 1080});
})();
```



Next, let's explore headless Firefox scraping with Puppeteer through our previous example.



### Basic Puppeteer Firefox Navigation

Here's how we can wait, click, and fill elements with Puppeteer:

```javascript
const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch({
        product: 'firefox',
        headless: true
    })

    // create a browser page
    const page = await browser.newPage();

    // go to the target web page
    await page.goto(
        'https://web-scraping.dev/login?cookies=',
        { waitUntil: 'domcontentloaded' }
    );

    // wait for 500 milliseconds        
    await new Promise(resolve => setTimeout(resolve, 500));

    // accept the cookie policy
    await page.click('button#cookie-ok')    

    // wait for the login form
    await page.waitForSelector('button[type="submit"]')

    // fill in the login credentials
    await page.$eval('input[name="username"]', (el, value) => el.value = value, 'user123');
    await page.$eval('input[name="password"]', (el, value) => el.value = value, 'password');    
    await new Promise(resolve => setTimeout(resolve, 500));

    // click the login button and wait for navigation
    await page.click('button[type="submit"]');
    await page.waitForSelector('div#secret-message');    

    const secretMessage = await page.$eval('div#secret-message', node => node.innerHTML);
    console.log(`The secret message is ${secretMessage}`);

    // close the browser
    await browser.close();    
})();

```



Let's break down the above Puppeteer scraping execution flow. We start by launching a Puppeteer headless browser and requesting the target web page. Then, we click and fill in the required elements while using timeouts to wait for them and for the page to load. Finally, we use a CSS selector to parse the secret message element from the HTML.
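
Note that the fixed 500 ms sleeps above are a pragmatic shortcut; a condition-based wait is usually more reliable. Here is a minimal polling helper in the same spirit as `waitForSelector` (a hypothetical sketch, not part of Puppeteer's API):

```javascript
// Hypothetical helper: poll `condition` until truthy or until `timeout` ms pass.
async function waitFor(condition, timeout = 5000, interval = 50) {
    const deadline = Date.now() + timeout;
    while (Date.now() < deadline) {
        const result = await condition();
        if (result) return result;
        await new Promise(resolve => setTimeout(resolve, interval));
    }
    throw new Error('waitFor: condition not met within timeout');
}

// Example: a condition that becomes truthy on the third poll,
// standing in for "the login form is present".
(async () => {
    let polls = 0;
    const result = await waitFor(async () => {
        polls += 1;
        return polls >= 3 ? 'form ready' : null;
    }, 1000, 10);
    console.log(result);  // form ready
})();
```

In a real script, the condition would query the page, e.g. `() => page.$('button[type="submit"]')`.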

For more details on web scraping with Puppeteer, refer to our dedicated guide, as well as our [Getting started with Puppeteer Stealth](https://scrapfly.io/blog/answers/how-to-use-puppeteer-stealth-what-does-it-do) guide, which prevents Puppeteer scraper blocking.

[How to Web Scrape with Puppeteer and NodeJS in 2026Introduction to using Puppeteer in Nodejs for web scraping dynamic web pages and web apps. Tips and tricks, best practices and example project.](https://scrapfly.io/blog/posts/web-scraping-with-puppeteer-and-nodejs)



## FAQ

**How to block resources with Firefox headless browsers?**

Blocking headless browser resources can significantly increase web scraping speed. For full details, refer to our dedicated articles on blocking resources in each browser automation library: [How to block resources in Selenium and Python?](https://scrapfly.io/blog/answers/how-to-block-resources-in-selenium), [How to block resources in Playwright and Python?](https://scrapfly.io/blog/answers/how-to-block-resources-in-playwright), and [How to block resources in Puppeteer?](https://scrapfly.io/blog/answers/how-to-block-resources-in-puppeteer).
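
At its core, resource blocking is just a per-request predicate deciding whether to abort. A hedged sketch of such a predicate (the helper is ours; in Playwright you would call it from a `page.route` handler):

```python
from urllib.parse import urlparse

# resource extensions that are rarely needed when scraping HTML
BLOCKED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".webp", ".css", ".woff", ".woff2"}

def should_block(url: str) -> bool:
    """Return True if the request URL points at a blockable static asset."""
    path = urlparse(url).path.lower()
    return any(path.endswith(ext) for ext in BLOCKED_EXTENSIONS)

print(should_block("https://web-scraping.dev/assets/logo.png"))  # True
print(should_block("https://web-scraping.dev/login"))            # False

# In Playwright (Python), this predicate could drive a route handler:
#   page.route("**/*", lambda route: route.abort()
#              if should_block(route.request.url) else route.continue_())
```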







**How to scrape background requests with a Firefox headless browser?**

Inspecting background requests is natively supported in Playwright and Puppeteer; see [How to capture background requests and responses in Playwright?](https://scrapfly.io/blog/answers/how-to-capture-xhr-requests-playwright) and [How to capture background requests and responses in Puppeteer?](https://scrapfly.io/blog/answers/how-to-capture-xhr-requests-puppeteer). As for Selenium, it's available through [Selenium Wire Tutorial: Intercept Background Requests](https://scrapfly.io/blog/posts/how-to-intercept-background-requests-with-selenium-wire).







**Is headless Firefox better than headless Chrome for web scraping?**

It depends on the use case. Firefox has a smaller fingerprint surface, making it harder for some anti-bot systems to detect. However, Chrome has broader community support with stealth tools like [Undetected ChromeDriver](https://scrapfly.io/blog/posts/web-scraping-without-blocking-using-undetected-chromedriver) and [Puppeteer Stealth](https://scrapfly.io/blog/posts/puppeteer-stealth-complete-guide).









## Summary

In this guide, we walked step by step through scraping with headless Firefox in Selenium, Playwright, and Puppeteer.

Furthermore, we have explored common browser navigation mechanisms to perform web scraping with Firefox:

- Waiting for load states, page navigation, and selectors.
- Selecting elements, clicking buttons, and filling out forms.



 
