Selenium: geckodriver executable needs to be in PATH?

by scrapecrow Dec 19, 2022

Selenium is a popular web browser automation library used for web scraping. To control a browser, however, Selenium needs a browser-specific executable called a driver. For example, to drive the Firefox web browser Selenium needs geckodriver to be installed. Without it, a generic exception is raised:

selenium.common.exceptions.WebDriverException: Message: 'geckodriver' executable needs to be in PATH.

This can also mean that geckodriver is installed but Selenium can't find it. To fix this, add the directory containing geckodriver to the PATH environment variable:

$ export PATH=$PATH:/location/where/geckodriver/is/
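To confirm whether the current environment can actually see the driver, we can sketch a quick lookup with Python's standard shutil.which, which scans PATH the same way the shell does (an illustrative check, not Selenium's exact lookup logic):

```python
import shutil

# Scan the directories in PATH for an executable named "geckodriver".
# Returns the full path to the executable, or None if it isn't found.
driver_path = shutil.which("geckodriver")

if driver_path:
    print(f"geckodriver found at: {driver_path}")
else:
    print("geckodriver is not on PATH - install it or extend PATH")
```

If this prints the "not on PATH" message, the export command above (run in the same shell session that launches the script) should resolve the Selenium exception.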

Alternatively, we can specify the driver location directly in the Selenium initiation code. Note that in Selenium 4 the executable_path argument of webdriver.Firefox() is deprecated in favor of a Service object:

from selenium import webdriver
from selenium.webdriver.firefox.service import Service

# Selenium 4: pass the driver location through a Service object
service = Service(executable_path=r'your\path\geckodriver.exe')
driver = webdriver.Firefox(service=service)
driver.get('https://scrapfly.io/')

Related Articles

How to Scrape With Headless Firefox

Discover how to use headless Firefox with Selenium, Playwright, and Puppeteer for web scraping, including practical examples for each library.

Selenium Wire Tutorial: Intercept Background Requests

In this guide, we'll explore web scraping with Selenium Wire. We'll define what it is, how to install it, and how to use it to inspect and manipulate background requests.

Web Scraping Dynamic Web Pages With Scrapy Selenium

Learn how to scrape dynamic web pages with Scrapy Selenium. You will also learn how to use Scrapy Selenium for common scraping use cases, such as waiting for elements, clicking buttons and scrolling.

How to use Headless Chrome Extensions for Web Scraping

In this article, we'll explore different useful Chrome extensions for web scraping. We'll also explain how to install Chrome extensions with various headless browser libraries, such as Selenium, Playwright and Puppeteer.

Intro to Web Scraping Using Selenium Grid

In this guide, you will learn about installing and configuring Selenium Grid with Docker and how to use it for web scraping at scale.

How to Scrape Google Maps

We'll take a look at how to find businesses through Google Maps' search system and how to scrape their details using either Selenium, Playwright or ScrapFly's javascript rendering feature - all of that in Python.
