Selenium Knowledgebase

To handle modal alerts like cookie popups in Selenium, we can either find the button and click it or remove the modal elements from the DOM. Here's how.
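
A minimal sketch of both approaches, assuming a hypothetical accept-button selector "button#accept-cookies" and modal class ".cookie-modal":

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

# Option 1: find the accept button and click it
driver.find_element(By.CSS_SELECTOR, "button#accept-cookies").click()

# Option 2: remove the modal element from the DOM entirely
driver.execute_script("document.querySelector('.cookie-modal')?.remove();")
```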

To handle a pop-up alert in Selenium, the alert_is_present expected condition can be used to wait for and interact with alerts. Here's how.
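
A minimal sketch using alert_is_present, which returns the alert object once it appears:

```python
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

# wait up to 10 seconds for a JavaScript alert to appear
alert = WebDriverWait(driver, 10).until(EC.alert_is_present())
print(alert.text)  # read the alert message
alert.accept()     # or alert.dismiss() to cancel it
```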

To block HTTP resources in Selenium we need an external proxy. Here's how to set up mitmproxy to block requests and responses in Selenium.
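
A minimal sketch of a mitmproxy addon that blocks requests to some illustrative ad domains (run it with `mitmdump -s block.py`, then point Selenium at the proxy):

```python
# block.py - mitmproxy addon; the blocked domains are examples only
from mitmproxy import http

BLOCKED_DOMAINS = ["doubleclick.net", "googletagmanager.com"]

def request(flow: http.HTTPFlow) -> None:
    # short-circuit matching requests with an empty 403 response
    if any(domain in flow.request.pretty_host for domain in BLOCKED_DOMAINS):
        flow.response = http.Response.make(403, b"", {})
```

Then launch Chrome through the proxy:

```python
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--proxy-server=http://127.0.0.1:8080")  # mitmproxy's default port
driver = webdriver.Chrome(options=options)
```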

To scroll to the very bottom of the page, JavaScript evaluation can be used within a while loop. Here's how.
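
A minimal sketch of the scroll loop, polling the document height until it stops growing:

```python
import time
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)  # give dynamically loaded content time to appear
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break  # height stopped growing - we've reached the bottom
    last_height = new_height
```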

To capture background requests and responses, Selenium needs to be extended with Selenium Wire. Here's how to do it.
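
A minimal sketch with Selenium Wire (pip install selenium-wire), which records every request the browser makes in driver.requests:

```python
from seleniumwire import webdriver  # note: seleniumwire, not selenium

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

# iterate over captured background requests and their responses
for request in driver.requests:
    if request.response:
        print(request.url, request.response.status_code)
```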

The Selenium error "chromedriver executable needs to be in PATH" means that ChromeDriver is not installed or not reachable through the PATH environment variable - here's how to fix it.
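
One common fix is the webdriver-manager package (pip install webdriver-manager), which downloads a matching driver automatically; note that Selenium 4.6+ can also resolve drivers on its own via Selenium Manager:

```python
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

# downloads a chromedriver matching the installed Chrome and returns its path
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
```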

The Selenium error "geckodriver executable needs to be in PATH" means that geckodriver is not installed or not reachable through the PATH environment variable - here's how to fix it.
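
The same webdriver-manager approach works for Firefox:

```python
from selenium import webdriver
from selenium.webdriver.firefox.service import Service
from webdriver_manager.firefox import GeckoDriverManager

# downloads a matching geckodriver and returns its path
driver = webdriver.Firefox(service=Service(GeckoDriverManager().install()))
```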

In Selenium, the scrollIntoView JavaScript function can be used to scroll a specific HTML element into view. Here's how to use it.
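
A minimal sketch, passing the element into execute_script as an argument (the "footer" selector is just an example target):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

element = driver.find_element(By.CSS_SELECTOR, "footer")
# arguments[0] refers to the element passed in after the script string
driver.execute_script("arguments[0].scrollIntoView({block: 'center'});", element)
```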

To increase Selenium's performance we can block images. To do that with the Chrome browser, the "prefs" launch option can be used. Here's how.
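
A minimal sketch using Chrome's content settings, where the value 2 means "block":

```python
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option(
    "prefs", {"profile.managed_default_content_settings.images": 2}
)
driver = webdriver.Chrome(options=options)  # pages now load without images
```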

To save and load cookies of a Selenium browser we can use the driver.get_cookies() and driver.add_cookie() methods. Here's how to use them.
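
A minimal sketch that persists cookies to a JSON file; note that add_cookie() takes one cookie dict at a time and only works for the domain currently loaded:

```python
import json
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

# save the current session's cookies
with open("cookies.json", "w") as f:
    json.dump(driver.get_cookies(), f)

# later, in a fresh session on the same domain, restore them
with open("cookies.json") as f:
    for cookie in json.load(f):
        driver.add_cookie(cookie)
driver.refresh()  # reload so the restored cookies take effect
```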

To select HTML elements by CSS selectors in Selenium, the driver.find_element() method can be used with the By.CSS_SELECTOR option. Here's how to do it.
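
A minimal sketch; find_element() returns the first match, find_elements() returns all of them:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

title = driver.find_element(By.CSS_SELECTOR, "h1")        # first match
links = driver.find_elements(By.CSS_SELECTOR, "a[href]")  # all matches
print(title.text, len(links))
```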

To select HTML elements by XPath in Selenium, the driver.find_element() method can be used with the By.XPATH option. Here's how to do it.
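
A minimal sketch using XPath expressions:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

heading = driver.find_element(By.XPATH, "//h1")  # first <h1> on the page
# XPath can also match on attributes or text content
links = driver.find_elements(By.XPATH, "//a[contains(@href, 'http')]")
print(heading.text, len(links))
```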

To take a web page screenshot using Selenium, the driver.save_screenshot() method can be used, or element.screenshot() for a specific element. Here's how to do it.
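
A minimal sketch of both methods:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

driver.save_screenshot("page.png")  # captures the visible viewport

element = driver.find_element(By.CSS_SELECTOR, "h1")  # example target
element.screenshot("heading.png")   # captures just that element
```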

To get the full web page source in Selenium, the driver.page_source property can be used. Here's how to do it in Python and Selenium.
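
A minimal sketch; page_source returns the HTML of the rendered DOM, after any JavaScript changes:

```python
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

html = driver.page_source  # full rendered HTML as a string
print(html[:200])
```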

To wait for a specific HTML element to load in Selenium, the WebDriverWait() object can be used with the presence_of_element_located expected condition. Here's how to do it.
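
A minimal sketch; until() polls the condition and returns the element once it is present in the DOM (the "div.content" selector is just an example):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "div.content"))
)
print(element.text)
```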


Provided by Scrapfly

This knowledgebase is provided by Scrapfly data APIs, check us out! 👇

Related Blog Posts

Guide to SeleniumBase — A Better & Easier Selenium

SeleniumBase streamlines browser automation with simple syntax, cross-browser support, and robust features, perfect for testing and web scraping.

Playwright vs Selenium

Explore the key differences between Playwright vs Selenium in terms of performance, web scraping, and automation testing for modern web applications.

What is a Headless Browser? Top 5 Headless Browser Tools

A quick overview of the emerging tech of browser automation - what exactly are these tools and how are they used in web scraping?

How to Scrape With Headless Firefox

Discover how to use headless Firefox with Selenium, Playwright, and Puppeteer for web scraping, including practical examples for each library.

Selenium Wire Tutorial: Intercept Background Requests

In this guide, we'll explore web scraping with Selenium Wire. We'll define what it is, how to install it, and how to use it to inspect and manipulate background requests.

Web Scraping Dynamic Web Pages With Scrapy Selenium

Learn how to scrape dynamic web pages with Scrapy Selenium. You will also learn how to use Scrapy Selenium for common scraping use cases, such as waiting for elements, clicking buttons and scrolling.

How to Use Headless Browser Chrome Extensions for Web Scraping

In this article, we'll explore different useful Chrome extensions for web scraping. We'll also explain how to install Chrome extensions with various headless browser libraries, such as Selenium, Playwright and Puppeteer.

Intro to Web Scraping Using Selenium Grid

In this guide, you will learn about installing and configuring Selenium Grid with Docker and how to use it for web scraping at scale.

How to Scrape Google Maps

We'll take a look at how to find businesses through the Google Maps search system and how to scrape their details using either Selenium, Playwright or ScrapFly's JavaScript rendering feature - all of that in Python.

Web Scraping with Selenium and Python Tutorial + Example Project

Introduction to web scraping dynamic JavaScript-powered websites and web apps using the Selenium browser automation library and Python.

How to Scrape Dynamic Websites Using Headless Web Browsers

Introduction to using web automation tools such as Puppeteer, Playwright, Selenium and ScrapFly to render dynamic websites for web scraping.