Selenium Web Scraping



By Redouane Niboucha. 5 min read. Updated November 2020.

Imagine what you would do if you could automate all the repetitive and boring tasks you perform on the internet, like checking the first Google results for a given keyword every day, or downloading a bunch of files from different websites.

In this post you’ll learn to use Selenium with Python, a web scraping tool that simulates a user surfing the Internet. For example, you can use it to automatically run Google searches and read the results, log in to your social accounts, simulate a user to test your web application, and automate anything in your daily life that is repetitive. The possibilities are infinite! 🙂

*All the code in this post has been tested with Python 2.7 and Python 3.4.

Install and use Selenium

Selenium is a Python package that can be installed via pip. I recommend that you install it in a virtual environment (using virtualenv and virtualenvwrapper).
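
For example, with virtualenvwrapper already set up, creating and activating a dedicated environment could look like this (the environment name selenium-env is just an illustration):

mkvirtualenv selenium-env
workon selenium-env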

To install selenium, you just need to type:
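
pip install selenium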

In this post we are going to initialize a Firefox driver — you can install Firefox from Mozilla’s website. However, if you want to work with Chrome or IE, you can find more information in the Selenium documentation.
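
For instance, with Chrome the only change is the driver initialization. A minimal sketch, assuming the chromedriver executable is installed and on your PATH:

from selenium import webdriver

# same API as with Firefox; requires chromedriver to be reachable on the PATH
driver = webdriver.Chrome()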

Once you have Selenium and Firefox installed, create a Python file, selenium_script.py. We are going to initialize a browser using Selenium:


import time

from selenium import webdriver

# open a new Firefox window controlled by Selenium
driver = webdriver.Firefox()
# keep the browser open for 5 seconds
time.sleep(5)
# close the browser and end the session
driver.quit()

This just initializes a Firefox instance, waits for 5 seconds, and closes it.

Well, that was not very useful…

How about if we go to Google and search for something?

Web Scraping Google with Selenium

Let’s make a script that loads the main Google search page and makes a query to look for “Selenium”:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from selenium.common.exceptions import ElementNotVisibleException


def init_driver():
    # create a Firefox instance and attach an explicit 5-second wait to it
    driver = webdriver.Firefox()
    driver.wait = WebDriverWait(driver, 5)
    return driver


def lookup(driver, query):
    driver.get("http://www.google.com")
    try:
        # wait for the search box and the search button to be available
        box = driver.wait.until(EC.presence_of_element_located(
            (By.NAME, "q")))
        button = driver.wait.until(EC.element_to_be_clickable(
            (By.NAME, "btnK")))
        box.send_keys(query)
        try:
            button.click()
        except ElementNotVisibleException:
            # the visible search button has a different name, "btnG"
            button = driver.wait.until(EC.visibility_of_element_located(
                (By.NAME, "btnG")))
            button.click()
    except TimeoutException:
        print('Box or Button not found in google.com')
  • button.click(), the call that can raise the exception, is inside a try statement.
  • If the exception is raised, we look for the second button, using visibility_of_element_located to make sure the element is visible, and then click it.
  • If at any point an element is not found within the 5-second wait, a TimeoutException is raised and caught by the last two lines of code.
  • Note that the initial button name is “btnK” and the new one is “btnG”.
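
To try it out, one possible way to wire everything together, using the init_driver helper defined above, is a small main block like this:

import time

if __name__ == "__main__":
    driver = init_driver()
    lookup(driver, "Selenium")
    time.sleep(5)  # leave the browser open long enough to see the results page
    driver.quit()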

Method list in Selenium


To sum up, I’ve created a table with the main methods used here.

Note: it’s not a Python file — don’t try to run/import it 🙂
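
As a quick recap compiled from the snippets above (url, locator, condition, element and text are placeholders, not variables from the scripts):

webdriver.Firefox()                          # open a Firefox window controlled by Selenium
WebDriverWait(driver, 5)                     # explicit wait of up to 5 seconds
driver.get(url)                              # load a page
EC.presence_of_element_located(locator)      # condition: the element exists in the DOM
EC.element_to_be_clickable(locator)          # condition: the element can be clicked
EC.visibility_of_element_located(locator)    # condition: the element is visible
driver.wait.until(condition)                 # wait until the condition holds, or raise TimeoutException
element.send_keys(text)                      # type text into an element
element.click()                              # click an element
driver.quit()                                # close the browser and end the session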