Using Selenium With Python For Web Scraping



Selenium is a widely used tool for web automation. It comes in handy for automating website tests or helping with web scraping, especially for sites that require JavaScript to be executed. In this article, I will show you how to get up to speed with Selenium using Python.

What is Selenium?

Selenium's mission is simple: to automate web browsers. If you need to execute the same task on a website over and over, it can be automated with Selenium. This is especially the case when you carry out routine web administration tasks, but also when you need to test a website. You can automate it all with Selenium.

With this simple goal, Selenium can be used for many different purposes, for instance web scraping. Many websites run client-side scripts to present data asynchronously. This can cause issues when you are trying to scrape sites where the data you need is rendered through JavaScript. Selenium comes to the rescue here by automating the browser to visit the site and run the client-side scripts, giving you the fully rendered HTML. If you simply used the Python requests package to get the HTML from a site that runs client-side code, the rendered HTML would not be complete.

There are many other use cases for Selenium. In the meantime, let's get to using Selenium with Python.

Installing Selenium for Python

Why is Python used for web scraping? Python has become the most popular language for web scraping for a number of reasons: its flexibility, ease of coding, dynamic typing, large collection of libraries for manipulating data, and support for the most common scraping tools, such as Scrapy, Beautiful Soup, and Selenium. To align on terms: web scraping, also known as web harvesting or web data extraction, is data scraping used to extract data from websites. A web scraping script may access the URL directly using HTTP requests or by simulating a web browser. The second approach is exactly how Selenium works: it simulates a web browser. Web scraping can sometimes be difficult because of the strict policies some websites institute against it.

Before you begin, you need to download the driver for your particular browser. This article uses Chrome, so head to the official ChromeDriver downloads page and grab the driver that matches your installed Chrome version.

The next step is to install the necessary Selenium Python package in your environment. It can be done using the following pip command:
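pip install selenium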

Selenium 101

To begin using Selenium, you need to instantiate a Selenium WebDriver. This class will then control the web browser, and you can take various actions as if you were the one navigating the browser, such as going to a URL or clicking a button. Let's see how to do that using Python.

First, import the necessary modules and instantiate a selenium webdriver. You need to provide the path to the chromedriver.exe you downloaded earlier.
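A minimal sketch (the chromedriver path below is a placeholder; point it to wherever you saved the driver you downloaded):

from selenium import webdriver

# Path to the chromedriver executable (placeholder, adjust to your setup)
chrome_driver_path = r'C:\path\to\chromedriver.exe'
driver = webdriver.Chrome(executable_path=chrome_driver_path)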

After executing the command, a new browser window will open up specifying that it is being controlled by automated testing software.

In some cases, Chrome opens with an error message and you need to disable extensions to get rid of it. To pass options to Chrome when starting it, use the following code.
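A sketch of passing startup options, here disabling extensions (reusing the chrome_driver_path variable from above):

from selenium.webdriver.chrome.options import Options

chrome_options = Options()
# Disable extensions to avoid the error message on startup
chrome_options.add_argument('--disable-extensions')
driver = webdriver.Chrome(options=chrome_options, executable_path=chrome_driver_path)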

Now, let's navigate to a specific URL, in our case Google's homepage, by executing the get function.
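driver.get('https://www.google.com')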

Locate, Enter a Value to TextBox

What do you do on Google? You search! Let's use Selenium to perform an automated search on Google. First, you need to learn how to locate items.

Selenium provides many options to do so. You can find web elements by ID, name, text, and many others; the Selenium documentation has the full list.

We will be locating the textbox by name. Google’s input textbox has a name of q. Let’s find this element with Selenium.
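For example, using the pre-Selenium-4 locator helpers this article relies on (search_box is just an illustrative variable name):

# Locate Google's search input field by its name attribute
search_box = driver.find_element_by_name('q')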

Once this element is found, enter your search query into it by executing the following method.
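A sketch with a placeholder query (substitute whatever you want to search for):

search_box.send_keys('selenium python web scraping')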

Lastly, send an “Enter” command as you would from your keyboard.
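Using the Keys helper:

from selenium.webdriver.common.keys import Keys

search_box.send_keys(Keys.ENTER)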

Wait for an Element to Load

As mentioned earlier, the page you are browsing to often doesn't load completely at first; instead, it executes client-side code that takes longer to finish, and you need to wait for those elements to load before continuing. Selenium provides functionality to achieve this through the WebDriverWait class. Let's see how to do this.


TipRanks.com is a site that lets you see the track record and measured performance of any analyst or blogger you come across. We will browse to Apple's analysis page, which runs JavaScript upon access to generate its charts. Our code will wait until these are generated before continuing.

First, we need to import additional modules for our sample, such as By, expected_conditions, and the WebDriverWait class. Expected conditions provide common checks that are frequently used when automating web browsers, for example detecting the visibility of elements.

After accessing the page, we will wait for a maximum of 10 seconds until a specific CSS class becomes visible. We are looking for span.fs-13, which only becomes visible once the charts finish loading.
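A sketch of that wait (the exact TipRanks URL for Apple's analysis page is an assumption based on the description above):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver.get('https://www.tipranks.com/stocks/aapl/stock-analysis')
# Wait up to 10 seconds for the chart element to become visible
element = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.CSS_SELECTOR, 'span.fs-13'))
)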

Get Page HTML

Once the driver has loaded a page and it has rendered completely, either by waiting for elements to load or just by navigating to it, you can extract the page's rendered HTML quite easily with Selenium. It can then be processed with BeautifulSoup or other packages to extract information.

Run the following command to get the page HTML.
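For example (handing the result to BeautifulSoup, as mentioned above):

from bs4 import BeautifulSoup

html = driver.page_source
soup = BeautifulSoup(html, 'html.parser')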

Conclusion

Selenium makes web automation very easy, allowing you to perform advanced tasks by automating your web browser. We learned how to get Selenium ready to use with Python and covered its most important tasks, such as navigating to a site, locating elements, entering information, and waiting for items to load. Hope this article was helpful, and stay tuned for more!

The Internet evolves fast, and modern websites often use dynamic content loading mechanisms to provide the best user experience. On the other hand, this makes it harder to extract data from such web pages, as it requires executing the internal JavaScript in the page context while scraping. Let's review several conventional techniques that allow data extraction from dynamic websites using Python.

What is a dynamic website?

A dynamic website is a type of website that can update or load content after the initial HTML load. The browser receives basic HTML with JavaScript and then loads the content using the received JavaScript code. Such an approach increases page load speed and prevents reloading the same layout each time you open a new page.

Usually, dynamic websites use AJAX to load content dynamically, or even the whole site is based on a Single-Page Application (SPA) technology.

In contrast to dynamic websites, we can observe static websites containing all the requested content on the page load.

A great example of a static website is example.com:

The whole content of this website is loaded as plain HTML during the initial page load.

To demonstrate the basic idea of a dynamic website, we can create a web page that contains dynamically rendered text. It will not include any request to get information, just a render of different HTML after the page load:

<html>
<head>
    <script>
        window.addEventListener('DOMContentLoaded', function () {
            document.getElementById('test').innerHTML = 'I ❤️ ScrapingAnt';
        });
    </script>
</head>
<body>
    <div id="test">Web Scraping is hard</div>
</body>
</html>

All we have here is an HTML file with a single <div> in the body that contains the text "Web Scraping is hard", but after the page load, that text is replaced with the text generated by the JavaScript:

window.addEventListener('DOMContentLoaded', function () {
    document.getElementById('test').innerHTML = 'I ❤️ ScrapingAnt';
});

To prove this, let's open this page in the browser and observe a dynamically replaced text:

Alright, so the browser displays the text, and HTML tags wrap it.
Can't we use BeautifulSoup or LXML to parse it? Let's find out.

Extract data from a dynamic web page

BeautifulSoup is one of the most popular Python libraries across the Internet for HTML parsing. Almost 80% of web scraping Python tutorials use this library to extract required content from the HTML.

Let's use BeautifulSoup for extracting the text inside <div> from our sample above.

import os
from bs4 import BeautifulSoup

test_file = open(os.getcwd() + '/test.html')
soup = BeautifulSoup(test_file)
print(soup.find(id='test').get_text())

This code snippet uses the os library to open our test HTML file (test.html) from the local directory and creates an instance of BeautifulSoup stored in the soup variable. Using the soup, we find the tag with id test and extract the text from it.

In the screenshot from the first part of the article, we've seen that the content of the test page is I ❤️ ScrapingAnt, but the code snippet output is the following:
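Web Scraping is hard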

And the result is different from our expectation (unless you've already figured out what is going on there). Everything is correct from the BeautifulSoup perspective: it parsed the data from the provided HTML file, but we want to get the same result as the browser renders. The reason is the dynamic JavaScript, which was not executed during HTML parsing.

We need the HTML to be run in a browser to see the correct values and then be able to capture those values programmatically.

Below you can find four different ways to execute a dynamic website's JavaScript and provide valid data for an HTML parser: Selenium, Pyppeteer, Playwright, and a web scraping API.

Selenium: web scraping with a webdriver

Selenium is one of the most popular web browser automation tools for Python. It allows communication with different web browsers by using a special connector - a webdriver.

To use Selenium with Chrome/Chromium, we'll need to download the webdriver from the repository and place it into the project folder. Don't forget to install Selenium itself by executing:
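pip3 install selenium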

The Selenium instantiation and scraping flow is the following:

  • define and set up the Chrome path variable
  • define and set up the Chrome webdriver path variable
  • define browser launch arguments (to use headless mode, proxy, etc.)
  • instantiate a webdriver with defined above options
  • load a webpage via instantiated webdriver

From the code perspective, it looks like the following:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup
import os

opts = Options()
# opts.add_argument('--headless')  # Uncomment if the headless version is needed
opts.binary_location = '<path to Chrome executable>'
# Set the location of the webdriver
chrome_driver = os.getcwd() + '/<Chrome webdriver filename>'
# Instantiate a webdriver
driver = webdriver.Chrome(options=opts, executable_path=chrome_driver)
# Load the local test HTML page
driver.get('file://' + os.getcwd() + '/test.html')
# Parse the rendered page with BeautifulSoup and print the target text
soup = BeautifulSoup(driver.page_source)
print(soup.find(id='test').get_text())

And finally, we'll receive the required result:
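I ❤️ ScrapingAnt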

Selenium usage for dynamic website scraping with Python is not complicated and allows you to choose a specific browser and version, but it consists of several moving components that have to be maintained. The code itself also contains some boilerplate parts, like the setup of the browser, the webdriver, etc.

I like to use Selenium for my web scraping projects, but below you can find easier ways to extract data from dynamic web pages.

Pyppeteer: Python headless Chrome

Pyppeteer is an unofficial Python port of Puppeteer, the JavaScript (headless) Chrome/Chromium browser automation library. It is capable of doing mostly the same things Puppeteer can, but using Python instead of NodeJS.

Puppeteer is a high-level API to control headless Chrome, so it allows you to automate actions you would otherwise do manually in the browser: copy a page's text, download images, save a page as HTML or PDF, etc.

To install Pyppeteer you can execute the following command:
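pip3 install pyppeteer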

The usage of Pyppeteer for our needs is much simpler than Selenium:

import asyncio
import os
from bs4 import BeautifulSoup
from pyppeteer import launch

async def main():
    # Launch the browser and open a new page
    browser = await launch()
    page = await browser.newPage()
    # Create a URI for our test file
    page_path = 'file://' + os.getcwd() + '/test.html'
    await page.goto(page_path)
    # Extract the rendered HTML for BeautifulSoup processing
    page_content = await page.content()
    soup = BeautifulSoup(page_content)
    print(soup.find(id='test').get_text())
    await browser.close()

asyncio.get_event_loop().run_until_complete(main())

I've tried to comment on every atomic part of the code for a better understanding. However, generally, we've just opened a browser page, loaded a local HTML file into it, and extracted the final rendered HTML for further BeautifulSoup processing.

As we can expect, the result is the following:
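I ❤️ ScrapingAnt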

We did it again, without worrying about finding, downloading, and connecting a webdriver to a browser. However, Pyppeteer looks abandoned and not properly maintained. This situation may change in the near future, but I'd suggest looking at a more powerful library.

Playwright: Chromium, Firefox and Webkit browser automation

Playwright can be considered an extended Puppeteer, as it allows using more browser types (Chromium, Firefox, and Webkit) to automate modern web app testing and scraping. You can use the Playwright API in JavaScript & TypeScript, Python, C#, and Java. And it's excellent, as the original Playwright maintainers support the Python version.

The API is almost the same as Pyppeteer's, but it has both sync and async versions.

Installation is as simple as always:

pip install playwright
playwright install

Let's rewrite the previous example using Playwright.

import os
from bs4 import BeautifulSoup
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    # Open a new browser page
    page = browser.new_page()
    # Create a URI for our test file
    page_path = 'file://' + os.getcwd() + '/test.html'
    # Open our test file in the opened page
    page.goto(page_path)
    # Process extracted content with BeautifulSoup
    page_content = page.content()
    soup = BeautifulSoup(page_content)
    print(soup.find(id='test').get_text())
    # Close browser
    browser.close()
As a good tradition, we can observe our beloved output:
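I ❤️ ScrapingAnt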

We've gone through several different data extraction methods with Python, but is there any more straightforward way to implement this job? How can we scale our solution and scrape data with several threads?

Meet the web scraping API!

Web Scraping API

The ScrapingAnt web scraping API provides the ability to scrape dynamic websites with only a single API call. It already handles headless Chrome and rotating proxies, so the response it provides already consists of JavaScript-rendered content. ScrapingAnt's proxy pool prevents blocking and provides a constant and high data extraction success rate.


Usage of web scraping API is the simplest option and requires only basic programming skills.

You do not need to maintain the browser, libraries, proxies, webdrivers, or any other aspect of a web scraper, and you can focus on the most exciting part of the work: data analysis.


As the web scraping API runs on the cloud servers, we have to serve our file somewhere to test it. I've created a repository with a single file: https://github.com/kami4ka/dynamic-website-example/blob/main/index.html

To check it out as HTML, we can use another great tool: HTMLPreview

The final test URL for scraping dynamic web data looks like this: http://htmlpreview.github.io/?https://github.com/kami4ka/dynamic-website-example/blob/main/index.html

The scraping code itself is the simplest of all four described libraries. We'll use the ScrapingAnt client library to access the web scraping API.

Let's install it first:
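pip install scrapingant-client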

And use the installed library:

from scrapingant_client import ScrapingAntClient
from bs4 import BeautifulSoup

# Define URL with a dynamic web content
url = 'http://htmlpreview.github.io/?https://github.com/kami4ka/dynamic-website-example/blob/main/index.html'

# Create a ScrapingAntClient instance
client = ScrapingAntClient(token='<YOUR-SCRAPINGANT-API-TOKEN>')

# Get the HTML page rendered content
page_content = client.general_request(url).content

# Parse content with BeautifulSoup
soup = BeautifulSoup(page_content)
print(soup.find(id='test').get_text())

To get your API token, please visit the Login page to authorize in the ScrapingAnt user panel. It's free.

And the result is still the required one.

All the headless browser magic happens in the cloud, so you only need to make an API call to get the result.

Check out the documentation for more info about ScrapingAnt API.

Summary

Today we've checked four free tools that allow scraping dynamic websites with Python. All these libraries use a headless browser (or an API with a headless browser) under the hood to correctly render the internal JavaScript inside an HTML page. Below you can find links to more information about those tools so you can choose the handiest one:

Happy web scraping, and don't forget to use proxies to avoid blocking 🚀




