Python is a high-level programming language that is used for web development, mobile application development, and also for scraping the web.
Python is considered one of the best programming languages for web scraping because it handles the entire crawling process smoothly. When you combine Python's capabilities with the anonymity of a web proxy, you can carry out all your scraping activities without the fear of an IP ban.
In this article, you will understand how proxies are used for web scraping with Python. But first, let's cover the basics.
What is web scraping?
Web scraping is the process of extracting data from websites. Generally, it is done either by sending HyperText Transfer Protocol (HTTP) requests directly or by driving a web browser.
Web scraping works by first crawling the URLs and then downloading the page data one by one. The extracted data is typically stored in a spreadsheet. Automating this copy-and-paste process saves an enormous amount of time, and you can extract data from thousands of URLs to stay ahead of your competitors.
Example of web scraping
An example of web scraping would be downloading a list of all pet owners in California. You could scrape a web directory that lists the names and email IDs of people in California who own a pet, using web scraping software to do the task for you. The software would crawl all the required URLs, extract the required data, and store it in a spreadsheet.
Why use a proxy for web scraping?
- A proxy lets you bypass content geo-restrictions because you can route your requests through a location of your choice.
- You can send a high number of connection requests without getting banned.
- It can increase the speed at which you request and copy data, because issues with your ISP throttling your connection are reduced.
- Your crawling program can run smoothly and download the data without the risk of being blocked.
Now that you understand the basics of web scraping and proxies, let's see how to perform web scraping through a proxy with the Python programming language.
Configure a proxy for web scraping with Python
Scraping with Python starts by sending an HTTP request. HTTP is based on a client/server model: your Python program (the client) sends a request to the server for the contents of a page, and the server returns a response.
The basic method of sending an HTTP request is to open a socket and send the request manually:
***start of code***
import socket

HOST = 'www.mysite.com'  # Server hostname or IP address
PORT = 80                # Port

client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = (HOST, PORT)
client_socket.connect(server_address)

request_header = b'GET / HTTP/1.0\r\nHost: www.mysite.com\r\n\r\n'
client_socket.sendall(request_header)

response = ''
while True:
    recv = client_socket.recv(1024)
    if not recv:
        break
    response += recv.decode(errors='replace')

print(response)
client_socket.close()
***end of code***
You can also send HTTP requests in Python using the built-in urllib module (urllib2 in Python 2). However, these modules aren't particularly convenient to use.
Hence, there is a third option: Requests, a simple HTTP library for Python.
You can easily configure proxies with Requests.
Here is the code to enable the use of proxy in Requests:
***start of code***
import requests
proxies = {
    "http": "http://10.XX.XX.10:8000",
    "https": "http://10.XX.XX.10:8000",
}
r = requests.get("http://toscrape.com", proxies=proxies)
***end of code***
In the proxies dictionary, you specify the proxy address and port for each protocol.
If you wish to include sessions and use a proxy at the same time, then you need to use the below code:
***start of code***
import requests
s = requests.Session()
s.proxies = {
    "http": "http://10.XX.XX.10:8000",
    "https": "http://10.XX.XX.10:8000",
}
r = s.get("http://toscrape.com")
***end of code***
However, using Requests alone can be slow because it scrapes just one URL per request. If you have to scrape 100 URLs, each request is sent only after the previous one has completed.
To solve this problem and speed things up, there is another package called grequests that allows you to send multiple requests at the same time. grequests is an asynchronous HTTP library built on top of gevent and Requests.
Here is code that shows how grequests works. We keep all the URLs to scrape in a list. Suppose we have to scrape 100 URLs: we keep them all in the list and process them with grequests in batches of 10, so each batch of 10 requests is sent concurrently instead of waiting for every request to finish before starting the next one.
***start of code***
import grequests

BATCH_LENGTH = 10
# A list holding the 100 URLs to scrape
urls = [...]
# Results will be collected in this list
results = []

while urls:
    # Take the next batch of 10 URLs
    batch = urls[:BATCH_LENGTH]
    # Create a set of unsent requests
    rs = (grequests.get(url) for url in batch)
    # Send the whole batch concurrently
    batch_results = grequests.map(rs)
    # Append the batch results to the main results list
    results += batch_results
    # Remove the fetched URLs from the list
    urls = urls[BATCH_LENGTH:]

print(results)
# [<Response [200]>, <Response [200]>, ..., <Response [200]>]
***end of code***
Final Thoughts
Web scraping is a necessity for several businesses, especially eCommerce websites. Real-time data needs to be captured from a variety of sources to make better business decisions at the right time. Python offers different frameworks and libraries that make web scraping easy. You can extract data fast and efficiently. Moreover, it is crucial to use a proxy to hide your machine’s IP address to avoid blacklisting. Python along with a secure proxy should be the base for successful web scraping.
In the last tutorial we learned how to leverage the Scrapy framework to solve common web scraping problems. Today we are going to take a look at Selenium (with Python ❤️) in a step-by-step tutorial.
Selenium refers to a number of different open-source projects used for browser automation. It supports bindings for all major programming languages, including our favorite language: Python.
The Selenium API uses the WebDriver protocol to control a web browser, like Chrome, Firefox or Safari. The browser can run either locally or remotely.
At the beginning of the project (almost 20 years ago!) it was mostly used for cross-browser, end-to-end testing (acceptance tests).
Now it is still used for testing, but also as a general browser automation platform. And of course, it is used for web scraping!
Selenium is useful when you have to perform an action on a website such as:
- Clicking on buttons
- Filling forms
- Scrolling
- Taking a screenshot
It is also useful for executing Javascript code. Let's say you want to scrape a Single Page Application and you haven't found an easy way to call the underlying APIs directly. In this case, Selenium might be what you need.
Installation
We will use Chrome in our example, so make sure you have it installed on your local machine, along with a matching Chromedriver binary and the selenium package.
To install the Selenium package, as always, I recommend that you create a virtual environment (for example using virtualenv) and then install the package with pip: pip install selenium
Quickstart
Once you have downloaded both Chrome and Chromedriver and installed the Selenium package, you should be ready to start the browser:
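Here is a minimal sketch, assuming the chromedriver executable is on your PATH (recent Selenium versions can also resolve it for you automatically):
***start of code***
from selenium import webdriver

# Assumes chromedriver is on your PATH (or resolved automatically
# by recent Selenium releases)
driver = webdriver.Chrome()
driver.get('https://news.ycombinator.com')
***end of code***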
This will launch Chrome in headful mode (like regular Chrome, which is controlled by your Python code). You should see a message stating that the browser is controlled by automated software.
To run Chrome in headless mode (without any graphical user interface), for example on a server, see the following example:
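A sketch of the headless setup (the Options API is standard Selenium; the exact headless flag has varied slightly across Chrome versions):
***start of code***
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument('--headless')               # no GUI
options.add_argument('--window-size=1920,1080')  # give pages a realistic viewport
driver = webdriver.Chrome(options=options)
driver.get('https://news.ycombinator.com')
print(driver.page_source[:200])  # first characters of the rendered HTML
driver.quit()
***end of code***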
The driver.page_source property will return the full HTML code of the page.
Here are two other interesting WebDriver properties:
- driver.title gets the page's title
- driver.current_url gets the current URL (this can be useful when there are redirections on the website and you need the final URL)
Locating Elements
Locating data on a website is one of the main use cases for Selenium, either for a test suite (making sure that a specific element is present/absent on the page) or to extract data and save it for further analysis (web scraping).
There are many methods available in the Selenium API to select elements on the page. You can use:
- Tag name
- Class name
- IDs
- XPath
- CSS selectors
We recently published an article explaining XPath. Don't hesitate to take a look if you aren't familiar with XPath.
As usual, the easiest way to locate an element is to open your Chrome dev tools and inspect the element that you need. A cool shortcut for this is to highlight the element you want with your mouse and then press Ctrl + Shift + C (Cmd + Shift + C on macOS) instead of having to right click + inspect each time.
find_element
There are many ways to locate an element in Selenium. Let's say that we want to locate the h1 tag on a page.
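As a sketch, suppose the page contains hypothetical markup like <h1 class="someclass" id="greatID">Super title</h1>. Any of the following locators would find it (shown in the find_element_by_* style used throughout this tutorial; Selenium 4 prefers driver.find_element(By.ID, ...)):
***start of code***
# Hypothetical markup: <h1 class="someclass" id="greatID">Super title</h1>
h1 = driver.find_element_by_tag_name('h1')
h1 = driver.find_element_by_class_name('someclass')
h1 = driver.find_element_by_id('greatID')
h1 = driver.find_element_by_xpath('//h1')
h1 = driver.find_element_by_css_selector('h1.someclass')

# Equivalent Selenium 4 syntax:
# from selenium.webdriver.common.by import By
# h1 = driver.find_element(By.ID, 'greatID')
***end of code***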
All of these methods also have a find_elements counterpart (note the plural) that returns a list of elements.
For example, to get all anchors on a page, use the following:
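For instance (a sketch using the same naming style):
***start of code***
# Returns a list of WebElements, one per <a> tag on the page
all_links = driver.find_elements_by_tag_name('a')
# Selenium 4 equivalent:
# all_links = driver.find_elements(By.TAG_NAME, 'a')
***end of code***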
Some elements aren't easily accessible with an ID or a simple class, and that's when you need an XPath expression. You also might have multiple elements with the same class (the ID is supposed to be unique).
XPath is my favorite way of locating elements on a web page. It's a powerful way to extract any element on a page, based on its absolute position in the DOM, or relative to another element.
WebElement
A WebElement is a Selenium object representing an HTML element.
There are many actions that you can perform on those HTML elements. Here are the most useful:
- Accessing the text of the element with the property element.text
- Clicking on the element with element.click()
- Accessing an attribute with element.get_attribute('class')
- Sending text to an input with element.send_keys('mypassword')
There are some other interesting methods, like is_displayed(), which returns True if an element is visible to the user. This can help you avoid honeypots (such as hidden inputs that bots fill in).
Honeypots are mechanisms used by website owners to detect bots. For example, an HTML input can carry the attribute type="hidden", like this (hypothetical markup): <input type="hidden" name="email_check" value="">
This input is supposed to stay blank. If a bot visits a page and fills all of the inputs on a form with random values, it will also fill the hidden input. A legitimate user would never fill the hidden input, because it is not rendered by the browser.
That's a classic honeypot.
Full example
Here is a full example using Selenium API methods we just covered.
We are going to log into Hacker News:
In our example, authenticating to Hacker News is not really useful on its own. However, you could imagine creating a bot to automatically post a link to your latest blog post.
In order to authenticate we need to:
- Go to the login page using driver.get()
- Select the username input using driver.find_element_by_* and then element.send_keys() to send text to the input
- Follow the same process with the password input
- Click on the login button using element.click()
Should be easy right? Let's see the code:
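Here is a sketch of that flow. The XPath locators and the placeholder credentials below are assumptions based on the Hacker News login form; inspect the page and adjust them if the markup has changed:
***start of code***
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://news.ycombinator.com/login')

# Locators are assumptions; confirm them in the dev tools
username_input = driver.find_element_by_xpath("//input[@type='text']")
password_input = driver.find_element_by_xpath("//input[@type='password']")
login_button = driver.find_element_by_xpath("//input[@value='login']")

username_input.send_keys('YOUR_USERNAME')   # placeholder credentials
password_input.send_keys('YOUR_PASSWORD')
login_button.click()
***end of code***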
Easy, right? Now there is one important thing that is missing here. How do we know if we are logged in?
We could try a couple of things:
- Check for an error message (like “Wrong password”)
- Check for one element on the page that is only displayed once logged in.
So, we're going to check for the logout button. The logout button has the ID “logout” (easy)!
We can't just check whether the element is None, because all of the find_element_by_* methods raise an exception if the element is not found in the DOM. So we have to use a try/except block and catch the NoSuchElementException exception:
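A sketch of that check (it relies on the logout link having the ID "logout", as noted above):
***start of code***
from selenium.common.exceptions import NoSuchElementException

try:
    # The logout link has the ID "logout"
    driver.find_element_by_id('logout')
    print('Successfully logged in')
except NoSuchElementException:
    print('Incorrect login/password')
***end of code***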
Taking a screenshot
We could easily take a screenshot using:
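For example, to save the current viewport to a PNG file:
***start of code***
driver.save_screenshot('screenshot.png')
***end of code***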
Note that a lot of things can go wrong when you take a screenshot with Selenium. First, you have to make sure that the window size is set correctly. Then, you need to make sure that every asynchronous HTTP call made by the frontend Javascript code has finished, and that the page is fully rendered.
In our Hacker News case it's simple and we don't have to worry about these issues.
Waiting for an element to be present
Dealing with a website that uses lots of Javascript to render its content can be tricky. These days, more and more sites are using frameworks like Angular, React and Vue.js for their front-end. These front-end frameworks are complicated to deal with because they fire a lot of AJAX calls.
If we had to worry about an asynchronous HTTP call (or many) to an API, there are two ways to solve this:
- Use time.sleep(ARBITRARY_TIME) before taking the screenshot.
- Use a WebDriverWait object.
If you use time.sleep() you will probably pick an arbitrary value. The problem is that you'll either wait too long or not long enough. Also, a website can load slowly on your local wifi connection but be ten times faster on your cloud server. With the WebDriverWait method, you wait exactly as long as necessary for your element or data to be loaded.
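A sketch of using WebDriverWait with an expected condition (the "mySuperId" ID is just the example discussed below):
***start of code***
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

try:
    # Wait up to 5 seconds for the element with ID "mySuperId" to be present
    element = WebDriverWait(driver, 5).until(
        EC.presence_of_element_located((By.ID, 'mySuperId'))
    )
finally:
    driver.quit()
***end of code***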
This will wait up to five seconds for an element located by the ID “mySuperId” to be loaded. There are many other interesting expected conditions, like:
- element_to_be_clickable
- text_to_be_present_in_element

You can find more information about these in the Selenium documentation.
Executing Javascript
Sometimes, you may need to execute some Javascript on the page. For example, let's say you want to take a screenshot of some information, but you first need to scroll a bit to see it. You can easily do this with Selenium:
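For example, a sketch that scrolls the page before taking the screenshot:
***start of code***
# Scroll to the bottom of the page, then capture what is now visible
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
driver.save_screenshot('after_scroll.png')
***end of code***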
Conclusion
I hope you enjoyed this blog post! You should now have a good understanding of how the Selenium API works in Python. If you want to know more about how to scrape the web with Python don't hesitate to take a look at our general Python web scraping guide.
Selenium is often necessary to extract data from websites using lots of Javascript. The problem is that running lots of Selenium/Headless Chrome instances at scale is hard. This is one of the things we solve with ScrapingBee, our web scraping API.
Selenium is also an excellent tool to automate almost anything on the web.
If you perform repetitive tasks, like filling forms or checking information behind a login form where the website doesn't have an API, it may be a good idea to automate them with Selenium. Just don't forget this xkcd: