I’ve been developing Python web scrapers for years now. Python’s simplicity is great for quick prototyping, and many amazing libraries can help you build a scraper and a result parser (Requests, Beautiful Soup, Scrapy, …). Yet once you start looking into your scraper’s performance, Python can be somewhat limited, and Go is a great alternative!
Why Go?
- Go has great support for interfaces. In general, programming interfaces are contracts: a set of functions to be implemented to fulfill that contract. Go is no different, except that its interfaces are implemented implicitly.
- Go was built to resemble a simplified version of the C programming language and compiles to native machine code. It was created at Google in 2007 by Robert Griesemer, Rob Pike, and Ken Thompson.
When you’re trying to speed up information fetching from the Web (for HTML scraping or even mere API consumption), two kinds of optimization are possible:
- speed up the web resource download (e.g. download http://example.com/hello.html)
- speed up the parsing of the information you retrieved (e.g. get all urls available in hello.html)
Parsing can be improved by reworking your code, using a more efficient parser (like lxml), or allocating more resources to your scraper. Still, parsing optimization is often negligible compared to the real bottleneck: network access, i.e. web page downloading.
Consequently, the solution is to download the web resources in parallel. This is where Go is a great help!
Concurrent programming is a very complicated field, and Go makes it pretty easy. Go is a modern language which was created with concurrency in mind. On the other hand, Python is an older language and writing a concurrent web scraper in Python can be tricky, even if Python has improved a lot in this regard recently.
Go has other advantages, but let’s save those for another article!
Install Go
I already wrote a short tutorial about how to install Go on Ubuntu.
If you need to install Go on another platform, feel free to read the official docs.
A simple concurrent scraper
Our scraper will try to download each web page in a list we give it, and check that it gets a 200 HTTP status code (meaning the server returned the HTML page without an error). We’re not dealing with HTML parsing here, since the goal is to focus on the critical point: improving network access performance. Now it’s time to write some code!
Final code
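The full listing is not reproduced in this excerpt, so here is a minimal sketch of what such a scraper can look like. The URL list is arbitrary and the `fetchUrl` helper is my reconstruction, not necessarily the article’s exact code:

```go
package main

import (
	"fmt"
	"net/http"
)

// fetchUrl downloads one URL and reports the outcome on one of two channels:
// ok if the server answered with a 200 status, ko otherwise.
func fetchUrl(url string, ok chan<- string, ko chan<- string) {
	resp, err := http.Get(url)
	if err != nil {
		ko <- fmt.Sprintf("%s: %v", url, err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode == http.StatusOK {
		ok <- url
	} else {
		ko <- fmt.Sprintf("%s: status %d", url, resp.StatusCode)
	}
}

func main() {
	urls := []string{
		"https://example.com/",
		"https://example.org/",
	}

	ok := make(chan string)
	ko := make(chan string)

	// go: launch one concurrent goroutine per URL.
	for _, u := range urls {
		go fetchUrl(u, ok, ko)
	}

	// select: wait for exactly one message per URL before exiting.
	for range urls {
		select {
		case u := <-ok:
			fmt.Println("OK:", u)
		case msg := <-ko:
			fmt.Println("KO:", msg)
		}
	}
}
```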
Explanations
This code is a bit longer than what we could do with a language like Python, but as you can see it is still very reasonable. Go is a statically typed language, so we need a couple more lines dedicated to variable declarations. But measure how long the script takes, and you’ll see how rewarding it is!
We chose ten random URLs as an example.
Here, the magical keywords enabling us to use concurrency are `go`, `chan`, and `select`:

- `go` creates a new goroutine, which means `fetchUrl` will be executed within a new concurrent goroutine each time.
- `chan` is the type representing a channel. Channels help us communicate among goroutines (`main` being a goroutine itself as well).
- `select ... case` is a `switch ... case` dedicated to receiving messages sent through channels. The program stays there as long as all goroutines have not sent a message (either to say that the URL fetch is done, or to say that it failed).
We could have made this scraper without any channels, that is to say, create goroutines and not expect any message from them in return (for instance if every goroutine ends up storing its results in a database). Be careful though: when the `main` goroutine returns, the program exits immediately and any goroutines still running are terminated, so without some form of synchronization you risk stopping before the work is done. In real life it is almost always necessary to use channels (or another synchronization mechanism) to make our goroutines talk to each other.
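If you do go the channel-free route, the idiomatic way to make `main` wait for its goroutines is `sync.WaitGroup`. A small sketch; the `processAll` helper is illustrative, not from the article:

```go
package main

import (
	"fmt"
	"sync"
)

// processAll runs one goroutine per URL and waits for all of them
// to finish before returning. It returns the number of completed jobs.
func processAll(urls []string) int {
	var wg sync.WaitGroup
	var mu sync.Mutex
	done := 0
	for _, u := range urls {
		wg.Add(1)
		go func(url string) {
			defer wg.Done()
			// Imagine storing a fetch result in a database here.
			mu.Lock()
			done++
			mu.Unlock()
		}(u)
	}
	wg.Wait() // without this, main could return and kill the goroutines
	return done
}

func main() {
	n := processAll([]string{"https://example.com/", "https://example.org/"})
	fmt.Println(n, "urls processed")
}
```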
Don’t forget to limit speed!
Here, speed is our goal, and it is not a concern because we’re scraping all different URLs. However, if you need to scrape the same URLs multiple times (as in API consumption, for example), you’ll probably have to stay under a certain number of requests per second. In that case, you’ll have to set up a rate limiter (maybe we’ll talk about it in another article!).
Happy scraping!
If you’re here, you probably already know what web scraping is. But on the off chance that you just happened to stumble upon this article, let’s start with a quick refresher on web scraping, and then we’ll move on to goquery.
Web Scraping – a quick introduction
Web scraping is the automated extraction of human-readable data from a website. The targeted data is gathered and copied into a central local database for later retrieval or analysis. Go gives you what you need to fetch and parse HTML web pages, but websites often employ countermeasures against scraping, because it can cause a denial of service, incur bandwidth costs for you or the website provider, overload log files, or otherwise stress computing resources.
However, there are web scraping techniques like DOM parsing, computer vision, and NLP that simulate human browsing of web page content.
GoQuery is a library created by Martin Angers that brings a syntax and a set of features similar to jQuery to the Go language.
jQuery is a fast, small, and feature-rich JavaScript library. It makes things like HTML document traversal and manipulation, event handling, animation, and Ajax much simpler with an easy-to-use API that works across a multitude of browsers.
– jquery

GoQuery makes it easier to parse HTML websites than the default net/html package, using DOM (Document Object Model) parsing.
Installing goquery
Let’s download the package using `go get github.com/PuerkitoBio/goquery`.
A concise manual can be brought up using the `go doc goquery` command.
GoLang Web Scraping using goquery
Create a new .go document in your preferred IDE or text editor. Mine’s titled “goquery_program.go”, and you may choose to do the same:
We’ll begin by importing json and goquery, along with `log` to log any errors. We create a struct called Article with Title, URL, and Category as the metadata of each article.
Within the function `main()`, dispatch a GET request to the URL journaldev.com to fetch the HTML for scraping.
We have now fetched the full HTML source code from the website. We can dump it to our terminal using the `os` package.
This will output the whole HTML file, with all its tags, in the terminal. I’m working on Ubuntu 20.04, so the output display may vary by system.
It also printed a secondary statement, along with a notification that the page was optimized by LiteSpeed Cache:
Number of bytes copied to STDOUT: 151402
Now, let’s read this response into a goquery document:
Now we need the `Find()` function, which takes a tag name, chained with `Each()`. The function passed to `Each` receives an index `i int` and the selection for each matched tag. Clicking “inspect” on the JournalDev website, I saw that my content was inside `<p>` tags, so I called `Find` with just the tag name:
- The `fmt` library is used to print the text.
- The “next” was just to check whether the output was being received (for debugging), but I think it looks good in the final output.
- `%d` and `%s` are format specifiers for `Printf`.
Web Scraping Example Output
The best thing about coding is the satisfaction when your code outputs exactly what you need, and I think this was to my utmost satisfaction:
I tried to keep this article as generalised as possible when dealing with websites. This method should work for you no matter what website you’re trying to parse!
With that, I will leave you…until next time.