Deep learning is clearly in fashion, especially in computer vision. In this post we will quantitatively check whether that impression holds, and learn to do some web scraping along the way.
Web scraping consists of extracting useful information from websites with a computer program. In this post I'll show a very basic web-scraping example written in Python that retrieves the titles of the computer vision papers available in the CVF open access repository. We will then count the number of titles with deep-learning-related words in them.
Web scraping may be overkill for this example, but the technique comes in handy in plenty of other applications.
First you'll need to install these dependencies: lxml, Python requests, and, for more advanced examples, Beautiful Soup.
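All three are available on PyPI, so a single pip command (shown here as one way to do it) installs them:

```shell
pip install lxml requests beautifulsoup4
```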
And here is the code:
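The original code block is not reproduced here, so below is a minimal sketch of what the post describes, using requests and lxml. The `ptitle` class comes from the post itself; the URL pattern, conference list, and keyword list are illustrative assumptions, not the author's exact values.

```python
import requests
from lxml import html

# Conference pages on the CVF open access site (URL pattern is an assumption)
BASE_URL = "https://openaccess.thecvf.com/{}"
CONFERENCES = ["CVPR2014", "CVPR2015", "ICCV2015", "CVPR2016"]

# Illustrative deep-learning-related keywords; adjust to taste
KEYWORDS = ["deep", "neural", "convolutional", "cnn"]

def get_titles(conference):
    """Fetch all paper titles (elements with class 'ptitle') for a conference page."""
    page = requests.get(BASE_URL.format(conference))
    tree = html.fromstring(page.content)
    return [el.text_content().strip() for el in tree.find_class("ptitle")]

def count_deep_titles(titles):
    """Count how many titles contain at least one deep-learning-related word."""
    return sum(any(kw in title.lower() for kw in KEYWORDS) for title in titles)

# Usage (performs network requests):
# for conf in CONFERENCES:
#     titles = get_titles(conf)
#     print(conf, count_deep_titles(titles), "/", len(titles))
```

The keyword matching is a simple case-insensitive substring test, which is crude (e.g. "cnn" also matches "CNNs") but good enough for a rough trend count.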
As you can see, we scrape four recent major vision conferences. From the respective websites we get all the titles (elements with class ptitle), and then check whether they contain deep-learning-related words.
And here are the results:
And the plot:
Quite impressive, right? So remember to put some of these words in the title of your next CVPR submission. :)