{"id":158613,"date":"2019-10-28T21:02:41","date_gmt":"2019-10-29T01:02:41","guid":{"rendered":"https:\/\/www.countingpips.com\/?p=158613"},"modified":"2019-10-28T21:05:32","modified_gmt":"2019-10-29T01:05:32","slug":"web-scraping-with-scrapy-advanced-examples","status":"publish","type":"post","link":"https:\/\/www.investmacro.com\/forex\/2019\/10\/web-scraping-with-scrapy-advanced-examples\/","title":{"rendered":"Web Scraping with Scrapy: Advanced Examples"},"content":{"rendered":"<div id=\"inves-1949861217\" class=\"inves-below-title-posts inves-entity-placement\"><div id =\"posts_date_custom\"><div align=\"left\">October 28, 2019<\/div><hr style=\"border: none; border-bottom: 3px solid black;\">\r\n<\/div><\/div><p><strong>By Zac Clancy for <a href=\"https:\/\/kite.com\/\" target=\"_blank\" rel=\"noopener noreferrer\">Kite.com<\/a><\/strong><\/p>\n<h3>Table of Contents<\/h3>\n<ul>\n<li>Introduction to Web Scraping<\/li>\n<li>Scrapy concepts<\/li>\n<li>Reddit-less front page<\/li>\n<li>Extracting amazon price data<\/li>\n<li>Considerations at scale<\/li>\n<\/ul>\n<h2><span id=\"introduction\" class=\"blog__contents__anchor\"><\/span>Introduction to web scraping<\/h2>\n<p>Web scraping is one of the tools at a developer\u2019s disposal when looking to gather data from the internet. While consuming data via an API has become commonplace, most of the websites online don\u2019t have an API for delivering data to consumers. In order to access the data they\u2019re looking for, web scrapers and crawlers read a website\u2019s pages and feeds, analyzing the site\u2019s structure and markup language for clues. Generally speaking, information collected from scraping is fed into other programs for validation, cleaning, and input into a datastore or its fed onto other processes such as natural language processing (NLP) toolchains or machine learning (ML) models. There are a few Python packages we could use to illustrate with, but we\u2019ll focus on Scrapy for these examples. Scrapy makes it very easy for us to quickly prototype and develop web scrapers with Python.<\/p>\n<h2><span id=\"concepts\" class=\"blog__contents__anchor\"><\/span>Scrapy concepts<\/h2>\n<p>Before we start looking at specific examples and use cases, let\u2019s brush up a bit on Scrapy and how it works.<\/p>\n<p><strong>Spiders:<\/strong>\u00a0Scrapy uses\u00a0<i>Spiders<\/i>\u00a0to define how a site (or a bunch of sites) should be scraped for information. Scrapy lets us determine how we want the spider to crawl, what information we want to extract, and how we can extract it. Specifically, Spiders are Python classes where we\u2019ll put all of our custom logic and behavior.<\/p>\n<div class=\"code-block python\">\n<pre><code class=\"Python hljs livecodeserver\"><span class=\"hljs-keyword\">import<\/span> scrapy\r\n\r\n<span class=\"hljs-class\"><span class=\"hljs-keyword\">class<\/span> <span class=\"hljs-title\">NewsSpider<\/span><span class=\"hljs-params\">(scrapy.Spider)<\/span>:<\/span>\r\n\tname = <span class=\"hljs-string\">'news'<\/span>\r\n\t... <\/code><\/pre>\n<\/div>\n<div class=\"content-block\">\n<p><strong>Selectors:<\/strong>\u00a0<i>Selectors<\/i>\u00a0are Scrapy\u2019s mechanisms for finding data within the website\u2019s pages. They\u2019re called\u00a0<i>selectors<\/i>\u00a0because they provide an interface for \u201cselecting\u201d certain parts of the HTML page, and these selectors can be in either CSS or XPath expressions.<\/p>\n<p><strong>Items:<\/strong>\u00a0<i>Items<\/i>\u00a0are the data that is extracted from selectors in a common data model. Since our goal is a structured result from unstructured inputs, Scrapy provides an Item class which we can use to define how our scraped data should be structured and what fields it should have.<\/p>\n<\/div>\n<div class=\"code-block python\">\n<pre><code class=\"Python hljs livecodeserver\"><span class=\"hljs-keyword\">import<\/span> scrapy\r\n\r\n<span class=\"hljs-class\"><span class=\"hljs-keyword\">class<\/span> <span class=\"hljs-title\">Article<\/span><span class=\"hljs-params\">(scrapy.Item)<\/span>:<\/span>\r\n\theadline = scrapy.Field()\r\n\t...<\/code><\/pre>\n<\/div>\n<div class=\"content-block\">\n<h2><span id=\"front\" class=\"blog__contents__anchor\"><\/span>Reddit-less front page<\/h2>\n<p>Suppose we love the images posted to Reddit, but don\u2019t want any of the comments or self posts. We can use Scrapy to make a Reddit Spider that will fetch all the photos from the front page and put them on our own HTML page which we can then browse instead of Reddit.<\/p>\n<p>To start, we\u2019ll create a\u00a0<code>RedditSpider<\/code>\u00a0which we can use traverse the front page and handle custom behavior.<\/p>\n<\/div>\n<div class=\"code-block python\">\n<pre><code class=\"Python hljs livecodeserver\"><span class=\"hljs-keyword\">import<\/span> scrapy\r\n\r\n<span class=\"hljs-class\"><span class=\"hljs-keyword\">class<\/span> <span class=\"hljs-title\">RedditSpider<\/span><span class=\"hljs-params\">(scrapy.Spider)<\/span>:<\/span>\r\n\tname = <span class=\"hljs-string\">'reddit'<\/span>\r\n\tstart_urls = [\r\n    \t    <span class=\"hljs-string\">'https:\/\/www.reddit.com'<\/span>\r\n\t]<\/code><\/pre>\n<\/div>\n<div class=\"content-block\">\n<p>Above, we\u2019ve defined a\u00a0<code>RedditSpider<\/code>, inheriting Scrapy\u2019s Spider. We\u2019ve named it\u00a0<code>reddit<\/code>\u00a0and have populated the class\u2019\u00a0<code>start_urls<\/code>\u00a0attribute with a URL to Reddit from which we\u2019ll extract the images.<\/p>\n<p>At this point, we\u2019ll need to begin defining our parsing logic. We need to figure out an expression that the\u00a0<code>RedditSpider<\/code>\u00a0can use to determine whether it\u2019s found an image. If we look at\u00a0<a href=\"https:\/\/www.reddit.com\/robots.txt\" target=\"_blank\" rel=\"noopener noreferrer\">Reddit\u2019s robots.txt<\/a>\u00a0file, we can see that our spider can\u2019t crawl any comment pages without being in violation of the robots.txt file, so we\u2019ll need to grab our image URLs without following through to the comment pages.<\/p>\n<p>By looking at Reddit, we can see that external links are included on the homepage directly next to the post\u2019s title. We\u2019ll update\u00a0<code>RedditSpider<\/code>\u00a0to include a parser to grab this URL. Reddit includes the external URL as a link on the page, so we should be able to just loop through the links on the page and find URLs that are for images.<\/p>\n<\/div>\n<div class=\"code-block python\">\n<pre><code class=\"Python hljs livecodeserver\"><span class=\"hljs-class\"><span class=\"hljs-keyword\">class<\/span> <span class=\"hljs-title\">RedditSpider<\/span><span class=\"hljs-params\">(scrapy.Spider)<\/span>:<\/span>\r\n    ...\r\n    <span class=\"hljs-function\"><span class=\"hljs-keyword\">def<\/span> <span class=\"hljs-title\">parse<\/span><span class=\"hljs-params\">(self, response)<\/span>:<\/span>\r\n       links = response.xpath(<span class=\"hljs-string\">'\/\/a\/@href'<\/span>)\r\n    \t <span class=\"hljs-keyword\">for<\/span> link <span class=\"hljs-keyword\">in<\/span> links:\r\n           ...<\/code><\/pre>\n<\/div>\n<div class=\"content-block\">\n<p>In a parse method on our\u00a0<code>RedditSpider<\/code>\u00a0class, I\u2019ve started to define how we\u2019ll be parsing our response for results. To start, we grab all of the href attributes from the page\u2019s links using a\u00a0<a href=\"https:\/\/docs.scrapy.org\/en\/latest\/topics\/selectors.html#working-with-xpaths\" target=\"_blank\" rel=\"noopener noreferrer\">basic XPath selector<\/a>. Now that we\u2019re enumerating the page\u2019s links, we can start to analyze the links for images.<\/p>\n<\/div>\n<div class=\"code-block python\">\n<pre><code class=\"Python hljs livecodeserver\"><span class=\"hljs-function\"><span class=\"hljs-keyword\">def<\/span> <span class=\"hljs-title\">parse<\/span><span class=\"hljs-params\">(self, response)<\/span>:<\/span>\r\n    links = response.xpath(<span class=\"hljs-string\">'\/\/a\/@href'<\/span>)\r\n    <span class=\"hljs-keyword\">for<\/span> link <span class=\"hljs-keyword\">in<\/span> links:\r\n        <span class=\"hljs-comment\"># Extract the URL text from the element<\/span>\r\n        url = link.get()\r\n        <span class=\"hljs-comment\"># Check if the URL contains an image extension<\/span>\r\n        <span class=\"hljs-keyword\">if<\/span> any(extension <span class=\"hljs-keyword\">in<\/span> url <span class=\"hljs-keyword\">for<\/span> extension <span class=\"hljs-keyword\">in<\/span> [<span class=\"hljs-string\">'.jpg'<\/span>, <span class=\"hljs-string\">'.gif'<\/span>, <span class=\"hljs-string\">'.png'<\/span>]):\r\n            ...<\/code><\/pre>\n<\/div>\n<div class=\"content-block\">\n<p>To actually access the text information from the link\u2019s href attribute, we use Scrapy\u2019s\u00a0<code>.get()<\/code>\u00a0function which will return the link destination as a string. Next, we check to see if the URL contains an image file extension. We use Python\u2019s\u00a0<code>any()<\/code>\u00a0built-in function for this. This isn\u2019t all-encompassing for all image file extensions, but it\u2019s a start. From here we can push our images into a local HTML file for viewing.<\/p>\n<\/div>\n<div class=\"code-block python\">\n<pre><code class=\"Python hljs livecodeserver\"><span class=\"hljs-function\"><span class=\"hljs-keyword\">def<\/span> <span class=\"hljs-title\">parse<\/span><span class=\"hljs-params\">(self, response)<\/span>:<\/span>\r\n    links = response.xpath(<span class=\"hljs-string\">'\/\/img\/@src'<\/span>)\r\n    html = <span class=\"hljs-string\">''<\/span>\r\n\r\n    <span class=\"hljs-keyword\">for<\/span> link <span class=\"hljs-keyword\">in<\/span> links:\r\n        <span class=\"hljs-comment\"># Extract the URL text from the element<\/span>\r\n        url = link.get()\r\n        <span class=\"hljs-comment\"># Check if the URL contains an image extension<\/span>\r\n        <span class=\"hljs-keyword\">if<\/span> any(extension <span class=\"hljs-keyword\">in<\/span> url <span class=\"hljs-keyword\">for<\/span> extension <span class=\"hljs-keyword\">in<\/span> [<span class=\"hljs-string\">'.jpg'<\/span>, <span class=\"hljs-string\">'.gif'<\/span>, <span class=\"hljs-string\">'.png'<\/span>]):\r\n            html += <span class=\"hljs-string\">'''<\/span><span class=\"hljs-string\">\r\n            &lt; a href=\"{url}\" target=\"_blank\"&gt;<\/span><span class=\"hljs-string\">\r\n                &lt; img src=\"{url}\" height=\"33%\" width=\"33%\" \/&gt;<\/span><span class=\"hljs-string\">\r\n            &lt; \/a&gt;<\/span><span class=\"hljs-string\">\r\n            '''<\/span>.format(url=url)\r\n\r\n    \t<span class=\"hljs-comment\"># Open an HTML file, save the results<\/span>\r\n    \t    <span class=\"hljs-keyword\">with<\/span> open(<span class=\"hljs-string\">'frontpage.html'<\/span>, <span class=\"hljs-string\">'a'<\/span>) <span class=\"hljs-keyword\">as<\/span> page:\r\n            page.write(html)\r\n    \t    <span class=\"hljs-comment\"># Close the file<\/span>\r\n    \t    page.close()<\/code><\/pre>\n<\/div>\n<div class=\"content-block\">\n<p>To start, we begin collecting the HTML file contents as a string which will be written to a file called\u00a0<code>frontpage.html<\/code>\u00a0at the end of the process. You\u2019ll notice that instead of pulling the image location from the\u00a0<code>\u2018\/\/a\/@href\/\u2018<\/code>, we\u2019ve updated our\u00a0<i>links<\/i>\u00a0selector to use the image\u2019s src attribute:\u00a0<code>\u2018\/\/img\/@src\u2019<\/code>. This will give us more consistent results, and select only images.<\/p>\n<p>As our\u00a0<i>RedditSpider\u2019s<\/i>\u00a0parser finds images it builds a link with a preview image and dumps the string to our\u00a0<code>html<\/code>\u00a0variable. Once we\u2019ve collected all of the images and generated the HTML, we open the local HTML file (or create it) and overwrite it with our new HTML content before closing the file again with\u00a0<code>page.close()<\/code>. If we run\u00a0<code>scrapy runspider reddit.py<\/code>, we can see that this file is built properly and contains images from Reddit\u2019s front page.<\/p>\n<p>But, it looks like it contains\u00a0<strong>all<\/strong>\u00a0of the images from Reddit\u2019s front page \u2013 not just user-posted content. Let\u2019s update our parse command a bit to blacklist certain domains from our results.<\/p>\n<p>If we look at\u00a0<code>frontpage.html<\/code>, we can see that most of Reddit\u2019s assets come from\u00a0<i><a class=\"vglnk\" href=\"http:\/\/redditstatic.com\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">redditstatic.com<\/a><\/i>\u00a0and\u00a0<i><a class=\"vglnk\" href=\"http:\/\/redditmedia.com\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">redditmedia.com<\/a><\/i>. We\u2019ll just filter those results out and retain everything else. With these updates, our\u00a0<code>RedditSpider<\/code>\u00a0class now looks like the below:<\/p>\n<\/div>\n<div class=\"code-block python\">\n<pre><code class=\"Python hljs livecodeserver\"><span class=\"hljs-keyword\">import<\/span> scrapy\r\n\r\n<span class=\"hljs-class\"><span class=\"hljs-keyword\">class<\/span> <span class=\"hljs-title\">RedditSpider<\/span><span class=\"hljs-params\">(scrapy.Spider)<\/span>:<\/span>\r\n    name = <span class=\"hljs-string\">'reddit'<\/span>\r\n    start_urls = [\r\n        <span class=\"hljs-string\">'https:\/\/www.reddit.com'<\/span>\r\n    ]\r\n\r\n    <span class=\"hljs-function\"><span class=\"hljs-keyword\">def<\/span> <span class=\"hljs-title\">parse<\/span><span class=\"hljs-params\">(self, response)<\/span>:<\/span>\r\n        links = response.xpath(<span class=\"hljs-string\">'\/\/img\/@src'<\/span>)\r\n    \t  html = <span class=\"hljs-string\">''<\/span>\r\n\r\n    \t  <span class=\"hljs-keyword\">for<\/span> link <span class=\"hljs-keyword\">in<\/span> links:\r\n            <span class=\"hljs-comment\"># Extract the URL text from the element<\/span>\r\n        \turl = link.get()\r\n        \t<span class=\"hljs-comment\"># Check if the URL contains an image extension<\/span>\r\n        \t<span class=\"hljs-keyword\">if<\/span> any(extension <span class=\"hljs-keyword\">in<\/span> url <span class=\"hljs-keyword\">for<\/span> extension <span class=\"hljs-keyword\">in<\/span> [<span class=\"hljs-string\">'.jpg'<\/span>, <span class=\"hljs-string\">'.gif'<\/span>, <span class=\"hljs-string\">'.png'<\/span>])\\\r\n               <span class=\"hljs-keyword\">and<\/span> <span class=\"hljs-keyword\">not<\/span> any(domain <span class=\"hljs-keyword\">in<\/span> url <span class=\"hljs-keyword\">for<\/span> domain <span class=\"hljs-keyword\">in<\/span> [<span class=\"hljs-string\">'redditstatic.com'<\/span>, <span class=\"hljs-string\">'redditmedia.com'<\/span>]):\r\n                html += <span class=\"hljs-string\">'''<\/span><span class=\"hljs-string\">\r\n                &lt; a href=\"{url}\" target=\"_blank\"&gt;<\/span><span class=\"hljs-string\">\r\n                    &lt; img src=\"{url}\" height=\"33%\" width=\"33%\" \/&gt;<\/span><span class=\"hljs-string\">\r\n                &lt; \/a&gt;<\/span><span class=\"hljs-string\">\r\n                '''<\/span>.format(url=url)\r\n\r\n    \t   <span class=\"hljs-comment\"># Open an HTML file, save the results<\/span>\r\n    \t       <span class=\"hljs-keyword\">with<\/span> open(<span class=\"hljs-string\">'frontpage.html'<\/span>, <span class=\"hljs-string\">'w'<\/span>) <span class=\"hljs-keyword\">as<\/span> page:\r\n               page.write(html)\r\n\r\n    \t   <span class=\"hljs-comment\"># Close the file<\/span>\r\n    \t   page.close()<\/code><\/pre>\n<\/div>\n<div class=\"content-block\">\n<p>We\u2019re simply adding our domain whitelist to an exclusionary\u00a0<code>any()<\/code>expression. These statements could be tweaked to read from a separate configuration file, local database, or cache \u2013 if need be.<\/p>\n<\/div>\n<div class=\"content-block\">\n<h2><span id=\"data\" class=\"blog__contents__anchor\"><\/span>Extracting Amazon price data<\/h2>\n<p>If you\u2019re running an ecommerce website, intelligence is key. With Scrapy we can easily automate the process of collecting information about our competitors, our market, or our listings.<\/p>\n<p>For this task, we\u2019ll extract pricing data from search listings on Amazon and use the results to provide some basic insights. If we visit\u00a0<a href=\"https:\/\/www.amazon.com\/s?k=paint\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon\u2019s search results page<\/a>\u00a0and inspect it, we notice that Amazon stores the price in a series of divs, most notably using a class called\u00a0<code>.a-offscreen<\/code>. We can formulate a\u00a0<a href=\"http:\/\/doc.scrapy.org\/en\/latest\/topics\/selectors.html#extensions-to-css-selectors\" target=\"_blank\" rel=\"noopener noreferrer\">CSS selector<\/a>\u00a0that extracts the price off the page:<\/p>\n<\/div>\n<div class=\"code-block python\">\n<pre><code class=\"Python hljs livecodeserver\">prices = response.css(<span class=\"hljs-string\">'.a-price .a-offscreen::text'<\/span>).getall()<\/code><\/pre>\n<\/div>\n<div class=\"content-block\">\n<p>With this CSS selector in mind, let\u2019s build our\u00a0<code>AmazonSpider<\/code>.<\/p>\n<\/div>\n<div class=\"code-block python\">\n<pre><code class=\"Python hljs livecodeserver\"><span class=\"hljs-keyword\">import<\/span> scrapy\r\n\r\n<span class=\"hljs-keyword\">from<\/span> re <span class=\"hljs-keyword\">import<\/span> sub\r\n<span class=\"hljs-keyword\">from<\/span> decimal <span class=\"hljs-keyword\">import<\/span> Decimal\r\n\r\n\r\n<span class=\"hljs-function\"><span class=\"hljs-keyword\">def<\/span> <span class=\"hljs-title\">convert_money<\/span><span class=\"hljs-params\">(money)<\/span>:<\/span>\r\n\t<span class=\"hljs-keyword\">return<\/span> Decimal(sub(<span class=\"hljs-string\">r'[^\\d.]'<\/span>, <span class=\"hljs-string\">''<\/span>, money))\r\n\r\n\r\n<span class=\"hljs-class\"><span class=\"hljs-keyword\">class<\/span> <span class=\"hljs-title\">AmazonSpider<\/span><span class=\"hljs-params\">(scrapy.Spider)<\/span>:<\/span>\r\n\tname = <span class=\"hljs-string\">'amazon'<\/span>\r\n\tstart_urls = [\r\n    \t    <span class=\"hljs-string\">'https:\/\/www.amazon.com\/s?k=paint'<\/span>\r\n\t]\r\n\r\n\t<span class=\"hljs-function\"><span class=\"hljs-keyword\">def<\/span> <span class=\"hljs-title\">parse<\/span><span class=\"hljs-params\">(self, response)<\/span>:<\/span>\r\n    \t    <span class=\"hljs-comment\"># Find the Amazon price element<\/span>\r\n    \t    prices = response.css(<span class=\"hljs-string\">'.a-price .a-offscreen::text'<\/span>).getall()\r\n\r\n    \t    <span class=\"hljs-comment\"># Initialize some counters and stats objects<\/span>\r\n    \t    stats = dict()\r\n    \t    values = []\r\n\r\n    \t    <span class=\"hljs-keyword\">for<\/span> price <span class=\"hljs-keyword\">in<\/span> prices:\r\n        \t  value = convert_money(price)\r\n        \t  values.append(value)\r\n\r\n    \t    <span class=\"hljs-comment\"># Sort our values before calculating<\/span>\r\n    \t    values.sort()\r\n\r\n    \t    <span class=\"hljs-comment\"># Calculate price statistics<\/span>\r\n    \t    stats[<span class=\"hljs-string\">'average_price'<\/span>] = round(sum(values) \/ len(values), <span class=\"hljs-number\">2<\/span>)\r\n    \t    stats[<span class=\"hljs-string\">'lowest_price'<\/span>] = values[<span class=\"hljs-number\">0<\/span>]\r\n    \t    stats[<span class=\"hljs-string\">'highest_price'<\/span>] = values[<span class=\"hljs-number\">-1<\/span>]\r\n    \t    Stats[<span class=\"hljs-string\">'total_prices'<\/span>] = len(values)\r\n\r\n    \t    print(stats)<\/code><\/pre>\n<\/div>\n<div class=\"content-block\">\n<p>A few things to note about our\u00a0<code>AmazonSpider<\/code>\u00a0class: <strong>convert_money():<\/strong>\u00a0This helper simply converts strings formatted like \u2018$45.67\u2019 and casts them to a Python Decimal type which can be used for computations and avoids issues with locale by not including a \u2018$\u2019 anywhere in the regular expression. <strong>getall():<\/strong>\u00a0The\u00a0<code>.getall()<\/code>\u00a0function is a Scrapy function that works similar to the\u00a0<code>.get()<\/code>\u00a0function we used before, but this returns all the extracted values as a list which we can work with. Running the command\u00a0<code>scrapy runspider amazon.py<\/code>\u00a0in the project folder will dump output resembling the following:<\/p>\n<\/div>\n<div class=\"code-block python\">\n<pre><code class=\"Python hljs livecodeserver\">{<span class=\"hljs-string\">'average_price'<\/span>: Decimal(<span class=\"hljs-string\">'38.23'<\/span>), <span class=\"hljs-string\">'lowest_price'<\/span>: Decimal(<span class=\"hljs-string\">'3.63'<\/span>), <span class=\"hljs-string\">'highest_price'<\/span>: Decimal(<span class=\"hljs-string\">'689.95'<\/span>), <span class=\"hljs-string\">'total_prices'<\/span>: <span class=\"hljs-number\">58<\/span>}<\/code><\/pre>\n<\/div>\n<div class=\"content-block\">\n<p>It\u2019s easy to imagine building a dashboard that allows you to store scraped values in a datastore and visualize data as you see fit.<\/p>\n<h2><span id=\"scale\" class=\"blog__contents__anchor\"><\/span>Considerations at scale<\/h2>\n<p>As you build more web crawlers and you continue to follow more advanced scraping workflows you\u2019ll likely notice a few things:<\/p>\n<ol>\n<li>Sites change, now more than ever.<\/li>\n<li>Getting consistent results across thousands of pages is tricky.<\/li>\n<li>Performance considerations can be crucial.<\/li>\n<\/ol>\n<h3>Sites change, now more than ever<\/h3>\n<p>On occasion, AliExpress for example, will return a login page rather than search listings. Sometimes Amazon will decide to raise a Captcha, or Twitter will return an error. While these errors can sometimes simply be flickers, others will require a complete re-architecture of your web scrapers. Nowadays, modern front-end frameworks are oftentimes pre-compiled for the browser which can mangle class names and ID strings, sometimes a designer or developer will change an HTML class name during a redesign. It\u2019s important that our Scrapy crawlers are resilient, but keep in mind that changes will occur over time.<\/p>\n<h3>Getting consistent results across thousands of pages is tricky<\/h3>\n<p>Slight variations of user-inputted text can really add up. Think of all of the different spellings and capitalizations you may encounter in just usernames. Pre-processing text, normalizing text, and standardizing text before performing an action or storing the value is best practice before most NLP or ML software processes for best results.<\/p>\n<h3>Performance considerations can be crucial<\/h3>\n<p>You\u2019ll want to make sure you\u2019re operating at least moderately efficiently before attempting to process 10,000 websites from your laptop one night. As your dataset grows it becomes more and more costly to manipulate it in terms of memory or processing power. In a similar regard, you may want to extract the text from one news article at a time, rather than downloading all 10,000 articles at once. As we\u2019ve seen in this tutorial, performing advanced scraping operations is actually quite easy using Scrapy\u2019s framework. Some advanced next steps might include loading selectors from a database and scraping using very generic Spider classes, or by using proxies or modified user-agents to see if the HTML changes based on location or device type. Scraping in the real world becomes complicated because of all the edge cases, Scrapy provides an easy way to build this logic in Python.<\/p>\n<p class=\"blog__content--footer\">This post is a part of Kite\u2019s new series on Python. You can check out the code from this and other posts on our\u00a0<a href=\"https:\/\/github.com\/kiteco\/kite-python-blog-post-code\" target=\"_blank\" rel=\"noopener noreferrer\">GitHub repository<\/a>.<\/p>\n<p>This <a href=\"https:\/\/kite.com\/blog\/python\/web-scraping-scrapy\/\" target=\"_blank\" rel=\"noopener noreferrer\">article<\/a> originally appeared on <a href=\"https:\/\/kite.com\" target=\"_blank\" rel=\"noopener noreferrer\">Kite.com<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>By Zac Clancy for Kite.com Table of Contents Introduction to Web Scraping Scrapy concepts Reddit-less front page Extracting amazon price data Considerations at scale Introduction to web scraping Web scraping is one of the tools at a developer\u2019s disposal when looking to gather data from the internet. While consuming data via an API has become [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[],"tags":[],"class_list":["post-158613","post","type-post","status-publish","format-standard","hentry","no-post-thumbnail"],"_links":{"self":[{"href":"https:\/\/www.investmacro.com\/forex\/wp-json\/wp\/v2\/posts\/158613","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.investmacro.com\/forex\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.investmacro.com\/forex\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.investmacro.com\/forex\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.investmacro.com\/forex\/wp-json\/wp\/v2\/comments?post=158613"}],"version-history":[{"count":2,"href":"https:\/\/www.investmacro.com\/forex\/wp-json\/wp\/v2\/posts\/158613\/revisions"}],"predecessor-version":[{"id":158615,"href":"https:\/\/www.investmacro.com\/forex\/wp-json\/wp\/v2\/posts\/158613\/revisions\/158615"}],"wp:attachment":[{"href":"https:\/\/www.investmacro.com\/forex\/wp-json\/wp\/v2\/media?parent=158613"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.investmacro.com\/forex\/wp-json\/wp\/v2\/categories?post=158613"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.investmacro.com\/forex\/wp-json\/wp\/v2\/tags?post=158613"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}