Closed milos-simic closed 4 years ago
For each Disqus URL, I want to scrape the name and the usernames of the followers. However, the first URL is being rendered regardless of the actual URL in the request.
Here is my spider:
```python
import scrapy
from disqus.items import DisqusItem

class DisqusSpider(scrapy.Spider):
    name = "disqusSpider"
    start_urls = ["https://disqus.com/by/disqus_sAggacVY39/",
                  "https://disqus.com/by/VladimirUlayanov/",
                  "https://disqus.com/by/Beasleyhillman/",
                  "https://disqus.com/by/Slick312/"]
    splash_def = {"endpoint": "render.html", "args": {"wait": 10}}

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url=url, callback=self.parse_basic, dont_filter=True,
                                 meta={"splash": self.splash_def,
                                       "base_profile_url": url})

    def parse_basic(self, response):
        name = response.css("h1.cover-profile-name.text-largest.truncate-line::text").extract_first()
        disqusItem = DisqusItem(name=name)
        request = scrapy.Request(url=response.meta["base_profile_url"] + "followers/",
                                 callback=self.parse_followers, dont_filter=True,
                                 meta={"item": disqusItem,
                                       "base_profile_url": response.meta["base_profile_url"],
                                       "splash": self.splash_def})
        print "parse_basic", response.url, request.url
        yield request

    def parse_followers(self, response):
        print "parse_followers", response.meta["base_profile_url"], response.meta["item"]
        followers = response.css("div.user-info a::attr(href)").extract()
```
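One side note on the spider above (a general Python point, not necessarily the cause of the bug): `splash_def` is a single dict, so every request's `meta["splash"]` refers to the same object. If anything downstream mutates that dict, the change is visible to all in-flight requests. A minimal Python 3 sketch of the aliasing and a defensive-copy alternative:

```python
import copy

# A shared settings dict, as in the spider's splash_def.
splash_def = {"endpoint": "render.html", "args": {"wait": 10}}

# Sharing by reference: both metas point at the very same nested dict.
meta_a = {"splash": splash_def}
meta_b = {"splash": splash_def}
meta_a["splash"]["args"]["wait"] = 5
print(meta_b["splash"]["args"]["wait"])  # -> 5, the mutation leaked

# Defensive copy: each request gets its own independent nested dict.
meta_c = {"splash": copy.deepcopy(splash_def)}
meta_c["splash"]["args"]["wait"] = 30
print(splash_def["args"]["wait"])  # -> 5, the original is untouched
```

Passing `copy.deepcopy(self.splash_def)` per request would rule this class of problem out, at negligible cost for a dict this small.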
This is the definition of `DisqusItem`:

```python
import scrapy

class DisqusItem(scrapy.Item):
    name = scrapy.Field()
    followers = scrapy.Field()
```
Here are the settings:
```python
# -*- coding: utf-8 -*-

# Scrapy settings for disqus project
BOT_NAME = 'disqus'

SPIDER_MODULES = ['disqus.spiders']
NEWSPIDER_MODULE = 'disqus.spiders'

ROBOTSTXT_OBEY = False

SPLASH_URL = 'http://localhost:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

DUPEFILTER_CLASS = 'scrapyjs.SplashAwareDupeFilter'
DUPEFILTER_DEBUG = True

DOWNLOAD_DELAY = 10
```
When `parse_followers` gets executed, `response.meta["item"]` is always the same. Here is the log:
```
2017-08-08 17:09:34 [scrapy.core.engine] DEBUG: Crawled (200) <POST http://localhost:8050/render.html> (referer: None)
parse_basic https://disqus.com/by/disqus_sAggacVY39/ https://disqus.com/by/disqus_sAggacVY39/followers/
2017-08-08 17:09:42 [scrapy.core.engine] DEBUG: Crawled (200) <POST http://localhost:8050/render.html> (referer: None)
parse_basic https://disqus.com/by/disqus_sAggacVY39/ https://disqus.com/by/VladimirUlayanov/followers/
2017-08-08 17:09:55 [scrapy.core.engine] DEBUG: Crawled (200) <POST http://localhost:8050/render.html> (referer: None)
parse_basic https://disqus.com/by/disqus_sAggacVY39/ https://disqus.com/by/Beasleyhillman/followers/
2017-08-08 17:10:09 [scrapy.core.engine] DEBUG: Crawled (200) <POST http://localhost:8050/render.html> (referer: None)
parse_basic https://disqus.com/by/disqus_sAggacVY39/ https://disqus.com/by/Slick312/followers/
2017-08-08 17:10:21 [scrapy.core.engine] DEBUG: Crawled (200) <POST http://localhost:8050/render.html> (referer: None)
parse_followers https://disqus.com/by/disqus_sAggacVY39/ {'name': u'Trailer Trash'}
2017-08-08 17:10:21 [scrapy.extensions.logstats] INFO: Crawled 5 pages (at 5 pages/min), scraped 0 items (at 0 items/min)
2017-08-08 17:10:36 [scrapy.core.engine] DEBUG: Crawled (200) <POST http://localhost:8050/render.html> (referer: None)
parse_followers https://disqus.com/by/VladimirUlayanov/ {'name': u'Trailer Trash'}
2017-08-08 17:10:50 [scrapy.core.engine] DEBUG: Crawled (200) <POST http://localhost:8050/render.html> (referer: None)
parse_followers https://disqus.com/by/Beasleyhillman/ {'name': u'Trailer Trash'}
2017-08-08 17:11:03 [scrapy.core.engine] DEBUG: Crawled (200) <POST http://localhost:8050/render.html> (referer: None)
parse_followers https://disqus.com/by/Slick312/ {'name': u'Trailer Trash'}
2017-08-08 17:11:03 [scrapy.core.engine] INFO: Closing spider (finished)
```
What happens if you reduce the URLs to 2 and reverse their order? Do you get a different response?
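Another way to narrow this down is to take Scrapy out of the picture and query Splash directly: its `render.html` endpoint accepts the target `url` and `wait` as GET query parameters. A small Python 3 sketch (assumes Splash is running on `localhost:8050`; `build_render_url` is just a hypothetical helper name):

```python
from urllib.parse import urlencode

SPLASH = "http://localhost:8050/render.html"

def build_render_url(target, wait=10):
    # Build a GET request to Splash's render.html endpoint with the
    # target page URL and wait time encoded as query parameters.
    return SPLASH + "?" + urlencode({"url": target, "wait": wait})

# Fetch each profile with e.g. requests.get(build_render_url(url)).text
# and compare the <h1> profile names in the returned HTML: if they
# differ per profile, Splash renders correctly and the problem is on
# the Scrapy side; if they are all the first profile, Splash (or a
# cache in front of it) is at fault.
print(build_render_url("https://disqus.com/by/Slick312/"))
```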