rugantio / fbcrawl

A Facebook crawler
Apache License 2.0
661 stars 229 forks

error crawling, results 0 #10

Closed chelsas69 closed 5 years ago

chelsas69 commented 5 years ago

Hi. I am a student and I am just getting started with web spiders. I have a problem with the code: when I crawl the Donald Trump Facebook page, the error shown below is generated. On the other hand, if I crawl a Facebook page that has few posts, the error does not appear, but nothing gets scraped either. Could you help me please?

scrapy crawl fb -a email="------------@gmail.com" -a password="--------" -a page="https://mbasic.facebook.com/DonaldTrump" -o donald.csv

2019-01-28 17:40:16 [scrapy.utils.log] INFO: Scrapy 1.5.2 started (bot: fbcrawl)
2019-01-28 17:40:16 [scrapy.utils.log] INFO: Versions: lxml 4.3.0.0, libxml2 2.9.9, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 18.9.0, Python 3.6.7 (default, Oct 22 2018, 11:32:17) - [GCC 8.2.0], pyOpenSSL 19.0.0 (OpenSSL 1.1.1a 20 Nov 2018), cryptography 2.1.4, Platform Linux-4.15.0-43-generic-x86_64-with-Ubuntu-18.04-bionic
2019-01-28 17:40:16 [scrapy.crawler] INFO: Overridden settings: {'AUTOTHROTTLE_ENABLED': True, 'BOT_NAME': 'fbcrawl', 'CONCURRENT_REQUESTS': 32, 'CONCURRENT_REQUESTS_PER_DOMAIN': 16, 'COOKIES_ENABLED': False, 'DOWNLOAD_DELAY': 3, 'FEED_EXPORT_ENCODING': 'utf-8', 'FEED_EXPORT_FIELDS': ['source', 'date', 'text', 'reactions', 'likes', 'ahah', 'love', 'wow', 'sigh', 'grrr', 'comments', 'url'], 'FEED_FORMAT': 'csv', 'FEED_URI': 'volcado5.csv', 'HTTPCACHE_ENABLED': True, 'LOG_LEVEL': 'INFO', 'NEWSPIDER_MODULE': 'fbcrawl.spiders', 'SPIDER_MODULES': ['fbcrawl.spiders'], 'TELNETCONSOLE_ENABLED': False}
2019-01-28 17:40:16 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.memusage.MemoryUsage', 'scrapy.extensions.feedexport.FeedExporter', 'scrapy.extensions.logstats.LogStats', 'scrapy.extensions.throttle.AutoThrottle']
2019-01-28 17:40:16 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'fbcrawl.middlewares.FbcrawlDownloaderMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats', 'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware']
2019-01-28 17:40:16 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'fbcrawl.middlewares.FbcrawlSpiderMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-01-28 17:40:16 [scrapy.middleware] INFO: Enabled item pipelines: ['fbcrawl.pipelines.FbcrawlPipeline']
2019-01-28 17:40:16 [scrapy.core.engine] INFO: Spider opened
2019-01-28 17:40:16 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-01-28 17:40:16 [fb] INFO: Spider opened: fb
2019-01-28 17:40:16 [fb] INFO: Spider opened: fb
2019-01-28 17:40:17 [fb] INFO: Parse function called on https://mbasic.facebook.com/DonaldTrump
2019-01-28 17:40:17 [scrapy.core.scraper] ERROR: Spider error processing <GET https://mbasic.facebook.com/DonaldTrump> (referer: https://mbasic.facebook.com/home.php?refsrc=https%3A%2F%2Fmbasic.facebook.com%2F&m_sess=c2VzczoxMDAwMTA3MDE5OTA1MDY6MzY6Z1BFNktaUWNwR1ZxZ2c6MjoxNTQ4NjkxNjQ1OjE1MDU1OjM5MDE6&_rdr)
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/scrapy/utils/defer.py", line 102, in iter_errback
    yield next(it)
  File "/usr/local/lib/python3.6/dist-packages/scrapy/spidermiddlewares/offsite.py", line 30, in process_spider_output
    for x in result:
  File "/home/paula/master/fbcrawl/middlewares.py", line 35, in process_spider_output
    for i in result:
  File "/usr/local/lib/python3.6/dist-packages/scrapy/spidermiddlewares/referer.py", line 339, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/usr/local/lib/python3.6/dist-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/usr/local/lib/python3.6/dist-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/home/paula/master/fbcrawl/spiders/fbcrawl.py", line 88, in parse_page
    temp_post = response.urljoin(post[0])
IndexError: list index out of range
2019-01-28 17:40:17 [scrapy.core.engine] INFO: Closing spider (finished)
2019-01-28 17:40:17 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 1625, 'downloader/request_count': 4, 'downloader/request_method_count/GET': 3, 'downloader/request_method_count/POST': 1, 'downloader/response_bytes': 29241, 'downloader/response_count': 4, 'downloader/response_status_count/200': 3, 'downloader/response_status_count/302': 1, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2019, 1, 28, 16, 40, 17, 521668), 'httpcache/hit': 4, 'log_count/ERROR': 1, 'log_count/INFO': 10, 'memusage/max': 49934336, 'memusage/startup': 49934336, 'request_depth_max': 2, 'response_received_count': 3, 'scheduler/dequeued': 4, 'scheduler/dequeued/memory': 4, 'scheduler/enqueued': 4, 'scheduler/enqueued/memory': 4, 'spider_exceptions/IndexError': 1, 'start_time': datetime.datetime(2019, 1, 28, 16, 40, 16, 981032)}
2019-01-28 17:40:17 [scrapy.core.engine] INFO: Spider closed (finished)

rugantio commented 5 years ago

The "page" attribute is the actual page, not the full link, try:

scrapy crawl fb -a email="------------@gmail.com" -a password="--------" -a page="DonaldTrump" -o donald.csv

I should try to be clearer in the docs :+1:

chelsas69 commented 5 years ago

Hello again, and thanks for answering so fast. I have changed the argument, but I still get the same error... What could be happening? Thank you.

$ scrapy crawl fb -a email="---@gmail.com" -a password="---" -a page="DonaldTrump" -o donald.csv

2019-01-28 19:35:29 [scrapy.utils.log] INFO: Scrapy 1.5.2 started (bot: fbcrawl)
2019-01-28 19:35:29 [scrapy.utils.log] INFO: Versions: lxml 4.3.0.0, libxml2 2.9.9, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 18.9.0, Python 3.6.7 (default, Oct 22 2018, 11:32:17) - [GCC 8.2.0], pyOpenSSL 19.0.0 (OpenSSL 1.1.1a 20 Nov 2018), cryptography 2.1.4, Platform Linux-4.15.0-43-generic-x86_64-with-Ubuntu-18.04-bionic
2019-01-28 19:35:29 [scrapy.crawler] INFO: Overridden settings: {'AUTOTHROTTLE_ENABLED': True, 'BOT_NAME': 'fbcrawl', 'CONCURRENT_REQUESTS': 32, 'CONCURRENT_REQUESTS_PER_DOMAIN': 16, 'COOKIES_ENABLED': False, 'DOWNLOAD_DELAY': 3, 'FEED_EXPORT_ENCODING': 'utf-8', 'FEED_EXPORT_FIELDS': ['source', 'date', 'text', 'reactions', 'likes', 'ahah', 'love', 'wow', 'sigh', 'grrr', 'comments', 'url'], 'FEED_FORMAT': 'csv', 'FEED_URI': 'volcado8.csv', 'HTTPCACHE_ENABLED': True, 'LOG_LEVEL': 'INFO', 'NEWSPIDER_MODULE': 'fbcrawl.spiders', 'SPIDER_MODULES': ['fbcrawl.spiders'], 'TELNETCONSOLE_ENABLED': False, 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36'}
2019-01-28 19:35:30 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.memusage.MemoryUsage', 'scrapy.extensions.feedexport.FeedExporter', 'scrapy.extensions.logstats.LogStats', 'scrapy.extensions.throttle.AutoThrottle']
2019-01-28 19:35:30 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'fbcrawl.middlewares.FbcrawlDownloaderMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats', 'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware']
2019-01-28 19:35:30 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'fbcrawl.middlewares.FbcrawlSpiderMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-01-28 19:35:30 [scrapy.middleware] INFO: Enabled item pipelines: ['fbcrawl.pipelines.FbcrawlPipeline']
2019-01-28 19:35:30 [scrapy.core.engine] INFO: Spider opened
2019-01-28 19:35:30 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-01-28 19:35:30 [fb] INFO: Spider opened: fb
2019-01-28 19:35:30 [fb] INFO: Spider opened: fb
2019-01-28 19:35:30 [fb] INFO: Parse function called on https://mbasic.facebook.com/DonaldTrump
2019-01-28 19:35:31 [scrapy.core.scraper] ERROR: Spider error processing <GET https://mbasic.facebook.com/DonaldTrump> (referer: https://mbasic.facebook.com/home.php?refsrc=https%3A%2F%2Fmbasic.facebook.com%2F&m_sess=c2VzczoxMDAwMTA3MDE5OTA1MDY6MzY6Z1BFNktaUWNwR1ZxZ2c6MjoxNTQ4NjkxNjQ1OjE1MDU1OjM5MDE6&_rdr)
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/scrapy/utils/defer.py", line 102, in iter_errback
    yield next(it)
  File "/usr/local/lib/python3.6/dist-packages/scrapy/spidermiddlewares/offsite.py", line 30, in process_spider_output
    for x in result:
  File "/home/paula/master/fbcrawl/middlewares.py", line 35, in process_spider_output
    for i in result:
  File "/usr/local/lib/python3.6/dist-packages/scrapy/spidermiddlewares/referer.py", line 339, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/usr/local/lib/python3.6/dist-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/usr/local/lib/python3.6/dist-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/home/paula/master/fbcrawl/spiders/fbcrawl.py", line 88, in parse_page
    temp_post = response.urljoin(post[0])
IndexError: list index out of range
2019-01-28 19:35:31 [scrapy.core.engine] INFO: Closing spider (finished)
2019-01-28 19:35:31 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 1945, 'downloader/request_count': 4, 'downloader/request_method_count/GET': 3, 'downloader/request_method_count/POST': 1, 'downloader/response_bytes': 29241, 'downloader/response_count': 4, 'downloader/response_status_count/200': 3, 'downloader/response_status_count/302': 1, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2019, 1, 28, 18, 35, 31, 127515), 'httpcache/hit': 4, 'log_count/ERROR': 1, 'log_count/INFO': 10, 'memusage/max': 50135040, 'memusage/startup': 50135040, 'request_depth_max': 2, 'response_received_count': 3, 'scheduler/dequeued': 4, 'scheduler/dequeued/memory': 4, 'scheduler/enqueued': 4, 'scheduler/enqueued/memory': 4, 'spider_exceptions/IndexError': 1, 'start_time': datetime.datetime(2019, 1, 28, 18, 35, 30, 578568)}
2019-01-28 19:35:31 [scrapy.core.engine] INFO: Spider closed (finished)

rugantio commented 5 years ago

I just noticed that the parser breaks while trying to process a timestamp... I will work on it asap, thx for the heads up
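For the record, the IndexError in the logs comes from taking the first element of an empty list: the XPath that should collect the post links matches nothing on that page, so post is [] and post[0] blows up. A minimal sketch of the failing pattern and a defensive guard (the XPath here is illustrative, not the exact selector fbcrawl uses):

    # inside a Scrapy callback such as parse_page(self, response)
    post = response.xpath('//div[@data-ft]//a/@href').extract()  # hypothetical selector
    if post:
        temp_post = response.urljoin(post[0])  # the line that raised IndexError when post was empty
    else:
        # nothing matched: different page layout, or a login/identity check got in the way
        self.logger.warning('no posts matched on %s', response.url)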

rugantio commented 5 years ago

@chelsas69 I've fixed this issue, it works fine for me now, please give it a try. If it still doesn't work try with another fb account, it might be that fb checks your identity (see #2)

chelsas69 commented 5 years ago

Hello again.

Sorry, but I cannot get results. I have created another Facebook account. I have noticed that with some Facebook pages, such as Donald Trump's, I get an error, which I show you below. And when I request other profiles, no error appears, but no data is scraped.

First of all, I add the result of the request for Donald Trump's Facebook page, and then for another profile, where no data is scraped.

root@Root:~/tfm$ scrapy crawl fb -a email="xxxxxx@gmail.com" -a password="xxxxxx" -a page="/DonaldTrump" -o 3.csv

2019-02-12 19:22:04 [scrapy.utils.log] INFO: Scrapy 1.5.2 started (bot: fbcrawl)
2019-02-12 19:22:04 [scrapy.utils.log] INFO: Versions: lxml 4.3.0.0, libxml2 2.9.9, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 18.9.0, Python 3.6.7 (default, Oct 22 2018, 11:32:17) - [GCC 8.2.0], pyOpenSSL 19.0.0 (OpenSSL 1.1.1a 20 Nov 2018), cryptography 2.1.4, Platform Linux-4.15.0-43-generic-x86_64-with-Ubuntu-18.04-bionic
2019-02-12 19:22:04 [scrapy.crawler] INFO: Overridden settings: {'AUTOTHROTTLE_ENABLED': True, 'BOT_NAME': 'fbcrawl', 'CONCURRENT_REQUESTS': 32, 'CONCURRENT_REQUESTS_PER_DOMAIN': 16, 'CONCURRENT_REQUESTS_PER_IP': 16, 'COOKIES_ENABLED': False, 'DOWNLOAD_DELAY': 3, 'FEED_EXPORT_ENCODING': 'utf-8', 'FEED_EXPORT_FIELDS': ['source', 'date', 'text', 'reactions', 'likes', 'ahah', 'love', 'wow', 'sigh', 'grrr', 'comments', 'url'], 'FEED_FORMAT': 'csv', 'FEED_URI': '3.csv', 'HTTPCACHE_ENABLED': True, 'LOG_LEVEL': 'INFO', 'NEWSPIDER_MODULE': 'fbcrawl.spiders', 'SPIDER_MODULES': ['fbcrawl.spiders'], 'TELNETCONSOLE_ENABLED': False, 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36'}
2019-02-12 19:22:05 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.memusage.MemoryUsage', 'scrapy.extensions.feedexport.FeedExporter', 'scrapy.extensions.logstats.LogStats', 'scrapy.extensions.throttle.AutoThrottle']
2019-02-12 19:22:05 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'fbcrawl.middlewares.FbcrawlDownloaderMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats', 'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware']
2019-02-12 19:22:05 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'fbcrawl.middlewares.FbcrawlSpiderMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-02-12 19:22:05 [scrapy.middleware] INFO: Enabled item pipelines: ['fbcrawl.pipelines.FbcrawlPipeline']
2019-02-12 19:22:05 [scrapy.core.engine] INFO: Spider opened
2019-02-12 19:22:05 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-02-12 19:22:05 [fb] INFO: Spider opened: fb
2019-02-12 19:22:05 [fb] INFO: Spider opened: fb
2019-02-12 19:22:05 [fb] INFO: Parse function called on https://mbasic.facebook.com/DonaldTrump
2019-02-12 19:22:05 [scrapy.core.scraper] ERROR: Spider error processing <GET https://mbasic.facebook.com/DonaldTrump> (referer: https://mbasic.facebook.com/login.php?next=https%3A%2F%2Fmbasic.facebook.com%2Fhome.php%3Frefsrc%3Dhttps%253A%252F%252Fmbasic.facebook.com%252F&refsrc=https%3A%2F%2Fmbasic.facebook.com%2F&_rdr)
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/scrapy/utils/defer.py", line 102, in iter_errback
    yield next(it)
  File "/usr/local/lib/python3.6/dist-packages/scrapy/spidermiddlewares/offsite.py", line 30, in process_spider_output
    for x in result:
  File "/home/paula/tfm/fbcrawl/middlewares.py", line 35, in process_spider_output
    for i in result:
  File "/usr/local/lib/python3.6/dist-packages/scrapy/spidermiddlewares/referer.py", line 339, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/usr/local/lib/python3.6/dist-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/usr/local/lib/python3.6/dist-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/home/paula/tfm/fbcrawl/spiders/fbcrawl.py", line 87, in parse_page
    temp_post = response.urljoin(post[0])
IndexError: list index out of range
2019-02-12 19:22:06 [scrapy.core.engine] INFO: Closing spider (finished)
2019-02-12 19:22:06 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 2360, 'downloader/request_count': 5, 'downloader/request_method_count/GET': 4, 'downloader/request_method_count/POST': 1, 'downloader/response_bytes': 24113, 'downloader/response_count': 5, 'downloader/response_status_count/200': 3, 'downloader/response_status_count/302': 2, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2019, 2, 12, 18, 22, 6, 12014), 'httpcache/hit': 5, 'log_count/ERROR': 1, 'log_count/INFO': 10, 'memusage/max': 50298880, 'memusage/startup': 50298880, 'request_depth_max': 2, 'response_received_count': 3, 'scheduler/dequeued': 5, 'scheduler/dequeued/memory': 5, 'scheduler/enqueued': 5, 'scheduler/enqueued/memory': 5, 'spider_exceptions/IndexError': 1, 'start_time': datetime.datetime(2019, 2, 12, 18, 22, 5, 480030)}
2019-02-12 19:22:06 [scrapy.core.engine] INFO: Spider closed (finished)

Here is the request for another profile, with no error but also no results:

root@Root:~/tfm$ scrapy crawl fb -a email="xxxxxx@gmail.com" -a password="xxxxx" -a page="/adamvarro" -o 2.csv

2019-02-12 19:18:11 [scrapy.utils.log] INFO: Scrapy 1.5.2 started (bot: fbcrawl)
2019-02-12 19:18:11 [scrapy.utils.log] INFO: Versions: lxml 4.3.0.0, libxml2 2.9.9, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 18.9.0, Python 3.6.7 (default, Oct 22 2018, 11:32:17) - [GCC 8.2.0], pyOpenSSL 19.0.0 (OpenSSL 1.1.1a 20 Nov 2018), cryptography 2.1.4, Platform Linux-4.15.0-43-generic-x86_64-with-Ubuntu-18.04-bionic
2019-02-12 19:18:11 [scrapy.crawler] INFO: Overridden settings: {'AUTOTHROTTLE_ENABLED': True, 'BOT_NAME': 'fbcrawl', 'CONCURRENT_REQUESTS': 32, 'CONCURRENT_REQUESTS_PER_DOMAIN': 16, 'CONCURRENT_REQUESTS_PER_IP': 16, 'COOKIES_ENABLED': False, 'DOWNLOAD_DELAY': 3, 'FEED_EXPORT_ENCODING': 'utf-8', 'FEED_EXPORT_FIELDS': ['source', 'date', 'text', 'reactions', 'likes', 'ahah', 'love', 'wow', 'sigh', 'grrr', 'comments', 'url'], 'FEED_FORMAT': 'csv', 'FEED_URI': '2.csv', 'HTTPCACHE_ENABLED': True, 'LOG_LEVEL': 'INFO', 'NEWSPIDER_MODULE': 'fbcrawl.spiders', 'SPIDER_MODULES': ['fbcrawl.spiders'], 'TELNETCONSOLE_ENABLED': False, 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36'}
2019-02-12 19:18:12 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.memusage.MemoryUsage', 'scrapy.extensions.feedexport.FeedExporter', 'scrapy.extensions.logstats.LogStats', 'scrapy.extensions.throttle.AutoThrottle']
2019-02-12 19:18:12 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'fbcrawl.middlewares.FbcrawlDownloaderMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats', 'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware']
2019-02-12 19:18:12 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'fbcrawl.middlewares.FbcrawlSpiderMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-02-12 19:18:12 [scrapy.middleware] INFO: Enabled item pipelines: ['fbcrawl.pipelines.FbcrawlPipeline']
2019-02-12 19:18:12 [scrapy.core.engine] INFO: Spider opened
2019-02-12 19:18:12 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-02-12 19:18:12 [fb] INFO: Spider opened: fb
2019-02-12 19:18:12 [fb] INFO: Spider opened: fb
2019-02-12 19:18:13 [fb] INFO: Parse function called on https://mbasic.facebook.com/adamvarro
2019-02-12 19:18:17 [scrapy.core.engine] INFO: Closing spider (finished)
2019-02-12 19:18:17 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 2977, 'downloader/request_count': 6, 'downloader/request_method_count/GET': 5, 'downloader/request_method_count/POST': 1, 'downloader/response_bytes': 22145, 'downloader/response_count': 6, 'downloader/response_status_count/200': 3, 'downloader/response_status_count/302': 3, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2019, 2, 12, 18, 18, 17, 578890), 'httpcache/firsthand': 2, 'httpcache/hit': 4, 'httpcache/miss': 2, 'httpcache/store': 2, 'log_count/INFO': 10, 'memusage/max': 50184192, 'memusage/startup': 50184192, 'request_depth_max': 2, 'response_received_count': 3, 'scheduler/dequeued': 6, 'scheduler/dequeued/memory': 6, 'scheduler/enqueued': 6, 'scheduler/enqueued/memory': 6, 'start_time': datetime.datetime(2019, 2, 12, 18, 18, 12, 740586)}
2019-02-12 19:18:17 [scrapy.core.engine] INFO: Spider closed (finished)

Thank you very much, and I hope you can help me.

rugantio commented 5 years ago

From the errors I can see that you haven't updated the code; you have to either pull the repo or re-clone it to get the fix. Also, fbcrawl doesn't work on personal profiles, so "adamvarro" and such are not scrapable.
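To update without re-cloning, pulling from inside the repo directory is enough; roughly (assuming a standard git clone):

    cd fbcrawl   # the directory created by the original clone
    git pull     # fetch and merge the latest commits, which contain the fix

Then re-run the same scrapy crawl command.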

chelsas69 commented 5 years ago

Many thanks. Cloning the repository has solved the problem.

tridelt commented 4 years ago

@rugantio wrote:

> I just noticed that the parser breaks while trying to process a timestamp... I will work on it asap, thx for the heads up

I am really impressed by your work! How did you figure out that this is the issue? Would love to know so I can learn more about debugging. :)
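For instance, was it something like rendering the response the spider actually received and checking the selectors against it? A sketch of what I would try, using Scrapy's stock debugging helper (my guess, not necessarily your workflow):

    from scrapy.utils.response import open_in_browser

    def parse_page(self, response):
        # open the exact HTML the spider received, which for mbasic pages
        # often differs from what a normal browser shows, so broken XPaths
        # become visible immediately
        open_in_browser(response)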