Closed: SofyaMa closed this issue 4 years ago.
Sorry for the late response. A quick workaround is to run multiple queries, setting different `-bd` and `-ed` (begin/end date) parameters, and do independent runs. twitterscraper is not memory efficient. If you have more questions, please reopen or start a new issue.
Dear all,
I do not really know whether this problem can be solved somehow. My kernel dies whenever I try to scrape more than 100,000 tweets at a time. The problem is that for my research project at university I need to analyze about 2.2 million tweets. Does anybody have suggestions on how I can sort this issue out, besides buying a new computer :)? Or could anybody tell me what the limiting factor is? Thanks a lot!!