thiswillbeyourgithub closed this 2 years ago
Good point, although this could easily be addressed by the script that is calling the downloader (I'm assuming you're not starting the script 100x by hand). I don't really think it's necessary to include this logic in the comment downloader itself.
Thanks for the quick answer.
I slightly disagree: I think it could be helpful to others, since it's easy to never think about this and never notice it.
To me this seems worth the 2 extra lines, but your call :).
Hi,
I've been using your script quite a lot recently — great work on this huge time saver!
I'm noticing something that could be a real issue though: the current script writes its output continuously (each retrieved comment is appended to the file as it arrives). This means that if I'm running your script on 100 channels and hit an error, I can't resume efficiently, because I don't know which downloads actually finished.
I think the best fix would be to store the comments in `{output}.temp` and, as a final step, rename `{output}.temp` to `{output}`.
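A minimal sketch of what I mean, assuming the comments are written line by line (the function and parameter names here are hypothetical, not the downloader's actual API):

```python
import os

def write_comments_atomically(output, comments):
    """Write all comments to a temp file, then rename it into place.

    os.replace is atomic on both POSIX and Windows, so `output` only
    ever appears once it is fully written. A crash mid-download leaves
    only the .temp file behind, so it's obvious which channels finished.
    """
    temp_path = output + ".temp"
    with open(temp_path, "w", encoding="utf-8") as f:
        for comment in comments:
            f.write(comment + "\n")
    # Commit: the finished file becomes visible in a single atomic step.
    os.replace(temp_path, output)
```

When resuming over 100 channels, any channel whose `{output}` file exists can then simply be skipped.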
Have a nice day!