Queens-Hacks / qcumber-scraper

Scrapes SOLUS and generates structured data

Use Daemon threads #4

Open mystor opened 10 years ago

mystor commented 10 years ago

In the current master branch, the scraper uses standard non-daemon threads and then calls .join() on each one to keep the program from exiting before the work is done.
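Concretely, the current shape is something like this (a self-contained sketch; the worker body and the job source are stand-ins, not the scraper's actual code):

```python
import queue
import threading

job_queue = queue.Queue()
for job in range(10):  # stand-in for the real job producer
    job_queue.put(job)

def worker():
    # Stand-in worker: drain the shared queue, then return.
    while True:
        try:
            job = job_queue.get_nowait()
        except queue.Empty:
            return
        print("scraped", job)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

# Non-daemon threads, joined one by one: the program waits until
# every worker has returned on its own.
for t in threads:
    t.join()
```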

I think it would be better to use daemon threads (daemon threads die when the parent thread dies) and then use .join() on the queue. We then never have to worry about the threads ending (they can be in an infinite loop reading from the queue), and yet they will still be cleaned up when the scraper exits.
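The proposed pattern would look roughly like this (again a hypothetical sketch, not code from the scraper; queue.join() blocks until every put() item has been matched by a task_done() call):

```python
import queue
import threading

job_queue = queue.Queue()

def worker():
    # Infinite loop: the daemon flag means these threads won't keep
    # the process alive once the main thread finishes.
    while True:
        job = job_queue.get()
        try:
            print("scraped", job)  # stand-in for the real work
        finally:
            job_queue.task_done()

for _ in range(4):
    threading.Thread(target=worker, daemon=True).start()

for job in range(10):  # stand-in for the real job producer
    job_queue.put(job)

# Blocks until every put() has been matched by a task_done(); after
# that the main thread ends and the daemon workers die with it.
job_queue.join()
```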

pR0Ps commented 10 years ago

The reason I used join() on the threads instead of the queue is that if there is an error with the credentials, the threads will exit without consuming any jobs. If join() was used on the queue instead of the threads, the program would lock up on an authentication error: all the threads would exit, but there would still be items left in the queue.

Plus, I don't think daemon threads work like that. According to the docs: "The entire Python program exits when no alive non-daemon threads are left". I've done a lot of work with threads in the past, and getting the damn things to stop and exit has always been an issue.

I prefer to explicitly wait until all the threads are finished (which will only happen when the queue is empty or the threads can't log in), rather than trust that the threads will automatically be killed when the scrape is complete (which isn't guaranteed to ever happen). Is there any advantage to doing it with daemon threads?
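To make the failure mode concrete, here is a sketch of why joining the threads returns where joining the queue would hang (login and AuthError are invented stand-ins for the scraper's real authentication, and login always fails here to simulate the bad-credentials case):

```python
import queue
import threading

class AuthError(Exception):
    """Hypothetical stand-in for the scraper's login failure."""

def login():
    # Hypothetical: raises when the credentials are bad; always
    # fails here to simulate an authentication error.
    raise AuthError

job_queue = queue.Queue()
for job in range(10):
    job_queue.put(job)

def worker():
    try:
        login()
    except AuthError:
        return  # bail out without consuming any jobs
    while True:
        try:
            job = job_queue.get_nowait()
        except queue.Empty:
            return
        print("scraped", job)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

# join() on the threads returns even though the queue is non-empty;
# job_queue.join() here would block forever, since nothing will ever
# call task_done() on the leftover items.
for t in threads:
    t.join()
```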

mystor commented 10 years ago

I just feel like it would be nice, in the future, to be able to add things to the queue without having to worry about whether all of the workers have gone and died.

I realize that there are more hurdles that I don't cover in this pull request. We should probably stick with your system for now.
