During a run this morning, the CSV writer raised an exception while executing. It was not caught by scraper code; it was caught by the greenlet machinery, which terminated the task. The terminated task was the backend task, so all of the database requests queued up and the process's memory usage ballooned. This behaviour is wrong.
Modules in the scraper should terminate when a code exception or other drastic event occurs, and they should report the event as part of the termination process.
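A minimal sketch of the intended behaviour, using only the standard library (the `run_module` wrapper and the `"csv_writer"` name are hypothetical, not existing scraper code): each module runs inside a wrapper that catches any unhandled exception, reports it, and then re-raises so the failure propagates instead of silently killing one task while work piles up.

```python
import logging
import traceback

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("scraper")


def run_module(name, target, *args, **kwargs):
    """Run a scraper module's entry point.

    On any unhandled exception: report it with a full traceback,
    then re-raise so the supervisor can terminate the whole scraper
    rather than leaving queued requests to balloon memory.
    """
    try:
        return target(*args, **kwargs)
    except Exception:
        log.error("module %r crashed:\n%s", name, traceback.format_exc())
        raise  # propagate: terminate, don't limp on silently
```

In a gevent-based design the same idea can be expressed by attaching a `link_exception` callback to each spawned greenlet so a crash is reported and escalated instead of being swallowed.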