The way I've worked around this is to copy the run IDs of the stopped containers from the admin interface, put them in an array, and re-enqueue them like so:
[599154, 599155].each { |id| RunWorker.perform_async(id) }
We're not really seeing this anymore because the regular docker prune -af is destroying all those containers. When the job gets run again a new container gets created, so they don't get stuck (as I understand it).
I'm going to close this; reopen if you think otherwise :)
> the regular docker prune -af is destroying all those containers
That really doesn't sound like something we want it to be doing. It means a scraper can run and finish OK, and while the background worker is still finishing up, the container can be deleted out from under it, which is obviously not a good thing.
@auxesis can you confirm this is the expected behaviour? If so, please open a new issue to fix it, given the problem I describe above.
> We're not really seeing this anymore because the regular docker prune -af is destroying all those containers. When the job gets run again a new container gets created, so they don't get stuck (as I understand it).
We've switched off those prunes and are seeing a buildup of stopped containers with no jobs again, so I'm reopening this.
Was this fixed in 99605e2bc477cca995b5594cd118994cb3519172?
There are currently lots of stopped scrapers. This means there are no slots available until they finish. They can be cleaned up by creating jobs for them, but there's no easy way to do this.
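One way to make that cleanup less manual might be something like the console sketch below. It assumes the runs are exposed through an ActiveRecord Run model with finished_at and docker_container_id columns, which is a guess at the morph.io schema rather than anything confirmed here; RunWorker.perform_async is the same call used in the workaround above.

# Hedged sketch: the Run model and its finished_at / docker_container_id
# columns are assumptions about the schema, so check them against the
# actual code before running this in a Rails console.
stuck_run_ids = Run.where(finished_at: nil)
                   .where.not(docker_container_id: nil)
                   .pluck(:id)

# Re-enqueue a background job for each stuck run, the same as the manual
# workaround above but without copying IDs out of the admin interface.
stuck_run_ids.each { |id| RunWorker.perform_async(id) }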
It's also worth noting that these stopped containers with no jobs did not show up when running the app:emergency:show_queue_run_inconsistencies rake task.
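For context on why a check like that might miss these, here's a rough sketch of the kind of queue-versus-database comparison such a task could be doing. The "default" queue name and the Run columns are assumptions rather than anything read from the morph.io source; Sidekiq::Queue and the job record's klass/args are standard Sidekiq API.

require "sidekiq/api"

# Hedged sketch of a queue/run consistency check: find runs the database
# still considers in flight that have no RunWorker job waiting in the queue.
queued_run_ids = Sidekiq::Queue.new("default")
                               .select { |job| job.klass == "RunWorker" }
                               .map { |job| job.args.first }

in_flight_run_ids = Run.where(finished_at: nil).pluck(:id)

# Runs with no queued job: the "stopped container, no job" state described
# above. A check like this ignores jobs that are mid-execution
# (Sidekiq::Workers), which is one way it could fail to flag these containers.
orphaned = in_flight_run_ids - queued_run_ids
puts "Runs with no queued job: #{orphaned.inspect}"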