Author Name: Alex Norton
Original Redmine Issue: 597, http://www.fossology.org/issues/597
Original Date: 2012/01/27
Original Assignee: Alex Norton
Some agents don't update the number of items processed promptly, and for large uploads can go a very long time without changing this number. As a result, the scheduler chooses to kill the agent because it believes the agent has entered an infinite loop. There should either be a way for an agent to disable this check when the agent handles progress reporting itself, or a way for the agent to correctly update the number of items processed.
Adj2nest is currently the only agent with this problem; it uses a dummy fo_scheduler_heart(1) call to keep the scheduler from killing it while it processes large distributions. This workaround is not ideal and should be replaced with a longer-term solution.