Open · SolomonShorser-OICR opened this issue 9 years ago
When the Provisioner launches several VMs in a single batch and Ansible fails to provision only one of them (for example, an SSH timeout while connecting), the entire batch is killed at the end of the run, because the playbook returns a non-zero exit code even when only a single VM fails. This is less than ideal when provisioning takes a while and large batches are being provisioned at a time.

(This was originally created on Consonance, but that was the wrong place: https://github.com/Consonance/consonance/issues/97)

Another issue I've discovered here is that the workers that provisioned OK may actually have enough time to pull a job from the queue before they are reaped. So you could have scenarios where your job queue is draining but no work is getting done, because the entire fleet is killed whenever one or two nodes fail to provision. Instead of killing the fleet at the end, would it be possible to do it at the beginning, as soon as a failure on one node is detected? Ideally, only the failed node would be reaped, but I realize that might be difficult to do (it would probably involve parsing Ansible's text output; a rough sketch of one possible approach follows below).
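A minimal sketch of the "reap only the failed node" idea, assuming the Provisioner can shell out to `ansible-playbook` and that an Ansible version with the `json` stdout callback is available, so the per-host recap can be read as structured data instead of scraping free-form text output. The playbook path `provision.yml`, inventory `hosts.ini`, and the function name are hypothetical placeholders, not names from this project.

```python
#!/usr/bin/env python3
"""Sketch: find which hosts failed to provision instead of reaping the whole batch."""
import json
import os
import subprocess


def provision_and_find_failures(playbook="provision.yml", inventory="hosts.ini"):
    """Run the playbook once and return the set of hosts that failed or were unreachable."""
    # Ask Ansible to emit its output as a single JSON document (json stdout callback).
    env = dict(os.environ, ANSIBLE_STDOUT_CALLBACK="json")
    result = subprocess.run(
        ["ansible-playbook", "-i", inventory, playbook],
        env=env,
        capture_output=True,
        text=True,
    )
    # The "stats" section is the per-host recap: ok/changed/unreachable/failures counts.
    recap = json.loads(result.stdout).get("stats", {})
    return {
        host
        for host, counts in recap.items()
        if counts.get("failures", 0) > 0 or counts.get("unreachable", 0) > 0
    }


if __name__ == "__main__":
    bad_hosts = provision_and_find_failures()
    if bad_hosts:
        # Reap only these nodes (e.g. terminate them via the cloud API and/or retry them)
        # instead of treating the non-zero ansible-playbook exit code as "kill the fleet".
        print("Hosts to reap:", ", ".join(sorted(bad_hosts)))
    else:
        print("All hosts provisioned OK")
```

With something like this, the workers that provisioned successfully would be left alone to keep pulling jobs from the queue, and only the one or two failed nodes would need to be terminated or retried.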