dbolack opened this issue 4 years ago
Hmm. I'm not sure how best to deal with this in Atlantis. I don't think it makes sense to build disk space checking into Atlantis. I think this is best dealt with via your own health checking.
Could we at least have a status or healthcheck endpoint that reports available disk and whether Atlantis can run? Or just add available disk to the /status output?
Sure, adding it to the /status endpoint makes sense.
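In the meantime, here's a rough sketch of what reporting available disk could look like, using only the Go standard library on Linux/Unix rather than anything Atlantis-specific (the data-dir path and JSON field names below are placeholders, not Atlantis's actual schema):

```go
package main

import (
	"encoding/json"
	"net/http"
	"syscall"
)

// diskStatus is a hypothetical payload; the real /status schema may differ.
type diskStatus struct {
	DataDir        string `json:"data_dir"`
	AvailableBytes uint64 `json:"available_bytes"`
}

// statusHandler reports free bytes on the filesystem backing dataDir.
func statusHandler(dataDir string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var fs syscall.Statfs_t
		if err := syscall.Statfs(dataDir, &fs); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(diskStatus{
			DataDir:        dataDir,
			AvailableBytes: uint64(fs.Bavail) * uint64(fs.Bsize),
		})
	}
}

func main() {
	// /atlantis-data is an assumed location; point this at your --data-dir.
	http.Handle("/status", statusHandler("/atlantis-data"))
	http.ListenAndServe(":4141", nil)
}
```

An external health check could then watch available_bytes and alert (or recycle the instance) before plans start failing.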
As a workaround, this pre_workflow_hook has been helping me keep my volumes clean:
https://github.com/runatlantis/atlantis/issues/3238#issuecomment-1869094337
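For context, a hook along these lines goes in the server-side repos config. This is only a rough sketch, not the exact hook from the linked comment; the data-dir path, repo layout, and 7-day retention are assumptions you'd want to adjust:

```yaml
# repos.yaml (server-side repo config) — rough sketch of a cleanup hook.
repos:
  - id: /.*/
    pre_workflow_hooks:
      - run: |
          # Remove per-repo clone directories untouched for 7 days.
          # Assumes a repos/<owner>/<repo>/... layout under an
          # assumed data dir of /atlantis-data; change both to match yours.
          find /atlantis-data/repos -mindepth 2 -maxdepth 2 -type d -mtime +7 -exec rm -rf {} + || true
```

Running the cleanup before each workflow keeps stale clones from slowly eating the volume between deploys.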
We run our Atlantis instance on a (now clearly too) small AWS instance. Because we manage multiple sets of state and use private GitHub repos for modules, we likely have a much larger cache and plan scratch space than most.
We have found that when disk space is close enough to full, we end up getting plan failures that frankly make no sense, along with some form of corruption in the plugin cache. The only time there was any clue was when a provider needed to be fetched for the plan.
I'm not entirely certain how to describe reproducing this other than: fill up the disk and run a big plan with multiple providers.
Example error: