wyardley opened this issue 2 years ago
Here's an example one.
Note:
Warnings:
- Incomplete lock file information for providers
To see the full warning notes, run Terraform without -compact-warnings.
Terraform has been successfully initialized!
╷
│ Error: Required plugins are not installed
│
│ The installed provider plugins are not consistent with the packages
│ selected in the dependency lock file:
│ - registry.terraform.io/hashicorp/kubernetes: the cached package for registry.terraform.io/hashicorp/kubernetes 2.12.1 (in .terraform/providers) does not match any of the checksums recorded in the dependency lock file
│
│ Terraform uses external plugins to integrate with a variety of different
│ infrastructure services. You must install the required plugins before
│ running Terraform operations.
╵
is this still happening with v0.19.8?
Hard to know since it's transient, but I would assume so. The more I think about it, the more I am convinced it's due to the plugin cache thing and the init happening in parallel.
I can think of some possible fixes, but they're beyond what I can implement. At a high level, though, one of them involves `terraform providers mirror`.
Got this again just now:
Initializing the backend...
Successfully configured the backend "gcs"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Finding gavinbunney/kubectl versions matching "1.14.0"...
- Finding integrations/github versions matching "4.29.0"...
- Finding hashicorp/google versions matching ">= 4.18.0, 4.33.0"...
- Finding hashicorp/kubernetes versions matching "~> 2.10, 2.13.0"...
- Finding fluxcd/flux versions matching "0.16.0"...
- Finding hashicorp/google-beta versions matching ">= 4.29.0, < 5.0.0"...
- Finding latest version of hashicorp/random...
- Using hashicorp/google-beta v4.33.0 from the shared cache directory
- Using hashicorp/random v3.3.2 from the shared cache directory
- Using gavinbunney/kubectl v1.14.0 from the shared cache directory
- Using integrations/github v4.29.0 from the shared cache directory
- Using hashicorp/google v4.33.0 from the shared cache directory
- Installing hashicorp/kubernetes v2.13.0...
- Installed hashicorp/kubernetes v2.13.0 (signed by HashiCorp)
- Using fluxcd/flux v0.16.0 from the shared cache directory
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Warnings:
- Incomplete lock file information for providers
To see the full warning notes, run Terraform without -compact-warnings.
Terraform has been successfully initialized!
╷
│ Error: Required plugins are not installed
│
│ The installed provider plugins are not consistent with the packages
│ selected in the dependency lock file:
│ - registry.terraform.io/hashicorp/kubernetes: the cached package for registry.terraform.io/hashicorp/kubernetes 2.13.0 (in .terraform/providers) does not match any of the checksums recorded in the dependency lock file
│
│ Terraform uses external plugins to integrate with a variety of different
│ infrastructure services. You must install the required plugins before
│ running Terraform operations.
╵
@jamengual

> is this still happening with v0.19.8?
I just got it with v0.19.8 and terraform 1.2.9.
What I saw in the Atlantis terraform output is that it did not fetch all the providers.
We also have these `Error: Required plugins are not installed` issues, and it's still happening in v0.22.2. Is there a way to un-close this issue?
The error shown by @wyardley:

│ Error: Required plugins are not installed
│
│ The installed provider plugins are not consistent with the packages
│ selected in the dependency lock file:
│ - registry.terraform.io/hashicorp/kubernetes: the cached package for registry.terraform.io/hashicorp/kubernetes 2.13.0 (in .terraform/providers) does not match any of the checksums recorded in the dependency lock file

seems like it could be resolved by stomping over the `.terraform.lock.hcl` file prior to `terraform init`, or by doing `terraform init -upgrade`.

Is that the same issue you're hitting, @vmdude?
`.terraform.lock.hcl` files are not versioned (through git, I mean) on our side, and we're using the same cache dir for all parallel plans (same as the issue creator). Couldn't removing the lock file in the `pre_workflow_hooks` cause another race condition, where we remove a lock file being used by another parallel plan?
I'm curious if you get the error if you run `terraform init` with the extra args `-upgrade` to stomp over the lock files on every run.
Let me check and try (all parallel plan run in a few hours) and I'll get back to you with the output.
We get it during the first plan after version updates, and it goes away on the second plan. So I'm pretty confident this is because of Terraform's known issue of init with a shared cache not being parallel-safe. Having a small amount of splay would be one fix that would probably help, though I'm not sure if someone would be willing to implement that.
btw, we don't use a checked-in lockfile, and do have `-upgrade` in the init args.
@wyardley have you tried removing the `.terraform.lock.hcl` file and running `terraform init -upgrade`?
Do you folks get stack traces in your logs?
@nitrocode we don’t use or check in the lockfile. But see linked issue - I believe this has everything to do with tf not working well with parallel init. Once the new version is in the provider cache, the failure will not happen.
@nitrocode We have not yet been able to reproduce these errors (and get stack information) as they don't appear every time. We'll keep you posted when they do.
Should be reproducible if you clear the plugin cache or update a version of a provider that exists in multiple states that are being planned in parallel.
Once the provider cache is populated, the issue should not come up.
There are some changes coming in 1.4 that might make this worse in the case that the lockfile is not checked in.
It seems that when parallel planning/applying is enabled, each plan's `terraform init` impacts the others.
Some options for contributors
Please let me know if I missed any options. As always, we welcome PRs.
For (1), here is the current code: https://github.com/runatlantis/atlantis/blob/890a5f7502e4f9abac0c4aae253490f6989e0c8a/server/events/project_command_pool_executor.go#L13-L40
Here is a possible change:

```diff
 	}
+	time.Sleep(1 * time.Second)
 	go execute()
 }
```

That should at least start each job a second apart.
Or perhaps the first job could start, pause for a couple of seconds to ensure its init stage has passed, and then the subsequent jobs could all start at once?
@nitrocode yeah, agree. Some kind of configurable (e.g., `ATLANTIS_PARALLEL_PLAN_SPLAY` or `ATLANTIS_PARALLEL_PLAN_SLEEP`) or non-configurable (or even random) sleep could help a lot. Similarly, when parallel-planning some big states, we sometimes see the pod Atlantis is running on crash from resource exhaustion.
A couple of issues with option 2 (staggering the runs):
It might be something we could solve in the `init` step. Below is untested code.

```yaml
workflows:
  default:
    init:
      steps:
        - run: /usr/local/bin/terraform_init_retry
```
`/usr/local/bin/terraform_init_retry`:

```bash
#!/usr/bin/env bash
# Retry terraform init until it succeeds or we run out of attempts.
declare -i attempt=0
max_attempts=10
until terraform$ATLANTIS_TERRAFORM_VERSION init -no-color; do
  attempt+=1
  if [ "$attempt" -ge "$max_attempts" ]; then
    echo "$attempt / $max_attempts: giving up"
    exit 1
  fi
  echo "$attempt / $max_attempts: error thrown, rerunning init"
done
```
The `TF_PLUGIN_CACHE_DIR` can be set to a path inside the working directory, unique to the workspace. This would ensure that the provider cache is isolated per run.
```yaml
workflows:
  default:
    init:
      steps:
        - env:
            name: TF_PLUGIN_CACHE_DIR
            command: 'echo "$(pwd)/.terraform-cache-$WORKSPACE"'
    plan:
      steps:
        - env:
            name: TF_PLUGIN_CACHE_DIR
            command: 'echo "$(pwd)/.terraform-cache-$WORKSPACE"'
    apply:
      steps:
        - env:
            name: TF_PLUGIN_CACHE_DIR
            command: 'echo "$(pwd)/.terraform-cache-$WORKSPACE"'
```
What's odd about option 4 is that it's already the default behavior to cache the providers in the `.terraform` directory if `TF_PLUGIN_CACHE_DIR` is unset.
Regarding Terraform 1.4 and the new locking behavior, it seems that HashiCorp has added a new flag that needs to be set to retain the 1.3.x-and-earlier behavior on 1.4.x+:

`TF_PLUGIN_CACHE_MAY_BREAK_DEPENDENCY_LOCK_FILE=true`
If it were a cache per state, you wouldn’t need the new flag - it just helps avoid redownloading when there’s no lockfile and the cache is already populated.
I would guess users that both don't check in a lockfile and set `TF_PLUGIN_CACHE_DIR` would want to set that flag once upgrading to 1.4.x.
I agree with you that it's odd that this issue comes up at all, if Atlantis doesn't already do something to encourage Terraform to share a cache directory, and especially since I think the `.terraform` directory would also be new/unique per PR, per state.
For those who are still having this issue: there is now a setting that disables the plugin cache (thanks to #3720!), so option 4 in https://github.com/runatlantis/atlantis/issues/2412#issuecomment-1455082428 is no longer needed (but thank you for the solution!).

https://www.runatlantis.io/docs/server-configuration.html#use-tf-plugin-cache
Overview of the Issue
I'm occasionally getting some transient errors when running `atlantis plan`; currently, I have `ATLANTIS_PARALLEL_POOL_SIZE` set to `3`. It's most typically on states that have a lot of providers, giving me two possible theories; one of them involves the shared plugin cache (I see `Using integrations/foo vN.N.N from the shared cache directory` in the output; see link below for why this is not guaranteed to be safe).

I'm not able to reproduce it right at the moment, and don't have the exact error handy, so I'll update here next time the issue comes up.
Reproduction Steps
Run `atlantis plan`. Note: this is not consistently reproducible.
Logs
n/a
Environment details
Atlantis server-side config file: All config is from env vars / flags (with some kube secret references / other irrelevant stuff omitted)
Repo `atlantis.yaml` file:
Additional Context