dploeger closed this issue 4 years ago.
The key aspect here is: are you creating the azurerm cluster resource in the same apply run as the kubernetes resources?
In the same apply run. Sometimes the azure cluster already existed, and sometimes not (and was created by the apply run).
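For reference, the pattern under discussion looks roughly like the sketch below: the kubernetes provider is fed from the attributes of an azurerm_kubernetes_cluster resource created in the same apply run. The resource name "example" and the use of load_config_file = false are assumptions for illustration, not taken from the original report.

provider "kubernetes" {
  # Assumed sketch: credentials come from the AKS resource created in this run
  load_config_file       = false
  host                   = azurerm_kubernetes_cluster.example.kube_config.0.host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.cluster_ca_certificate)
}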
I experience similar issues with this setup: interestingly, everything works well if I run terraform apply locally or on any other machine, but once I run it in our CI (GitLab CI and/or Jenkins), I run into the same issue: the provider does not pick up the RKE configuration but instead dials localhost on port 80.
For CI we use cytopia/terragrunt (a clean run without any caches).
FYI, my problem was also related to https://github.com/terraform-providers/terraform-provider-kubernetes/issues/708#issuecomment-598122673
My interesting observation, though, was:
- with load_config_file left unset in the kubernetes and helm provider, it was not working
- with load_config_file set to false in both the kubernetes and helm provider, it was working
Could someone explain to me why in one case it's necessary to set load_config_file = false, and in the other case, with an already existing kubeconfig file, it isn't? Furthermore, it seems as if the kubeconfig values get overwritten anyway.
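For anyone comparing the two cases, here is a minimal sketch of explicitly disabling kubeconfig loading in both providers. The var.cluster_host, var.cluster_ca_certificate, and var.cluster_token references are placeholders (they would come from your own cluster resource or data source), and this is only one possible setup, not the original poster's configuration.

provider "kubernetes" {
  # Assumed sketch: ignore any kubeconfig file and rely on explicit credentials
  load_config_file       = false
  host                   = var.cluster_host
  cluster_ca_certificate = var.cluster_ca_certificate
  token                  = var.cluster_token
}

provider "helm" {
  kubernetes {
    # The helm provider's embedded Kubernetes client needs the same explicit settings
    load_config_file       = false
    host                   = var.cluster_host
    cluster_ca_certificate = var.cluster_ca_certificate
    token                  = var.cluster_token
  }
}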
Had the same issue with version 1.11.2. Solved it the following way:
Downgraded to 1.10.0 and received the error:
Error: Failed to configure: username/password or bearer token may be set, but not both
Removed username/password and left only client_certificate/client_key/cluster_ca_certificate.
Problem solved, and now everything works with 1.11.2.
Enjoy.
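For context, the resulting provider block would look roughly like this sketch, with only certificate-based credentials and no username/password. The var.* references are placeholders (assumed to hold base64-encoded values), and whether you also need load_config_file = false depends on your setup; this is not the commenter's exact configuration.

provider "kubernetes" {
  version                = "~> 1.11.2"
  load_config_file       = false
  host                   = var.cluster_endpoint
  # Certificate credentials only; username/password removed, per the fix described above
  client_certificate     = base64decode(var.client_certificate)
  client_key             = base64decode(var.client_key)
  cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
}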
I have the issue with 1.11.4 on EKS. It's as if the provider were initialized with default settings, even though my module uses the credentials from the EKS cluster. I found no workaround for the issue. This is really frustrating.
I verified that I do not have any other kubernetes provider configured that could override it. I'm still unsure, but it could be related to the fact that I'm using Terragrunt :shrug:
I just tried reverting to version 1.10.0 of the provider. It worked: I managed to create the resources, but the next plan failed with:
Error: namespaces "my_namespace" is forbidden: User "system:anonymous" cannot get resource "namespaces" in API group "" in the namespace "my_namespace"
I guess it is related to EKS RBAC, but how is it possible to avoid the anonymous user without a kubeconfig?
I managed to make it work with
provider "kubernetes" {
version = "~> 1.11.0"
load_config_file = false
host = aws_eks_cluster.eks.endpoint
cluster_ca_certificate = base64decode(aws_eks_cluster.eks.certificate_authority.0.data)
exec {
api_version = "client.authentication.k8s.io/v1alpha1"
args = ["token", "-i", aws_eks_cluster.eks.name, "-r", "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/MyRole"]
command = "aws-iam-authenticator"
}
}
I think I understand what is happening here.
data "aws_eks_cluster_auth" "cluster" {
name = module.eks.cluster_id
}
Getting the token on each provider call, as in the solution above, works just fine.
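For completeness, a minimal sketch of that approach: the token from aws_eks_cluster_auth is refreshed on every run and passed straight to the provider. The data.aws_eks_cluster data source is an assumption here (used only to obtain the endpoint and CA), and the module.eks.cluster_id reference is a placeholder taken from the comment above.

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  load_config_file       = false
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  # Short-lived token fetched on each plan/apply instead of a static kubeconfig
  token                  = data.aws_eks_cluster_auth.cluster.token
}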
I have the same issue when using Kubernetes Provider > 1.10 (maybe related to https://github.com/hashicorp/terraform-provider-kubernetes/issues/759). Using Provider Version 1.10.0 works as expected. 1.11 and 1.12 do not work with the following config running inside a Kubernetes Cluster:
KUBE_LOAD_CONFIG_FILE=false
KUBERNETES_SERVICE_HOST=<k8s-host>
KUBERNETES_SERVICE_PORT=443
Steps to reproduce: apply the configuration below.
Results in Error: Post "http://localhost/api/v1/namespaces/default/secrets": dial tcp 127.0.0.1:80: connect: connection refused
provider "kubernetes" {
version = "~> 1.11"
}
resource "kubernetes_secret" "test" {
metadata {
name = "test"
namespace = "default"
}
data = {
test = "data"
}
}
I tried to configure the Kubernetes Provider using load_config_file and KUBE_LOAD_CONFIG_FILE. Enabling debug shows the following: [WARN] Invalid provider configuration was supplied. Provider operations likely to fail: invalid configuration: no configuration has been provided
@etwillbefine I wasn't able to reproduce the issue with the configuration you provided. I ran a test inside a Debian container in a Pod on a 1.18 cluster and it worked as expected for me. See the output below.
root@test-708:/test-708# env | grep KUBERNETES | sort
KUBERNETES_PORT=tcp://10.3.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.3.0.1:443
KUBERNETES_PORT_443_TCP_ADDR=10.3.0.1
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_HOST=10.3.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
root@test-708:/test-708# cat main.tf
provider "kubernetes" {
version = "~> 1.11"
load_config_file = "false"
}
resource "kubernetes_namespace" "test" {
metadata {
name = "test"
}
}
root@test-708:/test-708# terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/kubernetes versions matching "~> 1.11"...
- Installing hashicorp/kubernetes v1.13.1...
- Installed hashicorp/kubernetes v1.13.1 (signed by HashiCorp)
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
root@test-708:/test-708# terraform version
Terraform v0.13.2
+ provider registry.terraform.io/hashicorp/kubernetes v1.13.1
root@test-708:/test-708# terraform apply -auto-approve
kubernetes_namespace.test: Creating...
Error: namespaces is forbidden: User "system:serviceaccount:default:default" cannot create resource "namespaces" in API group "" at the cluster scope
on main.tf line 6, in resource "kubernetes_namespace" "test":
6: resource "kubernetes_namespace" "test" {
I'm going to close this issue as it's become a catch-all for credentials misconfigurations. Please open separate issues if you're having trouble with configuring credentials so we can address them specifically.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!
Terraform Version
Terraform v0.12.17
Affected Resource(s)
Terraform Configuration Files
Debug Output
(The debug output is huge and I just pasted a relevant section of it. If you need more, I'll create a gist)
Expected Behavior
When running terraform in the hashicorp/terraform container, a terraform plan should run properly.
Actual Behavior
The plan errors out with the following error:
This only happens when running terraform in the container. When run locally, everything is fine (even when the local .kube directory is removed).
Steps to Reproduce
Run terraform plan or terraform apply.
Important Factoids
Using the hashicorp/terraform image.