Closed: robertd closed this issue 2 years ago
Your cluster administrator, or a role with access to the cluster, needs to grant access to your role. These roles are configured in the aws-auth ConfigMap. By default, the identity that created the cluster always has access.
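As a hedged illustration, a mapRoles entry in the aws-auth ConfigMap might look like the following (the account ID, role ARN, and username here are placeholders, not values from this thread):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Hypothetical IAM role; replace with the role that needs cluster access
    - rolearn: arn:aws:iam::111122223333:role/my-admin-role
      username: admin
      groups:
        - system:masters
```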
@ellistarn Onto the next issue...
2022-02-09T21:38:01.417Z ERROR controller.provisioning Could not launch node, launching instances, getting launch template configs, getting launch templates, no security groups exist given constraints {"commit": "df57892", "provisioner": "default"}
I have a feeling it's related to "no security groups exist given constraints", as I don't see this being set anywhere in the guide, and it's a required field. Which security group is this? The cluster's... or the EC2 instance's? And where is it set in the guide?
Thanks in advance!
P.S. For subnetSelector I had to steer away from the karpenter.sh/discovery: clusterName tag format and stick with the older tagging structure karpenter.sh/discovery/clusterName: '*', because we're using a VPC shared with other groups within the same account.
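For reference, a subnetSelector using that older per-cluster tag key might look like this inside the provisioner's provider block (a sketch based on the v0.6.x alpha API; the provisioner name and tag key are illustrative):

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  provider:
    subnetSelector:
      # Older tag structure: the cluster name lives in the key, value is a wildcard
      karpenter.sh/discovery/clusterName: '*'
```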
On the other hand... I'm having a hard time figuring out securityGroupSelector (karpenter.sh/discovery: 'clusterName'). The docs suggest that if this is not passed, Karpenter will try to figure it out ("If no security groups are explicitly listed, Karpenter discovers them using the tag kubernetes.io/cluster/MyClusterName, similar to subnet discovery."). However... I wasn't able to create a provisioner omitting this or the subnet parameter. I get errors.
Is it safe to assume that all provider configuration parameters are considered required? https://karpenter.sh/v0.6.1/aws/provisioning/
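If discovery by the kubernetes.io/cluster tag does apply in your setup, the explicit securityGroupSelector equivalent might look like the following fragment (a sketch; MyClusterName is a placeholder for your cluster name):

```yaml
spec:
  provider:
    securityGroupSelector:
      # Tag EKS puts on the cluster security group; the value is typically "owned"
      kubernetes.io/cluster/MyClusterName: owned
```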
I found out that the cluster SG has the following tags... (I believe the instruction guide should be updated with a correct tag example)
I've updated the provider section in my provisioner manifest in my CDK code.... Unfortunately... I'm still getting an error... so I'm still investigating... :(
2022-02-09T22:20:23.564Z ERROR controller.provisioning Could not launch node, launching instances, getting launch template configs, getting launch templates, no security groups exist given constraints {"commit": "df57892", "provisioner": "default"}
According to the docs... the launch template is optional?... or is it?
We recommend that you allow karpenter to manage your launch templates for you. If you need additional configuration of LTs that isn't yet supported, you can use a custom launch template.
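Following that recommendation, a provider block that lets Karpenter manage launch templates simply omits the launchTemplate field entirely (a sketch; both selector tags are placeholders):

```yaml
spec:
  provider:
    # No launchTemplate specified: Karpenter creates and manages one itself
    subnetSelector:
      karpenter.sh/discovery/clusterName: '*'
    securityGroupSelector:
      kubernetes.io/cluster/MyClusterName: owned
```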
@ellistarn I've commented out the launchTemplate portion but I'm still getting this error...
Since this ticket was pivoted from a question, do you mind providing the standard info from our issue template? Alternatively, it might be easier to open a new issue. This is normally what we ask for to be able to successfully debug issues like this. Most relevant in this case are Resource Specs and Logs.
Karpenter: v0.0.0
Kubernetes: v1.0.0
@ellistarn I didn't see a questions template, so I picked this... I'll most likely close this issue, as it's kind of a run-off/troubleshooting thread at this point.
Ah that makes complete sense.
@ellistarn User error on my part... my CDK got stale and it wasn't pushing manifest changes (i.e. securityGroupSelector updates) 🤣
Basically I was working on a new issue with the proper template... and then I pasted this snippet...
That's where I figured out that CDK was not watching for changes and deployments weren't really updating anything lol.
We're rolling now!!!!
My next step is to try LT with bottlerocket AMI :D
@ellistarn Thank you so much for all your help!!! <3
Glad you sorted it out! It definitely gets a bit confusing with k8s style (level triggered reconciliation) and cfn/cdk style (edge triggered updating). If you haven't heard about this before, there's a great blog: https://hackernoon.com/level-triggering-and-reconciliation-in-kubernetes-1f17fe30333d
ERROR controller.provisioning Could not launch node, launching instances, getting launch template configs, getting launch templates, no security groups exist given constraints {"commit": "df57892", "provisioner": "default"}
For future reference, I was getting the error above whilst trying to install Karpenter 0.30.0 using the aws-ia/eks-blueprints-addons/aws Terraform module 1.7.2. I managed to fix the problem with the following node template configuration:
resource "kubectl_manifest" "karpenter_node_template" {
  yaml_body = <<-YAML
    apiVersion: karpenter.k8s.aws/v1alpha1
    kind: AWSNodeTemplate
    metadata:
      name: default
    spec:
      subnetSelector:
        "kubernetes.io/cluster/${module.eks.cluster_name}": "owned"
      securityGroupSelector:
        "kubernetes.io/cluster/${module.eks.cluster_name}": "owned"
      instanceProfile: ${module.karpenter.instance_profile_name}
      tags:
        "karpenter.sh/discovery": "${module.eks.cluster_name}"
  YAML

  depends_on = [
    helm_release.karpenter
  ]
}
Hello,
I've been having issues following the guide. Unfortunately, due to the nature of my VPC, I cannot use eksctl for EKS cluster creation. Instead I'm using a CDK construct (https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_eks-readme.html) which creates a simple EKS cluster with 1 node. Any ideas? Thanks in advance.