Nuru opened this issue 1 year ago
I also have the same issue. I would like to run the controllers on Fargate and have them attach EFS volumes to actual nodes that are then provisioned by Karpenter. The driver sets
securityContext.privileged: true
for the controller pods, which isn't supported by Fargate nodes. Please reopen.
/reopen
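For reference, the incompatibility comes from the controller requesting a privileged container, which Fargate rejects at admission. Below is a minimal sketch of the relevant fragment; the container name matches the thread, but the surrounding Deployment structure is illustrative rather than copied from the driver's actual manifest:

```yaml
# Sketch (illustrative, not the driver's real manifest) of the fragment
# that makes the controller unschedulable on Fargate.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: efs-csi-controller
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: efs-plugin
          securityContext:
            privileged: true   # Fargate does not allow privileged containers
```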
@z0rc: You can't reopen an issue/PR unless you authored it or you are a collaborator.
@Nuru could you reopen the ticket please?
/reopen
It looks like the changes in #1195 were necessary, but not sufficient.
@Nuru: Reopened this issue.
Just ran into the same situation: I can't deploy the add-on because kube-system is a Fargate namespace. Same context: Karpenter + Fargate cluster. I will switch to the manual installation mode, but that seems like a waste of time. Allowing the controllers to run on Fargate would be great, thanks.
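For anyone else landing here, the manual installation mentioned above can be sketched as installing the Helm chart into a namespace that no Fargate profile selects, so the controller Pods land on the Karpenter-provisioned EC2 nodes. The namespace name below is illustrative; the repo URL is the chart's published one:

```shell
# Sketch of the manual-installation workaround; requires a configured cluster.
# The "efs-csi" namespace is illustrative and must not match any Fargate profile.
helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
helm repo update
helm upgrade --install aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
  --namespace efs-csi --create-namespace
```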
We're facing the same issue as the previous commenter.
Apologies for the delay in getting back. Our team is currently addressing this issue and will provide a solution soon. Thank you for your patience.
@mskanth972 is there any ETA on when it will be available? I see a new 2.0.2 add-on was released, but there is no option to set privileged to false for the controller.
@skraga I have the PR ready. We will merge it and release it in the upcoming version, along with the Add-on. The ECD will be by the end of this month.
@mskanth972 Thanks for your reply. Moreover, when we were considering the EKS add-on for our use case, we found out that it was not possible to set resource requests and limits.
@mskanth972

> I have the https://github.com/kubernetes-sigs/aws-efs-csi-driver/pull/1348 ready, We will merge this

The PR is closed without merge or explanation.

> ECD will be by END of this Month.

The end of May has passed with no updates on this issue. Please share the current state of this issue and the plans to address it.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
/kind bug
What happened?
I created a Fargate profile with the Pod label selector
app = "efs-csi-controller"
so that the EFS controller would be launched on Fargate. The efs-csi-node DaemonSet successfully deployed to the EC2 nodes, but the efs-csi-controller Pods were still in a CrashLoopBackOff and the Add-On still reports its status as "Degraded".

What you expected to happen?
The controller pods would be deployed to Fargate and work without the Node component, and the Add-On would report its status as "Active". As EC2 Nodes were provisioned, controller Pods would keep working from Fargate while Node Pods worked properly on the EC2 Nodes.
How to reproduce it (as minimally and precisely as possible)?
See "What happened" above.
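For reproduction, the Fargate profile described above can be sketched as an eksctl ClusterConfig excerpt. Only the selector label comes from this report; the cluster name, region, and profile name are placeholders:

```yaml
# Hypothetical eksctl excerpt reproducing the setup: a Fargate profile
# whose selector matches the controller Pods by label in kube-system.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: example-cluster   # placeholder
  region: us-east-1       # placeholder
fargateProfiles:
  - name: efs-csi-controller
    selectors:
      - namespace: kube-system
        labels:
          app: efs-csi-controller
```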
Anything else we need to know?:
The failure that is reported to Kubernetes comes from the
efs-plugin
container exiting with an error. IMHO it should not try to run on Fargate, and probably should not be deployed as part of the controller for this reason.

Environment
- Kubernetes version (use kubectl version): v1.27.4-eks-2d98532

Please also attach debug logs to help us better diagnose
Log excerpts (each one just keeps repeating the quoted excerpt):
- efs-csi-controller csi-provisioner
- efs-csi-controller liveness-probe
- efs-csi-controller efs-plugin