Open cot-victor opened 9 months ago
This issue is currently awaiting triage.
If ingress-nginx contributors determine this is a relevant issue, they will accept it by applying the triage/accepted
label and provide further guidance.
The triage/accepted
label can be added by org members by writing /triage accepted
in a comment.
/remove-kind bug
You can use this guide https://kubernetes.github.io/ingress-nginx/user-guide/k8s-122-migration/#how-can-i-easily-install-multiple-instances-of-the-ingress-nginx-controller-in-the-same-cluster to deploy 2 controllers in one cluster and dedicate one of them to internal traffic while the other is dedicated to external traffic.
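Following the linked migration guide, the second release would carry its own ingress class so the two controllers don't fight over the same Ingress objects. A minimal sketch of the second release's values; the class name, controllerValue, and scheme annotation are illustrative placeholders, not taken from this thread:

```yaml
# Hypothetical values for a second ingress-nginx release dedicated to
# internal traffic; names are placeholders.
controller:
  ingressClass: nginx-internal
  ingressClassResource:
    name: nginx-internal
    controllerValue: "k8s.io/internal-ingress-nginx"
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-scheme: internal
```

Each Ingress then selects its controller via spec.ingressClassName.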
@longwuyuan, thank you for your reply. Due to the limited resources we have, we're restricted to using only one ingress controller per cluster.
Well, the chart and the controller install are not designed to accept 2 certs during install.
If it were one certificate that has both the internal and the external domains in the subject and subject alternative names, then the chart has a spec for configuring the default SSL cert of that controller instance.
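The chart spec referred to here is the controller's default SSL certificate flag. A hedged sketch of how that would be wired through values, assuming a TLS Secret whose certificate covers both domains; the secret name is a placeholder:

```yaml
# Hypothetical: one TLS secret covering both domains via SAN entries,
# set as the controller's default certificate. Secret name is illustrative.
controller:
  extraArgs:
    default-ssl-certificate: "kube-system/wildcard-tls"
```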
Can you be more specific about what is not working? The internal load balancer is not getting the ACM cert attached to it? Are there any logs in AWS, is the ACM certificate validated? That is all part of the cloud controller and not ingress-nginx controller.
/triage needs-information
@strongjz
is the ACM certificate validated
Yes, it is validated and is currently successfully used in other mechanisms.
The internal load balancer is not getting the ACM cert attached to it?
Exactly; similar to the external service, we're trying to inject a different certificate into the internal one. See the config below:
USER-SUPPLIED VALUES:
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: 3600
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: external_acm_arn
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
    internal:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
        service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: 3600
        service.beta.kubernetes.io/aws-load-balancer-scheme: internal
        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: internal_acm_arn
        service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
      enabled: true
This is stale, but we won't close it automatically; just bear in mind the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach out in #ingress-nginx-dev
on the Kubernetes Slack.
What happened: Hello! We want to serve two different domains via a single ingress-nginx controller. Due to some issues in our infrastructure, we can't set up certificates directly at the Ingress level, so the only remaining solution is to deploy the Helm chart with the default (external) service and enable the internal one, which raises another LB through which we plan to serve the second domain's ingresses. The certificate is loaded by feeding the chart annotations like the ones below for the external one (which works):
But when it comes to the internal one, loading the ACM certificate into the LB in the same way does not work:
What you expected to happen: The internal LB to have the ACM certificate attached.
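One way to narrow down where the certificate is being dropped is to check whether the annotation actually landed on the internal Service object that the chart renders; if it did, the problem sits with the cloud controller rather than the chart. A hedged diagnostic sketch; the namespace and service name are guesses based on default chart naming:

```shell
# Inspect the rendered internal Service's annotations
# (service name is hypothetical; adjust to your release).
kubectl -n kube-system get svc ingress-nginx-controller-internal \
  -o jsonpath='{.metadata.annotations}'
```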
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.): 4.8.3
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.2", GitCommit:"7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647", GitTreeState:"clean", BuildDate:"2023-05-17T14:20:07Z", GoVersion:"go1.20.4", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.8-eks-8cb36c9", GitCommit:"fca3a8722c88c4dba573a903712a6feaf3c40a51", GitTreeState:"clean", BuildDate:"2023-11-22T21:52:13Z", GoVersion:"go1.20.11", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Cloud provider or hardware configuration: AWS
OS (e.g. from /etc/os-release): Amazon Linux 2
Kernel (e.g. uname -a): 5.10.179-168.710.amzn2.x86_64
Install tools: EKS managed cluster
Please mention how/where was the cluster created like kubeadm/kops/minikube/kind etc.
Basic cluster related info:
kubectl version
kubectl get nodes -o wide
How was the ingress-nginx-controller installed:
If helm was used then please show output of
helm ls -A | grep -i ingress
ingress kube-system 7 2024-01-03 16:01:24.586092661 +0000 UTC deployed ingress-nginx-4.8.3 1.9.4
If helm was used then please show output of
helm -n <ingresscontrollernamespace> get values <helmreleasename>
If helm was not used, then copy/paste the complete precise command used to install the controller, along with the flags and options used
if you have more than one instance of the ingress-nginx-controller installed in the same cluster, please provide details for all the instances
Current State of the controller:
kubectl describe ingressclasses
Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.9.4
              helm.sh/chart=ingress-nginx-4.8.3
Annotations:  meta.helm.sh/release-name: ingress
              meta.helm.sh/release-namespace: kube-system
Controller:   k8s.io/ingress-nginx
Events: