flyersa closed this issue 2 years ago
Hmm, I have not seen this happening yet, and I do deploy nginx often. The script should not produce an error, nor should the admission webhook trouble happen. Let me try to reproduce ...
Hmmm, it may not be straightforward to reproduce :-( Here's a 3+3 node v1.21.9 cluster I deployed without nginx; now installing it:
ubuntu@capi2-mgmtcluster:~ [0]$ apply_nginx_ingress.sh democluster
Deploy NGINX ingress controller to democluster
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 19023 100 19023 0 0 74308 0 --:--:-- --:--:-- --:--:-- 74308
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ubuntu@capi2-mgmtcluster:~ [0]$ k --context=democluster-admin@democluster get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
cert-manager cert-manager-58dff77b6d-wb8ps 1/1 Running 1 30m
cert-manager cert-manager-cainjector-57d549777c-sfmdk 1/1 Running 1 30m
cert-manager cert-manager-webhook-795c6b46b-9682w 1/1 Running 0 30m
ingress-nginx ingress-nginx-admission-create-tk2tq 0/1 ContainerCreating 0 6s
ingress-nginx ingress-nginx-admission-patch-fcvgt 0/1 ContainerCreating 0 6s
ingress-nginx ingress-nginx-controller-74cb6699df-q8856 0/1 ContainerCreating 0 6s
Checking back later, the -admission jobs are Completed, the controller is Running, and an OpenStack LoadBalancer has been created.
I can also give you access to our project so you can check, if that makes things easier?
The LB was also created for me, but the nginx ingress containers stay in a crash loop with an error that they could not retrieve some secret.
I can reproduce the script error. We are missing the path to clusterctl-${CLUSTER_NAME}.yaml. Expect PRs later today.
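A hypothetical sketch of what the missing-path fix could look like: resolve the per-cluster clusterctl config with an explicit path instead of relying on the current working directory. The function name and the location under $HOME are assumptions on my part, not the actual code in the repo.

```shell
# Hypothetical sketch: build the clusterctl config path explicitly.
# The location under $HOME is an assumption.
cluster_config() {
    local name="$1"
    local cfg="$HOME/clusterctl-${name}.yaml"   # assumed location
    if [ ! -r "$cfg" ]; then
        echo "No $cfg found" >&2
        return 1
    fi
    echo "$cfg"
}
```

A caller would then do something like `CCCFG=$(cluster_config democluster) || exit 1` before passing the file on, so a missing config fails loudly instead of producing a confusing error later.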
OK, I tested with yesterday's change and apply_nginx_ingress.sh is working now. However, create_cluster.sh still does not create the ingress when initially deploying the cluster.
The PodTemplateSpec does not look objectionable to me:
core.PodTemplateSpec {
ObjectMeta:v1.ObjectMeta {
Name:"ingress-nginx-admission-create",
GenerateName:"",
Namespace:"",
SelfLink:"",
UID:"",
ResourceVersion:"",
Generation:0,
CreationTimestamp:v1.Time {
Time:time.Time {
wall:0x0,
ext:0,
loc:(*time.Location)(nil)
}
},
DeletionTimestamp:(*v1.Time)(nil),
DeletionGracePeriodSeconds:(*int64)(nil),
Labels:map[string]string {
"app.kubernetes.io/component":"admission-webhook",
"app.kubernetes.io/instance":"ingress-nginx",
"app.kubernetes.io/managed-by":"Helm",
"app.kubernetes.io/name":"ingress-nginx",
"app.kubernetes.io/part-of":"ingress-nginx",
"app.kubernetes.io/version":"1.0.1",
"controller-uid":"09232ee3-59ed-4aec-89d1-ca347a9fcf92",
"helm.sh/chart":"ingress-nginx-4.0.2",
"job-name":"ingress-nginx-admission-create"
},
Annotations:map[string]string(nil),
OwnerReferences:[]v1.OwnerReference(nil),
Finalizers:[]string(nil),
ClusterName:"",
ManagedFields:[]v1.ManagedFieldsEntry(nil)
},
Spec:core.PodSpec {
Volumes:[]core.Volume(nil),
InitContainers:[]core.Container(nil),
Containers:[]core.Container {
core.Container {
Name:"create",
Image:"k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068",
Command:[]string(nil),
Args:[]string {
"create",
"--host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc",
"--namespace=$(POD_NAMESPACE)",
"--secret-name=ingress-nginx-admission"
},
WorkingDir:"",
Ports:[]core.ContainerPort(nil),
EnvFrom:[]core.EnvFromSource(nil),
Env:[]core.EnvVar {
core.EnvVar {
Name:"POD_NAMESPACE",
Value:"",
ValueFrom:(*core.EnvVarSource)(0xc01007e6a0)
}
},
Resources:core.ResourceRequirements {
Limits:core.ResourceList(nil),
Requests:core.ResourceList(nil)
},
VolumeMounts:[]core.VolumeMount(nil),
VolumeDevices:[]core.VolumeDevice(nil),
LivenessProbe:(*core.Probe)(nil),
ReadinessProbe:(*core.Probe)(nil),
StartupProbe:(*core.Probe)(nil),
Lifecycle:(*core.Lifecycle)(nil),
TerminationMessagePath:"/dev/termination-log",
TerminationMessagePolicy:"File",
ImagePullPolicy:"IfNotPresent",
SecurityContext:(*core.SecurityContext)(0xc00ef89800),
Stdin:false,
StdinOnce:false,
TTY:false
}
},
EphemeralContainers:[]core.EphemeralContainer(nil),
RestartPolicy:"OnFailure",
TerminationGracePeriodSeconds:(*int64)(0xc008d185b0),
ActiveDeadlineSeconds:(*int64)(nil),
DNSPolicy:"ClusterFirst",
NodeSelector:map[string]string {
"kubernetes.io/os":"linux"
},
ServiceAccountName:"ingress-nginx-admission",
AutomountServiceAccountToken:(*bool)(nil),
NodeName:"",
SecurityContext:(*core.PodSecurityContext)(0xc00ff85280),
ImagePullSecrets:[]core.LocalObjectReference(nil),
Hostname:"",
Subdomain:"",
SetHostnameAsFQDN:(*bool)(nil),
Affinity:(*core.Affinity)(nil),
SchedulerName:"default-scheduler",
Tolerations:[]core.Toleration(nil),
HostAliases:[]core.HostAlias(nil),
PriorityClassName:"",
Priority:(*int32)(nil),
PreemptionPolicy:(*core.PreemptionPolicy)(nil),
DNSConfig:(*core.PodDNSConfig)(nil),
ReadinessGates:[]core.PodReadinessGate(nil),
RuntimeClassName:(*string)(nil),
Overhead:core.ResourceList(nil),
EnableServiceLinks:(*bool)(nil),
TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)
}
}
Enrico, I cannot reproduce this. Any chance to get access, as suggested in comment 4?
OK, just tested on your cloud and it seems to work:
[...]
storageclass.storage.k8s.io/cinder-default created
Deploy NGINX ingress controller to kurt
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
Wait for control plane of kurt
Switched to context "kind-kind".
cluster.cluster.x-k8s.io/kurt condition met
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx ingress-nginx-admission-create-7rgg5 0/1 Pending 0 1s
ingress-nginx ingress-nginx-admission-patch-txr74 0/1 Pending 0 1s
ingress-nginx ingress-nginx-controller-74cb6699df-2dcsx 0/1 Pending 0 2s
kube-system calico-kube-controllers-958545d87-sv98q 0/1 Pending 0 53s
[...]
and a bit later:
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx ingress-nginx-admission-create-7rgg5 0/1 Completed 0 5m36s
ingress-nginx ingress-nginx-admission-patch-txr74 0/1 Completed 2 5m36s
ingress-nginx ingress-nginx-controller-74cb6699df-2dcsx 1/1 Running 0 5m37s
kube-system calico-kube-controllers-958545d87-sv98q 1/1 Running 0 6m28s
[...]
This looks good to me. Am I overlooking something? (This is 1.21.10 with OCCM and CSI from git, let me try again with 1.21.9 and old included OCCM and CSI.)
Found an issue with deploying the old cindercsi driver; fixed now. It looks like this stopped the deployment of services early, so the nginx deployment no longer happened. Should be fixed. The script ~/bin/apply_cindercsi.sh on your mgmtcluster host already contains the fix (hand-edited).
Let me know if this fixes the issue for you as well.
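To illustrate the failure mode described above: if the deployment steps run in sequence and short-circuit on error, a broken cindercsi step silently skips everything after it, including the nginx ingress step. The function names and the `&&` chaining are assumptions for illustration, not the actual script structure.

```shell
# Sketch (assumed structure): steps chained so that a failure stops the rest.
deploy_cindercsi() { echo "deploying cindercsi"; return 1; }  # the broken step
deploy_nginx()     { echo "deploying nginx ingress"; }        # never reached

# deploy_cindercsi fails, so && short-circuits and deploy_nginx is skipped
output=$(deploy_cindercsi && deploy_nginx) || true
echo "$output"   # prints only "deploying cindercsi"
```

This matches the symptom in the report: the cluster itself provisions fine, but the ingress is simply missing, with no obvious error pointing at nginx.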
This issue is fixed.
As for how to migrate to versions with the latest fixes:
PR #176 changed the way the management node is set up so that it pulls from git directly. This allows incremental updates without redeploying the management node: just log in to it, run git pull, and run create_cluster.sh again.
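Spelled out as commands (the host name and checkout path below are placeholders/assumptions on my part):

```shell
ssh ubuntu@<mgmtcluster-host>        # log in to the management node
cd k8s-cluster-api-provider          # checkout location is an assumption
git pull                             # fetch the latest fixes
create_cluster.sh <CLUSTER_NAME>     # re-run to roll them out
```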
Hi,
after a while I played around a bit again with the current version on an OSISM-deployed Wallaby. It looks like the nginx ingress isn't working anymore, even when specified to deploy in the clusterctl yaml:
KUBERNETES_VERSION: v1.21.9
OPENSTACK_IMAGE_NAME: ubuntu-capi-image-v1.21.9
DEPLOY_NGINX_INGRESS: true
After deploying a cluster with create_cluster.sh, I noticed the nginx ingress is missing (the create_cluster.sh script also prints errors at the end that it could not find the clusterctl-MYNAME.yml file, where MYNAME is the equivalent of the file I used with create_cluster.sh). Apart from the missing nginx ingress, the cluster is provisioned without any issue.
Running apply_nginx_ingress.sh MYNAME gives the following errors:
The Job "ingress-nginx-admission-create" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-admission-create", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/component":"admission-webhook", "app.kubernetes.io/instance":"ingress-nginx", "app.kubernetes.io/managed-by":"Helm", "app.kubernetes.io/name":"ingress-nginx", "app.kubernetes.io/part-of":"ingress-nginx", "app.kubernetes.io/version":"1.0.1", "controller-uid":"09232ee3-59ed-4aec-89d1-ca347a9fcf92", "helm.sh/chart":"ingress-nginx-4.0.2", "job-name":"ingress-nginx-admission-create"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"create", Image:"k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068", Command:[]string(nil), Args:[]string{"create", "--host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc", "--namespace=$(POD_NAMESPACE)", "--secret-name=ingress-nginx-admission"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar{core.EnvVar{Name:"POD_NAMESPACE", Value:"", ValueFrom:(*core.EnvVarSource)(0xc01007e6a0)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), 
Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(0xc00ef89800), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc008d185b0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"ingress-nginx-admission", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc00ff85280), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable
It does create the nginx-ingress LoadBalancer and also the admission pods etc. However, the admission pod complains about something like not being able to get some secret somewhere. Curious. Maybe someone can replicate? I tried a few times now, always the same.
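The "field is immutable" part of the error is generic Kubernetes behavior, not specific to this repo: a Job's spec.template cannot be changed after creation, and the live admission jobs acquire a controller-uid label in their template, so re-applying the manifest over leftover jobs from a previous run fails. A common generic workaround is to delete the old jobs before re-applying (the context name below is a placeholder):

```shell
# Delete the stale admission jobs so the manifest can be re-applied cleanly
kubectl --context=<cluster-context> -n ingress-nginx \
    delete job ingress-nginx-admission-create ingress-nginx-admission-patch
# then re-run: apply_nginx_ingress.sh <CLUSTER_NAME>
```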