kubernetes-sigs / cluster-api-provider-openstack

Cluster API implementation for OpenStack
https://cluster-api-openstack.sigs.k8s.io/
Apache License 2.0

Additional ports causing controller to crash #1917

Closed huxcrux closed 7 months ago

huxcrux commented 7 months ago

/kind bug

What steps did you take and what happened:

When using the current main branch and specifying an additional port on the LB, the controller crashes:

I0301 12:43:23.624476       1 securitygroups.go:41] "Reconciling security groups" controller="openstackcluster" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackCluster" OpenStackCluster="default/hux-lab1" namespace="default" name="hux-lab1" reconcileID="2f98dabc-6331-496d-87e9-e1db0f27a170" cluster="hux-lab1"
I0301 12:43:24.194584       1 controller.go:115] "Observed a panic in reconciler: runtime error: index out of range [1] with length 1" controller="openstackcluster" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackCluster" OpenStackCluster="default/hux-lab1" namespace="default" name="hux-lab1" reconcileID="2f98dabc-6331-496d-87e9-e1db0f27a170"
panic: runtime error: index out of range [1] with length 1 [recovered]
    panic: runtime error: index out of range [1] with length 1

goroutine 383 [running]:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:116 +0x1e5
panic({0x1f050e0?, 0xc000e16570?})
    /usr/local/go/src/runtime/panic.go:770 +0x132
sigs.k8s.io/cluster-api-provider-openstack/pkg/cloud/services/networking.getSGControlPlaneAdditionalPorts(...)
    /workspace/pkg/cloud/services/networking/securitygroups_rules.go:229
sigs.k8s.io/cluster-api-provider-openstack/pkg/cloud/services/networking.(*Service).generateDesiredSecGroups(0xc0009824b0, 0xc00072ec08, 0xc000b4f6a0)
    /workspace/pkg/cloud/services/networking/securitygroups.go:168 +0x8b1
sigs.k8s.io/cluster-api-provider-openstack/pkg/cloud/services/networking.(*Service).ReconcileSecurityGroups(0xc0009824b0, 0xc00072ec08, {0xc0006f5680, 0x10})
    /workspace/pkg/cloud/services/networking/securitygroups.go:66 +0x505
sigs.k8s.io/cluster-api-provider-openstack/controllers.reconcileNetworkComponents(0xc000db8e10, 0xc000023ba0, 0xc00072ec08)
    /workspace/controllers/openstackcluster_controller.go:616 +0x38d
sigs.k8s.io/cluster-api-provider-openstack/controllers.reconcileNormal(0xc000db8e10, 0xc000023ba0, 0xc00072ec08)
    /workspace/controllers/openstackcluster_controller.go:331 +0xd7
sigs.k8s.io/cluster-api-provider-openstack/controllers.(*OpenStackClusterReconciler).Reconcile(0xc00080d980, {0x23cdad8, 0xc000d17d70}, {{{0xc000a179c0?, 0x0?}, {0xc000a179f8?, 0xc000b46d50?}}})
    /workspace/controllers/openstackcluster_controller.go:155 +0x90f
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x23d2688?, {0x23cdad8?, 0xc000d17d70?}, {{{0xc000a179c0?, 0xb?}, {0xc000a179f8?, 0x0?}}})
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:119 +0xb7
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0006b90e0, {0x23cdb10, 0xc0007f47d0}, {0x1e33860, 0xc0005e6660})
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:316 +0x3bc
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0006b90e0, {0x23cdb10, 0xc0007f47d0})
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:266 +0x1c9
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:227 +0x79
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2 in goroutine 261
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:223 +0x50c
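The panic (`index out of range [1] with length 1`) in `getSGControlPlaneAdditionalPorts` is the classic Go pattern of indexing one slice with indices taken from another slice of a different length. The sketch below is a minimal, self-contained illustration of that failure class and the usual guard, with invented names (`buildRulesUnsafe`, `buildRulesSafe`, `additionalPorts`, `names`); it is not the actual CAPO code.

```go
package main

import "fmt"

// buildRulesUnsafe mirrors the crashing shape: it walks additionalPorts
// but indexes names by the same index, so it panics with
// "index out of range [1] with length 1" when names is shorter.
func buildRulesUnsafe(additionalPorts []int, names []string) []string {
	rules := make([]string, 0, len(additionalPorts))
	for i := range additionalPorts {
		// Panics when len(names) < len(additionalPorts).
		rules = append(rules, fmt.Sprintf("%s:%d", names[i], additionalPorts[i]))
	}
	return rules
}

// buildRulesSafe guards the index before using it, falling back to a
// default value -- the typical fix for this class of bug.
func buildRulesSafe(additionalPorts []int, names []string) []string {
	rules := make([]string, 0, len(additionalPorts))
	for i, p := range additionalPorts {
		name := "default"
		if i < len(names) {
			name = names[i]
		}
		rules = append(rules, fmt.Sprintf("%s:%d", name, p))
	}
	return rules
}

func main() {
	ports := []int{6443, 8080} // two additional ports
	names := []string{"api"}   // only one name: lengths disagree

	// buildRulesUnsafe(ports, names) would panic here, as in the trace above.
	fmt.Println(buildRulesSafe(ports, names))
}
```

Running the safe variant with mismatched lengths produces a rule per port rather than a panic; the unsafe variant reproduces the exact runtime error from the log.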

What did you expect to happen: The cluster to be created successfully with an additional port on the LB.

Anything else you would like to add: N/A

Environment: