kubernetes-sigs / cluster-api-provider-aws

Kubernetes Cluster API Provider AWS provides consistent deployment and day 2 operations of "self-managed" and EKS Kubernetes clusters on AWS.
http://cluster-api-aws.sigs.k8s.io/
Apache License 2.0

🐛 fix(nodegroup): add nil pointer check for nodegroup version #5019

Open nueavv opened 2 weeks ago

nueavv commented 2 weeks ago

Added a check to ensure that the nodegroup version is not nil before dereferencing it. This prevents potential runtime panics due to nil pointer dereference.

What type of PR is this?

/kind bug

What this PR does / why we need it:

This PR adds a nil pointer check for the nodegroup version in the reconcileNodegroupVersion function. This prevents potential runtime panics caused by dereferencing a nil pointer when the nodegroup version is not set.
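The guard described here can be sketched as follows. This is a simplified illustration, not the actual provider source: the `Nodegroup` struct and the function signature are assumptions standing in for the AWS SDK's EKS types and the real `reconcileNodegroupVersion` in the machine pool service, which takes the reconciler's scope instead.

```go
package main

import (
	"errors"
	"fmt"
)

// Nodegroup mirrors the relevant shape of the EKS API's nodegroup object
// for this sketch: Version is a *string and may legitimately be nil, for
// example while the nodegroup is still being created.
type Nodegroup struct {
	Version *string
}

// reconcileNodegroupVersion sketches the guarded dereference this PR
// describes: return a retryable error instead of panicking when the
// version pointer is nil.
func reconcileNodegroupVersion(ng *Nodegroup, specVersion string) error {
	if ng == nil || ng.Version == nil {
		// Previously *ng.Version was dereferenced unconditionally,
		// causing a runtime panic; surfacing an error lets the
		// controller requeue and retry instead.
		return errors.New("nodegroup version is nil")
	}
	if *ng.Version != specVersion {
		fmt.Printf("would update nodegroup from %s to %s\n", *ng.Version, specVersion)
	}
	return nil
}

func main() {
	// Nil Version: previously a panic, now an error the controller can retry.
	fmt.Println(reconcileNodegroupVersion(&Nodegroup{}, "1.29"))

	v := "1.28"
	fmt.Println(reconcileNodegroupVersion(&Nodegroup{Version: &v}, "1.29"))
}
```

Returning an error rather than silently skipping the reconcile matches the behavior visible in the logs below, where the controller reports `failed to reconcile nodegroup version: nodegroup version is nil` and requeues.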

Which issue(s) this PR fixes: Fixes #5018

Special notes for your reviewer:

Without this check, the controller panics during reconciliation whenever the EKS nodegroup's version field has not yet been populated. Returning an error instead keeps the `reconcileNodegroupVersion` function stable and lets the controller requeue and retry.

Checklist:

Release note:


Fix nil pointer dereference in reconcileNodegroupVersion by adding a check for nodegroup version.
k8s-ci-robot commented 2 weeks ago

Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected, please follow our release note process to remove it.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.
linux-foundation-easycla[bot] commented 2 weeks ago

CLA Signed


The committers listed above are authorized under a signed CLA.

k8s-ci-robot commented 2 weeks ago

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:

Once this PR has been reviewed and has the lgtm label, please assign neolit123 for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

- **[OWNERS](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/main/OWNERS)**

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.
k8s-ci-robot commented 2 weeks ago

Welcome @nueavv!

It looks like this is your first PR to kubernetes-sigs/cluster-api-provider-aws 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/cluster-api-provider-aws has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. :smiley:

k8s-ci-robot commented 2 weeks ago

Hi @nueavv. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

nueavv commented 2 weeks ago

I have built and tested the updated code, and the panic no longer occurs. Here are the updated logs: instead of crashing, the controller now reports a "nodegroup version is nil" error and requeues.

I0613 04:01:44.064508       1 reconcile.go:39] "Reconciling kube-proxy DaemonSet in cluster" controller="awsmanagedcontrolplane" controllerGroup="controlplane.cluster.x-k8s.io" controllerKind="AWSManagedControlPlane" AWSManagedControlPlane="capi-managed-cluster/[REDACTED]" namespace="capi-managed-cluster" name="[REDACTED]" reconcileID="a4ce3943-5cdb-4467-aba9-18a759fd6cb2" awsManagedControlPlane="capi-managed-cluster/[REDACTED]" cluster="capi-managed-cluster/[REDACTED]"
I0613 04:01:44.065051       1 reconcile.go:39] "Reconciling aws-iam-authenticator configuration" controller="awsmanagedcontrolplane" controllerGroup="controlplane.cluster.x-k8s.io" controllerKind="AWSManagedControlPlane" AWSManagedControlPlane="capi-managed-cluster/[REDACTED]" namespace="capi-managed-cluster" name="[REDACTED]" reconcileID="a4ce3943-5cdb-4467-aba9-18a759fd6cb2" awsManagedControlPlane="capi-managed-cluster/[REDACTED]" cluster="capi-managed-cluster/[REDACTED]"
E0613 04:01:44.386103       1 controller.go:329] "Reconciler error" err="failed to reconcile machine pool for AWSManagedMachinePool capi-managed-cluster/[REDACTED]: failed to reconcile nodegroup version: nodegroup version is nil" controller="awsmanagedmachinepool" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="AWSManagedMachinePool" AWSManagedMachinePool="capi-managed-cluster/[REDACTED]" namespace="capi-managed-cluster" name="[REDACTED]" reconcileID="3e0c724f-236b-4262-9c06-e22f83b0eacd"
I0613 04:01:44.386891       1 awsmanagedmachinepool_controller.go:200] "Reconciling AWSManagedMachinePool"
I0613 04:01:44.387068       1 launchtemplate.go:73] "checking for existing launch template"
I0613 04:01:44.439150       1 reconcile.go:90] "Reconciled aws-iam-authenticator configuration" controller="awsmanagedcontrolplane" controllerGroup="controlplane.cluster.x-k8s.io" controllerKind="AWSManagedControlPlane" AWSManagedControlPlane="capi-managed-cluster/[REDACTED]" namespace="capi-managed-cluster" name="[REDACTED]" reconcileID="a4ce3943-5cdb-4467-aba9-18a759fd6cb2" awsManagedControlPlane="capi-managed-cluster/[REDACTED]" cluster="[REDACTED]"
E0613 04:01:44.858183       1 controller.go:329] "Reconciler error" err="failed to reconcile machine pool for AWSManagedMachinePool capi-managed-cluster/[REDACTED]: failed to reconcile nodegroup version: nodegroup version is nil" controller="awsmanagedmachinepool" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="AWSManagedMachinePool" AWSManagedMachinePool="capi-managed-cluster/[REDACTED]" namespace="capi-managed-cluster" name="[REDACTED]" reconcileID="9bec8c74-79d0-4b39-bd40-607a5d899630"
I0613 04:01:44.859110       1 awsmanagedmachinepool_controller.go:200] "Reconciling AWSManagedMachinePool"
I0613 04:01:44.859320       1 launchtemplate.go:73] "checking for existing launch template"
E0613 04:01:45.297763       1 controller.go:329] "Reconciler error" err="failed to reconcile machine pool for AWSManagedMachinePool capi-managed-cluster/[REDACTED]: failed to reconcile nodegroup version: nodegroup version is nil" controller="awsmanagedmachinepool" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="AWSManagedMachinePool" AWSManagedMachinePool="capi-managed-cluster/[REDACTED]" namespace="capi-managed-cluster" name="[REDACTED]" reconcileID="91a7cb58-d6f8-4e8f-aa06-62acce826b74"
I0613 04:01:45.298635       1 awsmanagedmachinepool_controller.go:200] "Reconciling AWSManagedMachinePool"
I0613 04:01:45.298821       1 launchtemplate.go:73] "checking for existing launch template"
E0613 04:01:45.743834       1 controller.go:329] "Reconciler error" err="failed to reconcile machine pool for AWSManagedMachinePool capi-managed-cluster/[REDACTED]: failed to reconcile nodegroup version: nodegroup version is nil" controller="awsmanagedmachinepool" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="AWSManagedMachinePool" AWSManagedMachinePool="capi-managed-cluster/[REDACTED]" namespace="capi-managed-cluster" name="[REDACTED]" reconcileID="ef2c12c6-5c2b-4553-a62e-f69ce561d6d9"
I0613 04:01:45.744691       1 awsmanagedmachinepool_controller.go:200] "Reconciling AWSManagedMachinePool"
I0613 04:01:45.744882       1 launchtemplate.go:73] "checking for existing launch template"
E0613 04:01:46.178107       1 controller.go:329] "Reconciler error" err="failed to reconcile machine pool for AWSManagedMachinePool capi-managed-cluster/[REDACTED]: failed to reconcile nodegroup version: nodegroup version is nil" controller="awsmanagedmachinepool" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="AWSManagedMachinePool" AWSManagedMachinePool="capi-managed-cluster/[REDACTED]" namespace="capi-managed-cluster" name="[REDACTED]" reconcileID="a0c1977f-6e74-499e-805c-147fe74d8e39"
I0613 04:01:46.178945       1 awsmanagedmachinepool_controller.go:200] "Reconciling AWSManagedMachinePool"
I0613 04:01:46.179150       1 launchtemplate.go:73] "checking for existing launch template"
richardcase commented 2 days ago

/ok-to-test