aws / aws-application-networking-k8s

A Kubernetes controller for Amazon VPC Lattice
https://www.gateway-api-controller.eks.aws.dev/
Apache License 2.0

env-config configmap losing values on helm upgrade #575

Closed. hscheib closed this 10 months ago.

hscheib commented 11 months ago

Hi,

I just ran a helm upgrade from v1.0.1 to v1.0.2 using mostly default values:

  values:
    serviceAccount:
      create: false
      name: vpc-lattice-controller
    deployment:
      replicas: 2

Now the new pods fail to start with this Kubernetes event:

3s          Warning   Failed              pod/gateway-api-controller-aws-gateway-controller-chart-647859v68n9         Error: couldn't find key awsRegion in ConfigMap aws-application-networking-system/env-config
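That error suggests the Deployment wires each env var to the ConfigMap with a configMapKeyRef (and, presumably, no optional: true), so a single missing key is enough to block container start. A sketch of what that wiring likely looks like; the env var name is my assumption, not necessarily the chart's:

  # Presumed env wiring in the chart's Deployment template.
  # AWS_REGION is an assumed name; env-config/awsRegion come from the event above.
  env:
    - name: AWS_REGION
      valueFrom:
        configMapKeyRef:
          name: env-config
          key: awsRegion   # a missing key fails the container at startup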

I investigated and, sure enough, the defaulted empty fields are gone:

apiVersion: v1
items:
- apiVersion: v1
  data:
    logLevel: info
  kind: ConfigMap

Originally the ConfigMap looked like this:

apiVersion: v1
items:
- apiVersion: v1
  data:
    awsAccountId: ""
    awsRegion: ""
    clusterName: ""
    clusterVpcId: ""
    defaultServiceNetwork: ""
    latticeEndpoint: ""
    logLevel: info
  kind: ConfigMap

I then updated my Helm values to:

  values:
    awsRegion: ""
    awsAccountId: ""
    clusterVpcId: ""
    clusterName: ""
    defaultServiceNetwork: ""
    latticeEndpoint: ""
    serviceAccount:
      create: false
      name: vpc-lattice-controller
    deployment:
      replicas: 2

and with that, the helm upgrade succeeded and the v1.0.2 pods spun up.

So I am not sure: do the defaults in the chart's values.yaml need to be set to "" instead of blank, or am I required to supply the "" values myself?

I followed https://www.gateway-api-controller.eks.aws.dev/guides/deploy/#controller-installation and, going by the comments below, was under the impression that I did not need to supply these values, since we are on EKS, don't want a defaultServiceNetwork created, and expect IMDS to be available:

   # awsRegion, clusterVpcId, awsAccountId are required for case IMDS is not available.
   --set=awsRegion= \
   --set=clusterVpcId= \
   --set=awsAccountId= \
   # clusterName is required except for EKS cluster.
   --set=clusterName= \
   # When specified, the controller will automatically create a service network with the name.
   --set=defaultServiceNetwork=my-hotel
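In case it helps triage, the rendered manifests can be inspected offline with helm template; a sketch using the chart reference from the install docs (no cluster changes involved):

   # Render the chart locally and check which keys env-config ends up with.
   helm template gateway-api-controller \
     oci://public.ecr.aws/aws-application-networking-k8s/aws-gateway-controller-chart \
     --version=v1.0.2 \
     --namespace aws-application-networking-system \
     --set=serviceAccount.create=false > rendered.yaml
   grep -n -A 8 'name: env-config' rendered.yaml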

Please advise, thanks!

zijun726911 commented 11 months ago

Hi @hscheib, thank you for reporting this bug. I was able to reproduce it. If I install a brand-new controller with:

helm install gateway-api-controller \
      oci://public.ecr.aws/aws-application-networking-k8s/aws-gateway-controller-chart \
      --version=v1.0.2 \
      --set=serviceAccount.create=false --namespace aws-application-networking-system

the controller starts up successfully, and the ConfigMap has the env variables:

kubectl get configmap env-config -n aws-application-networking-system -o yaml
apiVersion: v1
data:
  awsAccountId: ""
  awsRegion: ""
  clusterName: ""
  clusterVpcId: ""
  defaultServiceNetwork: ""
  latticeEndpoint: ""
  logLevel: debug
kind: ConfigMap

However, if I upgrade the controller:

 helm upgrade gateway-api-controller \
      oci://public.ecr.aws/aws-application-networking-k8s/aws-gateway-controller-chart \
      --version=v1.0.2 \
      --set=serviceAccount.create=false --namespace aws-application-networking-system

the env variables disappear:

kubectl get configmap env-config -n aws-application-networking-system -o yaml
apiVersion: v1
data:
  logLevel: debug
kind: ConfigMap
metadata:

(I tried the v1.0.0 to v1.0.1 helm upgrade command as well; it has the same issue.)

I am working on a fix and will let you know when the bug-fix release is out.

mikhail-aws commented 11 months ago

@hscheib we removed the ConfigMap from the Helm chart; it makes the variable mapping more straightforward. The controller doesn't use any ConfigMap features beyond sourcing env vars.
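Roughly, instead of configMapKeyRef lookups, the values become plain env entries on the container. A sketch of the difference (illustrative only; the env var names are assumptions, not the chart's actual template):

  # Illustrative: values injected directly as env vars instead of being
  # read from the env-config ConfigMap.
  env:
    - name: AWS_REGION
      value: ""        # empty is acceptable when IMDS is available
    - name: LOG_LEVEL
      value: "info"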

solmonk commented 10 months ago

Please check out the latest v1.0.3 release, which removes the ConfigMap. Thanks!
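Upgrading should be the same command as the reproduction above with the version bumped; a sketch:

  helm upgrade gateway-api-controller \
    oci://public.ecr.aws/aws-application-networking-k8s/aws-gateway-controller-chart \
    --version=v1.0.3 \
    --set=serviceAccount.create=false --namespace aws-application-networking-system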