rancher / dashboard

The Rancher UI
https://rancher.com
Apache License 2.0

[BUG] Deployment yaml gets mangled at hostAlias after form edit #10942

Open baptisterajaut opened 2 months ago

baptisterajaut commented 2 months ago

Hey, I noticed this annoying bug.

Rancher Server Setup

User Information

Describe the bug
When creating a host alias, everything works fine. For example, here's the YAML of a service that works:

      hostAliases:
        - hostnames:
            - mysql
          ip: 192.168.10.10
      restartPolicy: Always

However, when I went and edited it with the form, I can't apply it because of this error: `Deployment in version "v1" cannot be handled as a Deployment: json: cannot unmarshal string into Go struct field HostAlias.spec.template.spec.hostAliases.hostnames of type []string`

And as it stands, the hostAliases spec is indeed mangled: the hostnames list has been collapsed into a bare string:

      hostAliases:
        - ip: 192.168.10.10
          hostnames: mysql
#        - hostnames:
#            - string
#          ip: string
      hostNetwork: false
      hostname: my-db-hostname
      restartPolicy: Always

Result

I have to fix the YAML by hand.

Expected Result
That shouldn't be necessary.

Screenshots

Additional context
I added the network hostname through the form. I did not add the hostNetwork property.

rak-phillip commented 2 months ago

@baptisterajaut thanks for raising this issue and helping to make Rancher better!

I attempted to reproduce the issue you've described, but I was unable to generate the same error.

To help us better understand and triage this issue, could you provide a minimal manifest that generates the error? Can you also provide steps on how you're editing the host alias through the form?

baptisterajaut commented 2 months ago

I can replicate this issue 100%.

spec:
      affinity: {}
      containers:
        - image: nginx:latest
          imagePullPolicy: Always
          name: container-0
          resources: {}
          securityContext:
            allowPrivilegeEscalation: false
            privileged: false
            readOnlyRootFilesystem: false
            runAsNonRoot: false
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsConfig: {}
      dnsPolicy: ClusterFirst
      hostAliases:
        - hostnames:
            - somewhere
          ip: 1.1.1.1

This will deploy fine.


After editing the workload through the form, the spec becomes:

spec:
      containers:
        - image: nginx:latest
          imagePullPolicy: Always
          name: container-0
          securityContext:
            allowPrivilegeEscalation: false
            privileged: false
            readOnlyRootFilesystem: false
            runAsNonRoot: false
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          _init: false
          __active: true
          resources: {}
      dnsPolicy: ClusterFirst
      hostAliases:
        - ip: 1.1.1.1
          hostnames: somewhere
#        - hostnames:
#            - string
#          ip: string
      hostNetwork: false
      hostname: myalias
      restartPolicy: Always
      schedulerName: default-scheduler
rak-phillip commented 2 months ago

@baptisterajaut thanks for the additional information, the issue has been confirmed.

This appears to be an issue with workload resources (deployments, replicasets, jobs, etc.) and doesn't appear on the Pods form.
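One plausible guard against this class of bug (a hypothetical sketch in Go, not the dashboard's actual form-handling code, and `normalizeHostnames` is an invented name) is a normalization step that coerces a scalar `hostnames` value back into the single-element list the Kubernetes `HostAlias` schema requires before the spec is serialized:

```go
package main

import "fmt"

// normalizeHostnames coerces a hostnames value that may have been
// collapsed into a bare string back into a []string. It also handles
// the []interface{} shape a generic YAML/JSON decoder produces.
// Illustrative sketch only; not the dashboard's real implementation.
func normalizeHostnames(v interface{}) []string {
	switch h := v.(type) {
	case string:
		// The mangled case: wrap the scalar in a one-element list.
		return []string{h}
	case []string:
		return h
	case []interface{}:
		out := make([]string, 0, len(h))
		for _, item := range h {
			if s, ok := item.(string); ok {
				out = append(out, s)
			}
		}
		return out
	default:
		return nil
	}
}

func main() {
	fmt.Println(normalizeHostnames("somewhere"))        // [somewhere]
	fmt.Println(normalizeHostnames([]string{"a", "b"})) // [a b]
}
```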

@gaktive transferring this issue to the Dashboard repo for additional triage.