pulumi / pulumi-kubernetes

A Pulumi resource provider for Kubernetes to manage API resources and workloads in running clusters
https://www.pulumi.com/docs/reference/clouds/kubernetes/
Apache License 2.0

Deployment resource state does not update #3026

Closed soujiro32167 closed 1 month ago

soujiro32167 commented 1 month ago

What happened?

Using a Kubernetes provider with renderYamlToDirectory, I want to create a deployment and a configmap.

The resources are created successfully, and a second pulumi up reports no changes.

However, after I update the deployment, the Pulumi state does not update. Updating the configmap works fine.

To reproduce:

  1. Run the reproducer file with a clean stack: pulumi up -s repro
  2. Update the deployment: change containerPort to 9090
  3. pulumi up -s repro --diff shows the diff correctly
  4. Confirm with yes to apply the changes
  5. pulumi up -s repro --diff shows the same diff, even though the change was already applied

Note: the rendered YAML file yamls/apps_v1-deployment-myns-my-deployment.yaml is updated correctly

Example

import * as k8s from "@pulumi/kubernetes"
import * as pulumi from "@pulumi/pulumi"

const provider = new k8s.Provider("k8s", {
    renderYamlToDirectory: 'yamls',
    namespace: 'myns'
})

const configMap = new k8s.core.v1.ConfigMap("my-configmap", {
    metadata: {
        name: "my-configmap",
    },
    data: {
        "key1": "value1",
        "key2": "value2",
    },
}, {provider})

const deployment = new k8s.apps.v1.Deployment("my-deployment", {
    metadata: {
        name: "my-deployment",
    },
    spec: {
        replicas: 1,
        selector: {
            matchLabels: {
                app: "my-deployment",
            },
        },
        template: {
            metadata: {
                labels: {
                    app: "my-deployment",
                },
            },
            spec: {
                containers: [
                    {
                        name: "my-deployment",
                        image: "nginx",
                        ports: [
                            {
                                containerPort: 8080,
                            },
                        ],
                    },
                ],
            },
        },
    },
}, {provider})

Output of pulumi about

➜  typescript git:(main) ✗ pulumi about
CLI          
Version      3.116.1
Go Version   go1.22.2
Go Compiler  gc

Plugins
KIND      NAME        VERSION
resource  aws         6.33.1
resource  kafka       3.7.1
resource  kubernetes  4.11.0
language  nodejs      unknown
resource  postgresql  3.11.0

Host     
OS       darwin
Version  14.4.1
Arch     arm64

This project is written in nodejs: executable='***' version='v20.11.1'

Backend        
Name           ***
URL            file://~
User           ***
Organizations  
Token type     personal

Dependencies:
NAME                VERSION
@pulumi/kafka       3.7.1
@pulumi/kubernetes  4.11.0
@pulumi/pulumi      3.115.1
ts-pattern          5.0.6
yaml                2.4.2
typescript          5.4.5
@pulumi/aws         6.33.1
@pulumi/postgresql  3.11.0
@types/node         20.10.5

Pulumi locates its logs in /var/folders/s_/rr4bg4qx7hv__l9g4fqw987w0000gq/T/ by default
warning: Failed to get information about the current stack: No current stack

Additional context

No response

Contributing

Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

rquitales commented 1 month ago

Thanks for reporting this issue @soujiro32167. I am able to reproduce this. Note that the same issue also affects ConfigMaps if they are configured to be mutable in the k8s provider setup. The issue does not surface for the ConfigMap in this repro because the provider performs a replacement, which goes through a different flow for saving state than an update does.
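
As a hedged illustration of the point above, here is a minimal sketch of a provider configuration that would make ConfigMap changes take the in-place update path rather than the replacement path, assuming the `enableConfigMapMutable` option from the pulumi-kubernetes Provider docs (the option name and behavior are assumptions from those docs, not from this issue):

```typescript
import * as k8s from "@pulumi/kubernetes";

// With enableConfigMapMutable set, ConfigMap edits are applied as in-place
// updates instead of delete-and-replace, so they should go through the same
// Update flow as Deployments and exhibit the same stale-state symptom.
const provider = new k8s.Provider("k8s-mutable-cm", {
    renderYamlToDirectory: "yamls",
    namespace: "myns",
    enableConfigMapMutable: true,
});
```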

The bug is most likely triggered here: https://github.com/pulumi/pulumi-kubernetes/blob/41b0d90d1f9e1c552878671519e241b358615ece/provider/pkg/provider/provider.go#L2365

We should store newInputs instead of oldLive, similar to what we do in the Create flow (ref: https://github.com/pulumi/pulumi-kubernetes/blob/41b0d90d1f9e1c552878671519e241b358615ece/provider/pkg/provider/provider.go#L1865).

soujiro32167 commented 1 month ago

Thank you!