pulumi / pulumi-google-native


FieldMask path issue blocking update for Dataproc cluster #840

Open solomonshorser opened 1 year ago

solomonshorser commented 1 year ago

What happened?

I ran pulumi up to apply some configuration changes to a Dataproc cluster, but the update failed.

Expected Behavior

I expected Pulumi to update my Dataproc cluster's configuration.

Steps to reproduce

  1. Modify the configuration of the Dataproc cluster in my Pulumi program
  2. Run pulumi up
  3. Observe the error message:

    error sending request: googleapi: Error 400: FieldMask path 'config' must be one of the following:
    [config.worker_config.num_instances, config.secondary_worker_config.num_instances,
    config.lifecycle_config.auto_delete_ttl, config.lifecycle_config.auto_delete_time,
    config.lifecycle_config.idle_delete_ttl, config.autoscaling_config.policy_uri, labels].:
    "https://dataproc.googleapis.com/v1/projects/MY_PROJECT/regions/us-east4/clusters/MY_CLUSTER?updateMask=config"
    map[clusterName:MY_CLUSTER
        config:map[
            autoscalingConfig:map[policyUri:projects/MY_PROJECT/regions/us-east4/autoscalingPolicies/dev-autoscale-policy]
            configBucket:MY_CLUSTER-config
            encryptionConfig:map[gcePdKmsKeyName:projects/MY_PROJECT/locations/us-east4/keyRings/my-keyring/cryptoKeys/MY_CLUSTER]
            endpointConfig:map[enableHttpPortAccess:true]
            gceClusterConfig:map[internalIpOnly:false
                serviceAccountScopes:[https://www.googleapis.com/auth/cloud-platform]
                shieldedInstanceConfig:map[enableIntegrityMonitoring:true enableSecureBoot:true enableVtpm:true]
                subnetworkUri:projects/OTHER_PROJECT/regions/us-east4/subnetworks/MYSUBNET
                tags:[MY_CLUSTER]]
            initializationActions:[map[executableFile:gs://MY_BUCKET/startup.sh executionTimeout:600s]]
            lifecycleConfig:map[autoDeleteTtl:1209600s idleDeleteTtl:1209600s]
            masterConfig:map[diskConfig:map[bootDiskSizeGb:50 bootDiskType:pd-standard] machineTypeUri:c2-standard-4 numInstances:1]
            secondaryWorkerConfig:map[numInstances:0]
            softwareConfig:map[imageVersion:2.0.27-debian10 optionalComponents:[JUPYTER]]
            tempBucket:MY_CLUSTER-temp
            workerConfig:map[diskConfig:map[bootDiskSizeGb:100 bootDiskType:pd-standard] machineTypeUri:c2-standard-8 numInstances:2]]
        projectId:MY_PROJECT]
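For context, the error lists the only FieldMask paths the Dataproc v1 clusters.patch endpoint accepts, while the provider sent updateMask=config. A minimal sketch of that constraint (the helper name and usage are hypothetical, not part of the provider; the path list is taken verbatim from the error above):

```typescript
// Paths that Dataproc's clusters.patch accepts in updateMask, per the error message.
const UPDATABLE_PATHS = new Set<string>([
    "config.worker_config.num_instances",
    "config.secondary_worker_config.num_instances",
    "config.lifecycle_config.auto_delete_ttl",
    "config.lifecycle_config.auto_delete_time",
    "config.lifecycle_config.idle_delete_ttl",
    "config.autoscaling_config.policy_uri",
    "labels",
]);

// Hypothetical helper: split a diff's changed field paths into those a PATCH
// can apply in place and those that would require recreating the cluster.
function partitionChanges(changedPaths: string[]): { patchable: string[]; requiresReplace: string[] } {
    const patchable: string[] = [];
    const requiresReplace: string[] = [];
    for (const path of changedPaths) {
        (UPDATABLE_PATHS.has(path) ? patchable : requiresReplace).push(path);
    }
    return { patchable, requiresReplace };
}

const result = partitionChanges([
    "config.worker_config.num_instances",   // patchable in place
    "config.master_config.machine_type_uri", // not in the allowed set
]);
console.log("patchable:", result.patchable.join(","));
console.log("requiresReplace:", result.requiresReplace.join(","));
```

The failure suggests the provider is sending the whole config object as a single mask path instead of narrowing the mask to the changed, updatable sub-paths.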

Output of pulumi about

    CLI
    Version      3.55.0
    Go Version   go1.20
    Go Compiler  gc

    Plugins
    NAME           VERSION
    gcp            6.50.0
    gcp            5.26.0
    google-native  0.28.0
    google-native  0.26.1
    kubernetes     3.24.1
    nodejs         unknown
    random         4.2.0

    Host
    OS       darwin
    Version  13.2.1
    Arch     x86_64

    This project is written in nodejs: executable='/usr/local/bin/node' version='v19.0.1'

    Dependencies:
    NAME                   VERSION
    @pulumi/pulumi         3.55.0
    @pulumi/random         4.2.0
    @types/node            10.17.60
    @pulumi/gcp            6.50.0
    @pulumi/google-native  0.28.0

Additional context

No response

Contributing

Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

rquitales commented 1 year ago

Hi @solomonshorser, would you be able to provide more information about how you're configuring your Dataproc cluster? From the error message, it looks like the provider is sending updateMask=config, which the API rejects in favor of the specific updatable sub-paths it lists. A minimal code chunk to reproduce this issue would be great!

solomonshorser commented 1 year ago

@rquitales Code sample below:

        const cluster = new gcp.dataproc.v1.Cluster('my_cluster', {
            clusterName: config.dataProc.cluster.name,
            config: {
                autoscalingConfig: policy,
                softwareConfig: {
                    imageVersion: config.dataProc.imageVersion,
                    optionalComponents: config.dataProc.optionalComponents,
                },
                masterConfig: {
                    diskConfig: {
                        bootDiskSizeGb: config.dataProc.cluster.master.bootDiskSizeGb,
                        bootDiskType: config.dataProc.cluster.master.bootDiskType
                    },
                    machineTypeUri: config.dataProc.cluster.master.machineType,
                    numInstances: config.dataProc.cluster.master.numInstances,
                },
                workerConfig: {
                    diskConfig: {
                        bootDiskSizeGb: config.dataProc.cluster.worker.bootDiskSizeGb,
                        bootDiskType: config.dataProc.cluster.worker.bootDiskType,
                    },
                    machineTypeUri: config.dataProc.cluster.worker.machineType,
                    numInstances: config.dataProc.autoscale.workerConfig.minInstances,
                },
                secondaryWorkerConfig: {
                    numInstances: 0 
                },
                gceClusterConfig: {
                    subnetworkUri: `projects/${config.hostProject.id}/regions/${config.region}/subnetworks/${config.dataProc.subnet}`,
                    internalIpOnly: false,
                    tags: [config.dataProc.cluster.tag],
                    shieldedInstanceConfig: {
                        enableIntegrityMonitoring: true,
                        enableSecureBoot: true,
                        enableVtpm: true
                    },
                    // required for secret manager access
                    serviceAccountScopes: [
                        "https://www.googleapis.com/auth/cloud-platform"
                    ]
                },
                endpointConfig: {
                    enableHttpPortAccess: true
                },
                configBucket: config.buckets.config.name,
                tempBucket: config.buckets.temp.name,
                encryptionConfig: {
                    gcePdKmsKeyName: `projects/${config.commonProject.id}/locations/${config.region}/keyRings/${config.keyRing}/cryptoKeys/${config.dataProc.kms}`
                },
                initializationActions: [
                    {
                        executableFile: pulumi.interpolate`gs://${config.buckets.artifacts}/${config.dataProc.bootstrap.path}/${config.dataProc.bootstrap.version}`,
                        executionTimeout: config.dataProc.bootstrap.timeout
                    }
                ]
            },
            project: config.commonProject.id,
            region: config.region
        },
        {
            dependsOn: [
                configBucket,
                tempBucket
            ],
        });

(This code worked when the cluster was first created, quite a while ago.)
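One possible workaround sketch while the provider's diffing is fixed: ask Pulumi to replace the cluster whenever fields outside the API's updatable set change, rather than attempting a PATCH the API will reject. This is an untested Pulumi program fragment; the property paths passed to replaceOnChanges are illustrative assumptions, and the args are elided to the sample above.

```typescript
import * as gcp from "@pulumi/google-native";

// Sketch only: same cluster args as in the sample above, plus resource
// options that force replacement when non-updatable config sub-fields
// change. replaceOnChanges and deleteBeforeReplace are standard Pulumi
// resource options; the listed paths are assumptions, not a verified fix.
const cluster = new gcp.dataproc.v1.Cluster('my_cluster', {
    clusterName: "MY_CLUSTER",
    // ... config, project, region as in the sample above ...
}, {
    replaceOnChanges: ["config.masterConfig", "config.gceClusterConfig"],
    deleteBeforeReplace: true,
});
```

Note that replacement deletes and recreates the cluster, so this is only viable for clusters whose state can be rebuilt.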