pulumi / pulumi-google-native

FieldMask errors which don't seem to make sense. #649

Open solomonshorser opened 2 years ago

solomonshorser commented 2 years ago

What happened?

Running pulumi up results in FieldMask errors that I don't think should have occurred. The only change that triggered this was adding ignoreChanges, but the properties I added to ignoreChanges don't seem to be the ones these errors mention.

Steps to reproduce

  1. Have a GCP project with a Dataproc cluster and a Composer v2 environment.
  2. Add ignoreChanges: ['rotationPeriod', 'nextRotationTime'] to the resource options of the CryptoKey that the Composer environment dependsOn.
  3. Add ignoreChanges: ['iamConfiguration'] to the resource options of the buckets (temp and config) that the Dataproc cluster uses for configBucket and tempBucket (see the sketch after this list).
  4. Run pulumi preview --diff and observe that the diff doesn't show any changes for the Composer CryptoKey or the buckets.
  5. Run pulumi up and get an error.
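
A minimal sketch of the ignoreChanges additions from steps 2 and 3 (resource and variable names here are placeholders, not our exact code; the real Composer code is in a later comment):

import * as gcp from "@pulumi/google-native";

// Step 2: ignore the rotation fields on the CryptoKey that the Composer
// environment depends on (names and values are placeholders).
const composerKey = new gcp.cloudkms.v1.CryptoKey("composer_key", {
    keyRingId: "my_keyring",
    location: "MY_REGION",
    project: "MY_PROJECT",
    purpose: "ENCRYPT_DECRYPT",
    cryptoKeyId: "my_composerv2_key",
}, {
    ignoreChanges: ["rotationPeriod", "nextRotationTime"],
});

// Step 3: ignore IAM-configuration drift on the buckets that the Dataproc
// cluster uses for configBucket and tempBucket.
const configBucket = new gcp.storage.v1.Bucket("config_bucket", {
    name: "my_config_bucket",
    project: "MY_PROJECT",
}, {
    ignoreChanges: ["iamConfiguration"],
});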

Expected Behavior

I expected Pulumi to ignore the CryptoKey, since I had told it to ignore the only fields that had changed. I expected Pulumi to ignore the Dataproc cluster's buckets, but to update the cluster's main config (which had some changes).

Actual Behavior

Pulumi attempted to update the Composer environment and the Dataproc cluster and failed (I've manually formatted the output to make it a bit more readable):

google-native:composer/v1:Environment (my_composerv2_environment):
    error: error sending request: googleapi: Error 400: config is not a supported FieldMask path:
"[https://composer.googleapis.com/v1/projects/MY_PROJECT/locations/MY_REGION/environments/my_composerv2_env?updateMask=config,state"]
(https://composer.googleapis.com/v1/projects/MY_PROJECT/locations/MY_REGION/environments/my_composerv2_env?updateMask=config,state%22) 

map[ config:
  map[ encryptionConfig:
    map[ kmsKeyName: projects/MY_PROJECT/locations/MY_REGION/my_composerv2_keyring/cryptoKeys/my_composerv2_env ]
    environmentSize:ENVIRONMENT_SIZE_SMALL
    nodeConfig:
      map[ ipAllocationPolicy:
        map[ clusterSecondaryRangeName:pods servicesSecondaryRangeName:services]
        network:projects/OTHER_PROJECT/global/networks/my-vpc-network-1 
        subnetwork:projects/OTHER_PROJECT/regions/MY_REGION/subnetworks/my_subnet
      ] 
      privateEnvironmentConfig:
      map[ enablePrivateEnvironment:true
        privateClusterConfig:
          map[enablePrivateEndpoint:true]
      ] 
      softwareConfig:
        map[ airflowConfigOverrides:
          map[ core-dags_are_paused_at_creation:True
            scheduler-catchup_by_default:False
            secrets-backend:airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend
            secrets-backend_kwargs:{"secretname": "SECRETVALUE"}
          ]
          imageVersion:composer-2.0.9-airflow-2.2.3
        ] 
        workloadsConfig:
          map[ scheduler:
            map[ count:1
              cpu:0.5
              memoryGb:1.875
              storageGb:1
            ]
            webServer:
              map[ cpu:2
                memoryGb:7.5
                storageGb:5
              ]
            worker:
              map[ cpu:0.5
                maxCount:3
                memoryGb:1.875
                minCount:1
                storageGb:1
              ]
            ]
          ] 
  name:projects/MY_PROJECT/locations/us-east4/environments/my_composerv2_env
]

google-native:dataproc/v1:Cluster (dp_cluster):
  error: error sending request: googleapi: Error 400: FieldMask path 'config' must be one of the following: 
[
  config.worker_config.num_instances,
  config.secondary_worker_config.num_instances,
  config.lifecycle_config.auto_delete_ttl,
  config.lifecycle_config.auto_delete_time,
  config.lifecycle_config.idle_delete_ttl,
  config.autoscaling_config.policy_uri,
  labels
].:
"[https://dataproc.googleapis.com/v1/projects/MY_PROJECT/regions/MY_REGION/clusters/my_dataproc_cluster?updateMask=config"]
(https://dataproc.googleapis.com/v1/projects/MY_PROJECT/regions/MY_REGION/clusters/my_dataproc_cluster?updateMask=config%22)

map[ clusterName:my_dataproc_cluster
  config:
    map[ autoscalingConfig:
      map[ policyUri:projects/MY_PROJECT/regions/MY_REGION/autoscalingPolicies/dev-autoscale-policy
      ]
      configBucket:my_config_bucket
      encryptionConfig:
        map[ gcePdKmsKeyName:projects/MY_PROJECT/locations/MY_REGION/keyRings/my_keyring/cryptoKeys/my_dataproc_key
      ]
      endpointConfig:
        map[ enableHttpPortAccess:true
      ]
      gceClusterConfig:
        map[ internalIpOnly:false
          serviceAccountScopes:[https://www.googleapis.com/auth/cloud-platform]
          shieldedInstanceConfig:
            map[ enableIntegrityMonitoring:true
              enableSecureBoot:true
              enableVtpm:true]
            subnetworkUri:projects/OTHER_PROJECT/regions/MY_REGION/subnetworks/my_subnet
            tags:[my_dataproc]
        ]
      initializationActions:[
        map[ executableFile:gs://my_bucket/boot.sh
          executionTimeout:600s]
        ]
      masterConfig:
        map[ diskConfig:
          map[ bootDiskSizeGb:50
            bootDiskType:pd-standard
          ]
          machineTypeUri:c2-standard-4
          numInstances:1
        ]
      secondaryWorkerConfig:
        map[ numInstances:2
        ]
      softwareConfig:
        map[
          imageVersion:2.0.27-debian10
          optionalComponents:[JUPYTER]
        ]
      tempBucket:my_temp_bucket
      workerConfig:
        map[ diskConfig:
          map[ bootDiskSizeGb:100
            bootDiskType:pd-standard
          ]
          machineTypeUri:c2-standard-8
          numInstances:2
        ]
      ]
    projectId:MY_PROJECT_ID
  ]
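
Note the updateMask=config in both request URLs: the provider appears to send the top-level config path, while these APIs only accept the granular paths listed in the error message. As a rough illustration (not the provider's actual code; the endpoint and mask path are taken from the error above, everything else is hypothetical), a PATCH the Dataproc API would accept looks like this:

// Sketch only: a Dataproc clusters.patch request using one of the granular
// FieldMask paths listed in the error. Requires Node 18+ for the global
// fetch; the access token and all values are placeholders.
const url =
    "https://dataproc.googleapis.com/v1/projects/MY_PROJECT/regions/MY_REGION" +
    "/clusters/my_dataproc_cluster?updateMask=config.worker_config.num_instances";

async function resizeWorkers(accessToken: string): Promise<void> {
    const res = await fetch(url, {
        method: "PATCH",
        headers: {
            "Authorization": `Bearer ${accessToken}`,
            "Content-Type": "application/json",
        },
        // The body carries only the field named in the updateMask.
        body: JSON.stringify({ config: { workerConfig: { numInstances: 3 } } }),
    });
    if (!res.ok) throw new Error(`PATCH failed: ${res.status} ${await res.text()}`);
}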

Output of pulumi about

CLI          
Version      3.38.0
Go Version   go1.19
Go Compiler  gc

Plugins
NAME           VERSION
gcp            5.14.0
google-native  0.18.0
google-native  0.14.0
kubernetes     3.19.0
nodejs         unknown
random         4.2.0

Host     
OS       ubuntu
Version  20.04
Arch     x86_64

This project is written in nodejs: executable='/azp/_work/_tool/node/14.20.0/x64/bin/node' version='v14.20.0'

[stacks have been removed]

Dependencies:
NAME                   VERSION
@pulumi/gcp            5.14.0
@pulumi/google-native  0.18.0
@pulumi/pulumi         3.8.0
@pulumi/random         4.2.0
@tr-cdk/config         1.0.2
@tr-gcp-pulumi/common  0.5.0
@types/node            10.17.60

Additional context

No response

Contributing

Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

solomonshorser commented 2 years ago

Upgrading to newer versions of the google-native and gcp libraries seems to have resolved this. I will probably close the issue in a few days if the problem does not happen again.

solomonshorser commented 1 year ago

It's been almost a year, but it's happening again. :( Error messages:

  google-native:dataproc/v1:Cluster (my_dataproc_cluster):
    error: error sending request: googleapi: Error 400: FieldMask path 'config' must be one of the following: [config.worker_config.num_instances, config.lifecycle_config.idle_delete_ttl, config.lifecycle_config.auto_delete_ttl, config.lifecycle_config.auto_delete_time, config.secondary_worker_config.num_instances, config.autoscaling_config.policy_uri, labels].: "https://dataproc.googleapis.com/v1/projects/PROJECTID/regions/us-east4/clusters/MYCLUSTER?updateMask=config" map[clusterName:MYCLUSTER config:map[autoscalingConfig:map[policyUri:projects/PROJECTID/regions/us-east4/autoscalingPolicies/dev-autoscale-policy] configBucket:MYCLUSTER-config encryptionConfig:map[gcePdKmsKeyName:projects/PROJECTID/locations/us-east4/keyRings/MYKEYRING/cryptoKeys/MYCLUSTER] endpointConfig:map[enableHttpPortAccess:true] gceClusterConfig:map[internalIpOnly:false serviceAccountScopes:[https://www.googleapis.com/auth/cloud-platform] shieldedInstanceConfig:map[enableIntegrityMonitoring:true enableSecureBoot:true enableVtpm:true] subnetworkUri:projects/PROJECTID2/regions/us-east4/subnetworks/MYSUBNET tags:[MYCLUSTER]] initializationActions:[map[executableFile:gs://MYBUCKET/bootstrap/bootstrap_v2.sh executionTimeout:600s]] masterConfig:map[diskConfig:map[bootDiskSizeGb:50 bootDiskType:pd-standard] machineTypeUri:c2-standard-4 numInstances:1] secondaryWorkerConfig:map[numInstances:0] softwareConfig:map[imageVersion:2.0.27-debian10 optionalComponents:[JUPYTER] properties:map[capacity-scheduler:yarn.scheduler.capacity.root.default.ordering-policy:fair core:fs.gs.block.size:134217728 core:fs.gs.metadata.cache.enable:false core:hadoop.ssl.enabled.protocols:TLSv1,TLSv1.1,TLSv1.2 distcp:mapreduce.map.java.opts:-Xmx768m distcp:mapreduce.map.memory.mb:1024 distcp:mapreduce.reduce.java.opts:-Xmx768m distcp:mapreduce.reduce.memory.mb:1024 hadoop-env:HADOOP_DATANODE_OPTS:-Xmx512m hdfs:dfs.datanode.address:0.0.0.0:9866 hdfs:dfs.datanode.http.address:0.0.0.0:9864 hdfs:dfs.datanode.https.address:0.0.0.0:9865 hdfs:dfs.datanode.ipc.address:0.0.0.0:9867 hdfs:dfs.namenode.handler.count:40 hdfs:dfs.namenode.http-address:0.0.0.0:9870 hdfs:dfs.namenode.https-address:0.0.0.0:9871 hdfs:dfs.namenode.lifeline.rpc-address:MYCLUSTER-m:8050 hdfs:dfs.namenode.secondary.http-address:0.0.0.0:9868 hdfs:dfs.namenode.secondary.https-address:0.0.0.0:9869 hdfs:dfs.namenode.service.handler.count:20 hdfs:dfs.namenode.servicerpc-address:MYCLUSTER-m:8051 hive:hive.fetch.task.conversion:none mapred-env:HADOOP_JOB_HISTORYSERVER_HEAPSIZE:4000 mapred:mapreduce.job.maps:93 mapred:mapreduce.job.reduce.slowstart.completedmaps:0.95 mapred:mapreduce.job.reduces:31 mapred:mapreduce.jobhistory.recovery.store.class:org.apache.hadoop.mapreduce.v2.hs.HistoryServerLeveldbStateStoreService mapred:mapreduce.map.cpu.vcores:1 mapred:mapreduce.map.java.opts:-Xmx2828m mapred:mapreduce.map.maxattempts:10 mapred:mapreduce.map.memory.mb:3536 mapred:mapreduce.reduce.cpu.vcores:1 mapred:mapreduce.reduce.java.opts:-Xmx2828m mapred:mapreduce.reduce.maxattempts:10 mapred:mapreduce.reduce.memory.mb:3536 mapred:mapreduce.task.io.sort.mb:256 mapred:yarn.app.mapreduce.am.command-opts:-Xmx2828m mapred:yarn.app.mapreduce.am.resource.cpu-vcores:1 mapred:yarn.app.mapreduce.am.resource.mb:3536 spark-env:SPARK_DAEMON_MEMORY:4000m spark:spark.driver.maxResultSize:2048m spark:spark.driver.memory:4096m spark:spark.executor.cores:4 spark:spark.executor.instances:2 spark:spark.executor.memory:12859m spark:spark.executorEnv.OPENBLAS_NUM_THREADS:1 spark:spark.scheduler.mode:FAIR 
spark:spark.sql.cbo.enabled:true spark:spark.stage.maxConsecutiveAttempts:10 spark:spark.task.maxFailures:10 spark:spark.ui.port:0 spark:spark.yarn.am.attemptFailuresValidityInterval:1h spark:spark.yarn.am.memory:640m spark:spark.yarn.executor.failuresValidityInterval:1h yarn-env:YARN_NODEMANAGER_HEAPSIZE:3276 yarn-env:YARN_RESOURCEMANAGER_HEAPSIZE:4000 yarn-env:YARN_TIMELINESERVER_HEAPSIZE:4000 yarn:yarn.nodemanager.address:0.0.0.0:8026 yarn:yarn.nodemanager.resource.cpu-vcores:8 yarn:yarn.nodemanager.resource.memory-mb:28288 yarn:yarn.resourcemanager.am.max-attempts:10 yarn:yarn.resourcemanager.nodemanager-graceful-decommission-timeout-secs:86400 yarn:yarn.scheduler.maximum-allocation-mb:28288 yarn:yarn.scheduler.minimum-allocation-mb:1]] tempBucket:MYCLUSTER-temp workerConfig:map[diskConfig:map[bootDiskSizeGb:100 bootDiskType:pd-standard] machineTypeUri:c2-standard-8 numInstances:2]] projectId:PROJECTID]

  google-native:composer/v1:Environment (my_composer_env):
    error: error sending request: googleapi: Error 400: config is not a supported FieldMask path: "https://composer.googleapis.com/v1/projects/PROJECTID/locations/us-east4/environments/MYCOMPOSERENV?updateMask=config" map[config:map[encryptionConfig:map[kmsKeyName:projects/PROJECTID/locations/us-east4/keyRings/MYKEYRING/cryptoKeys/MYCOMPOSERENV] environmentSize:ENVIRONMENT_SIZE_SMALL nodeConfig:map[ipAllocationPolicy:map[clusterSecondaryRangeName:pods servicesSecondaryRangeName:services] network:projects/PROJECTID2/global/networks/VPC1 subnetwork:projects/PROJECTID2/regions/us-east4/subnetworks/COMPOSERSUBNET] privateEnvironmentConfig:map[enablePrivateEnvironment:true privateClusterConfig:map[enablePrivateEndpoint:true]] softwareConfig:map[airflowConfigOverrides:map[core-dags_are_paused_at_creation:True scheduler-catchup_by_default:False secrets-backend:airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend secrets-backend_kwargs:{"project_id": "PROJECTID3"}] imageVersion:composer-2.1.7-airflow-2.4.3] workloadsConfig:map[scheduler:map[count:1 cpu:0.5 memoryGb:1.875 storageGb:1] webServer:map[cpu:2 memoryGb:7.5 storageGb:5] worker:map[cpu:0.5 maxCount:3 memoryGb:1.875 minCount:1 storageGb:1]]] name:projects/PROJECTID/locations/us-east4/environments/MYCOMPOSERENV]

Current dependencies are:

NAME                   VERSION
@pulumi/gcp            6.59.0
@pulumi/google-native  0.31.0
@pulumi/pulumi         3.75.0
@pulumi/random         4.2.0
@types/node            10.17.60

I suppose I could downgrade to the exact versions I was using that seemed to fix this last time, but I'm concerned that might cause other problems.

Does anyone know why this happens?

solomonshorser commented 1 year ago

I tried reverting to what seemed to work last year:

"@pulumi/gcp": "6.35.0",
"@pulumi/google-native": "0.23.0",
"@pulumi/pulumi": "3.8.0",

but that didn't seem to work this time.

mikhailshilkov commented 1 year ago

@solomonshorser Could you please share a code snippet that I could use to repro the issue? Thank you!

solomonshorser commented 1 year ago

@mikhailshilkov

A supporting class to manage the creation of the composer environment:

import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/google-native";
import { config } from "./config";
import { EnvironmentConfigEnvironmentSize } from "@pulumi/google-native/composer/v1";

export class Composer {
    constructor(dependencies: pulumi.Resource[]) {
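        // KMS key used to encrypt the Composer environment. The ignoreChanges
        // on the rotation fields below is the change that preceded the error.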
        const composerKey = new gcp.cloudkms.v1.CryptoKey('composer_key', {
            keyRingId: config.keyRing,
            location: config.region,
            purpose: "ENCRYPT_DECRYPT",
            project: config.commonProject.id,
            cryptoKeyId: config.cloudComposer.kms
        }, {
            dependsOn: dependencies,
            ignoreChanges: ['rotationPeriod', 'nextRotationTime']
        });

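        // Composer v2 environment; pulumi up fails updating this resource with
        // "Error 400: config is not a supported FieldMask path".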
        const cloudComposer = new gcp.composer.v1.Environment('cloud_composer', {
            name: `projects/${config.commonProject.id}/locations/${config.region}/environments/${config.cloudComposer.name}`,
            project: config.commonProject.id,
            location: config.region,
            config: {
                environmentSize: config.cloudComposer.environmentSize as EnvironmentConfigEnvironmentSize,
                privateEnvironmentConfig: {
                    enablePrivateEnvironment: true,
                    privateClusterConfig: {
                        enablePrivateEndpoint: true
                    }
                },
                nodeConfig: {
                    ipAllocationPolicy: {
                        clusterSecondaryRangeName: config.cloudComposer.clusterSecondaryRangeName,
                        servicesSecondaryRangeName: config.cloudComposer.servicesSecondaryRangeName,
                    },
                    network: `projects/${config.hostProject.id}/global/networks/${config.hostProject.network}`,
                    subnetwork: `projects/${config.hostProject.id}/regions/${config.region}/subnetworks/${config.cloudComposer.subnet}`
                },
                softwareConfig: {
                    imageVersion: config.cloudComposer.imageVersion,
                    airflowConfigOverrides: {
                        "secrets-backend": "airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend",
                        "secrets-backend_kwargs": `{"project_id": "${config.secretsProject.id}"}`,
                        "core-dags_are_paused_at_creation": "True",
                        "scheduler-catchup_by_default": "False"
                    }
                },
                encryptionConfig: {
                    kmsKeyName: `projects/${config.commonProject.id}/locations/${config.region}/keyRings/${config.keyRing}/cryptoKeys/${config.cloudComposer.kms}`
                },
                workloadsConfig: config.cloudComposer.workloadsConfig
            }
        }, {
            dependsOn: composerKey
        });
    }
}

In our main program:

const keyRing = new gcp.cloudkms.v1.KeyRing('project_keyring', {
    location: config.region,
    project: config.commonProject.id,
    keyRingId: config.keyRing,
}, {
    dependsOn: commonProjectServices
});

const keyRingPolicy = new gcp.cloudkms.v1.KeyRingIamPolicy(`keyring_policy`,
    {
        keyRingId: config.keyRing,
        location: config.region,
        project: config.commonProject.id,
        bindings: [{
            role: 'roles/cloudkms.cryptoKeyEncrypterDecrypter',
            members: [/* long list of members removed... */]
        }]
    }, {
    dependsOn: [keyRing, commonProjectServices]
});

if (config.cloudComposer.enabled) {
    const cloudComposer = new Composer([
        commonProjectServices,
        keyRing,
        keyRingPolicy
    ]);
}

FYI: I also get a similar FieldMask error with objects of type google-native:dataproc/v1:Cluster ("Error 400: FieldMask path 'config' must be one of the following: ..."). I wasn't sure whether I should post more details about that on this issue, or whether it should be a separate issue (since it's a different resource type). A hypothetical sketch of that cluster's shape follows.
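
A hypothetical sketch of that cluster (field values mirror the error output above; the resource options and names are invented, not our actual source):

// Hypothetical shape of the Dataproc cluster that hits the FieldMask error;
// values are copied from the request payload in the error output above.
const dataprocCluster = new gcp.dataproc.v1.Cluster("dp_cluster", {
    projectId: config.commonProject.id,
    region: config.region,
    clusterName: "my_dataproc_cluster",
    config: {
        configBucket: "my_config_bucket", // bucket created with ignoreChanges: ['iamConfiguration']
        tempBucket: "my_temp_bucket",     // likewise
        autoscalingConfig: {
            policyUri: `projects/${config.commonProject.id}/regions/${config.region}/autoscalingPolicies/dev-autoscale-policy`,
        },
        workerConfig: {
            machineTypeUri: "c2-standard-8",
            numInstances: 2,
        },
    },
});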