pulumi / pulumi-gcp

A Google Cloud Platform (GCP) Pulumi resource package, providing multi-language access to GCP

Preview shows incorrect information regarding `maxThroughput` for Serverless VPC Access connector #1077

Open lrnq opened 1 year ago

lrnq commented 1 year ago

What happened?

If not set explicitly, the maximum throughput for a Serverless VPC Access connector appears to be determined by the maximum number of instances passed to pulumi_gcp.vpcaccess.Connector. In particular, it seems that max_throughput = 100 * max_instances if the max_throughput parameter is not specified. When running pulumi up for the first time (see steps to reproduce), this works as expected.

However, the preview shown when running pulumi up once again suggests that the Serverless VPC Access connector should be replaced and the maximum throughput set to the default value (300). While the old connector does indeed get deleted and replaced with a new one, the parameters appear to be exactly the same as before the replacement, i.e. max_throughput = 100 * max_instances is still satisfied after replacement. This pattern repeats on every subsequent deployment.

Expected Behavior

I expect max_throughput = 100 * max_instances if max_throughput is not set explicitly.

Steps to reproduce

I will assume that the GCP project does not have the Serverless VPC Access API enabled, so you can run this example as-is in a new project.

First, after setting up a new stack, run pulumi up with the following __main__.py file, which (i) enables the Serverless VPC Access API and (ii) creates a Serverless VPC Access connector resource:

import pulumi
import pulumi_gcp

gcp_config = pulumi.Config("gcp")

# Enable the VPC API
vpc_api = pulumi_gcp.projects.Service(
    "vpc-api",
    service="vpcaccess.googleapis.com",
    disable_dependent_services=True,
    project=gcp_config.require("project"),
)

# Create a VPC connector
pulumi_gcp.vpcaccess.Connector(
    "vpc-connector",
    project=gcp_config.require("project"),
    region="europe-west1",
    name="vpc-connector",
    min_instances=2,
    max_instances=7,
    machine_type="e2-micro",
    ip_cidr_range="10.8.0.0/28",
    network="default",
    opts=pulumi.ResourceOptions(depends_on=[vpc_api]),
)
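As an aside, one way to confirm the value the provider actually records is to assign the connector to a variable and export its throughput. The following sketch is not part of the original report; it simply replaces the anonymous Connector call above:

# Same connector as above, but assigned to a variable so its outputs can be exported.
connector = pulumi_gcp.vpcaccess.Connector(
    "vpc-connector",
    project=gcp_config.require("project"),
    region="europe-west1",
    name="vpc-connector",
    min_instances=2,
    max_instances=7,
    machine_type="e2-micro",
    ip_cidr_range="10.8.0.0/28",
    network="default",
    opts=pulumi.ResourceOptions(depends_on=[vpc_api]),
)

# With max_instances=7 and max_throughput unset, this is expected to resolve to 700.
pulumi.export("max_throughput", connector.max_throughput)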

Second, run pulumi up once again; the preview will show:

Previewing update (pulumi-issue):
     Type                        Name                       Plan        Info
     pulumi:pulumi:Stack         pulumi-issue-pulumi-issue              
 +-  └─ gcp:vpcaccess:Connector  vpc-connector              replace     [diff: ~maxThroughput]

Resources:
    +-1 to replace
    2 unchanged

Do you want to perform this update? details
  pulumi:pulumi:Stack: (same)
    [urn=urn:pulumi:pulumi-issue::pulumi-issue::pulumi:pulumi:Stack::pulumi-issue-pulumi-issue]
    --gcp:vpcaccess/connector:Connector: (delete-replaced)
        [id=projects/pulumi-issue-project/locations/europe-west1/connectors/vpc-connector]
        [urn=urn:pulumi:pulumi-issue::pulumi-issue::gcp:vpcaccess/connector:Connector::vpc-connector]
        [provider=urn:pulumi:pulumi-issue::pulumi-issue::pulumi:providers:gcp::default_6_55_0::6bc29895-67a6-4a36-8603-64148d9ae17f]
    +-gcp:vpcaccess/connector:Connector: (replace)
        [id=projects/pulumi-issue-project/locations/europe-west1/connectors/vpc-connector]
        [urn=urn:pulumi:pulumi-issue::pulumi-issue::gcp:vpcaccess/connector:Connector::vpc-connector]
        [provider=urn:pulumi:pulumi-issue::pulumi-issue::pulumi:providers:gcp::default_6_55_0::6bc29895-67a6-4a36-8603-64148d9ae17f]
      ~ maxThroughput: 700 => 300
    ++gcp:vpcaccess/connector:Connector: (create-replacement)
        [id=projects/pulumi-issue-project/locations/europe-west1/connectors/vpc-connector]
        [urn=urn:pulumi:pulumi-issue::pulumi-issue::gcp:vpcaccess/connector:Connector::vpc-connector]
        [provider=urn:pulumi:pulumi-issue::pulumi-issue::pulumi:providers:gcp::default_6_55_0::6bc29895-67a6-4a36-8603-64148d9ae17f]
      ~ maxThroughput: 700 => 300

Accepting this change doesn't actually change maxThroughput. This can be seen by accepting the change (the connector is indeed replaced, with what appears to be an identical configuration), running pulumi up once again, and observing the same preview message.
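The deployed connector's throughput can also be checked outside of Pulumi, for instance with the gcloud CLI (shown here as an illustrative check rather than something from the original report):

gcloud compute networks vpc-access connectors describe vpc-connector \
    --region=europe-west1 --project=pulumi-issue-project
# If the observation above holds, the reported maxThroughput remains 700 after the replacement.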

Output of pulumi about

CLI          
Version      3.65.1
Go Version   go1.20.3
Go Compiler  gc

Plugins
NAME    VERSION
gcp     6.55.0
python  unknown

Host     
OS       darwin
Version  12.2.1
Arch     arm64

This project is written in python: executable='/Users/nichlasr/pulumi-issue/venv/bin/python3' version='3.10.10'

Current Stack: pulumi-issue

TYPE                               URN
pulumi:pulumi:Stack                urn:pulumi:pulumi-issue::pulumi-issue::pulumi:pulumi:Stack::pulumi-issue-pulumi-issue
pulumi:providers:gcp               urn:pulumi:pulumi-issue::pulumi-issue::pulumi:providers:gcp::default_6_55_0
gcp:projects/service:Service       urn:pulumi:pulumi-issue::pulumi-issue::gcp:projects/service:Service::vpc-api
gcp:vpcaccess/connector:Connector  urn:pulumi:pulumi-issue::pulumi-issue::gcp:vpcaccess/connector:Connector::vpc-connector

Found no pending operations associated with pulumi-issue

Backend        
Name           nichlass-mbp.lan
URL            file://~
User           nichlasr
Organizations  

Dependencies:
NAME        VERSION
pip         23.1.2
pulumi-gcp  6.55.0
setuptools  67.7.2
wheel       0.40.0

Pulumi locates its logs in /var/folders/5h/psqym7hx2wv5fqlkgr0yr8l40000gn/T/ by default

Additional context

No response

Contributing

Vote on this issue by adding a πŸ‘ reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

thomas11 commented 1 year ago

Hi @lrnq, sorry for the trouble and thank you for the detailed report. Until this is fixed, you might be able to use IgnoreChanges on the throughput property to avoid the diff and the recreation of the resource.
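For example, adapting the __main__.py from the report above, the workaround might look like the following sketch (depending on the SDK version, the ignore_changes entry may need to be the engine-level name maxThroughput rather than the Python-style max_throughput):

# Keep the connector as-is, but tell Pulumi to ignore provider-reported
# drift on the throughput property so no replacement is planned.
pulumi_gcp.vpcaccess.Connector(
    "vpc-connector",
    project=gcp_config.require("project"),
    region="europe-west1",
    name="vpc-connector",
    min_instances=2,
    max_instances=7,
    machine_type="e2-micro",
    ip_cidr_range="10.8.0.0/28",
    network="default",
    opts=pulumi.ResourceOptions(
        depends_on=[vpc_api],
        ignore_changes=["maxThroughput"],
    ),
)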

mjeffryes commented 6 days ago

Unfortunately, it looks like this issue hasn't seen any updates in a while. If you're still encountering this problem, could you leave a quick comment to let us know so we can prioritize it?