pulumi / pulumi-gcp

A Google Cloud Platform (GCP) Pulumi resource package, providing multi-language access to GCP

PANIC in docs #1967

Open t0yv0 opened 6 months ago

t0yv0 commented 6 months ago

What happened?

Documentation has rendered PANIC text but shouldn't. Similar to https://github.com/pulumi/pulumi-random/pull/890, this can likely be fixed by updating the dependencies of the auxiliary providers in .ci-mgmt.yml. A sketch of what that might look like follows below.
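As in the pulumi-random fix linked above, the change would presumably amount to bumping the pinned versions of the auxiliary providers that the docs generator resolves cross-provider examples against. A minimal sketch, assuming `.ci-mgmt.yml` pins those providers under a `plugins:` list (the key names and version numbers here are illustrative, not taken from this repository):

```yaml
# Illustrative sketch only; the real .ci-mgmt.yml layout and versions may differ.
plugins:
  - name: kubernetes   # aux provider referenced by the failing ClusterRoleBinding example
    version: "4.9.0"   # hypothetical newer version that renders without PANIC
  - name: random
    version: "4.16.0"  # hypothetical newer version
```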

Example

git grep PANIC:

provider/cmd/pulumi-resource-gcp/schema.json:            "description": "A Google Vmware Node Pool.\n\n\n\n## Example Usage\n\n### Gkeonprem Vmware Node Pool Basic\n\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as gcp from \"@pulumi/gcp\";\n\nconst default_basic = new gcp.gkeonprem.VMwareCluster(\"default-basic\", {\n    name: \"my-cluster\",\n    location: \"us-west1\",\n    adminClusterMembership: \"projects/870316890899/locations/global/memberships/gkeonprem-terraform-test\",\n    description: \"test cluster\",\n    onPremVersion: \"1.13.1-gke.35\",\n    networkConfig: {\n        serviceAddressCidrBlocks: [\"10.96.0.0/12\"],\n        podAddressCidrBlocks: [\"192.168.0.0/16\"],\n        dhcpIpConfig: {\n            enabled: true,\n        },\n    },\n    controlPlaneNode: {\n        cpus: 4,\n        memory: 8192,\n        replicas: 1,\n    },\n    loadBalancer: {\n        vipConfig: {\n            controlPlaneVip: \"10.251.133.5\",\n            ingressVip: \"10.251.135.19\",\n        },\n        metalLbConfig: {\n            addressPools: [\n                {\n                    pool: \"ingress-ip\",\n                    manualAssign: true,\n                    addresses: [\"10.251.135.19\"],\n                },\n                {\n                    pool: \"lb-test-ip\",\n                    manualAssign: true,\n                    addresses: [\"10.251.135.19\"],\n                },\n            ],\n        },\n    },\n});\nconst nodepool_basic = new gcp.gkeonprem.VMwareNodePool(\"nodepool-basic\", {\n    name: \"my-nodepool\",\n    location: \"us-west1\",\n    vmwareCluster: default_basic.name,\n    config: {\n        replicas: 3,\n        imageType: \"ubuntu_containerd\",\n        enableLoadBalancer: true,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_gcp as gcp\n\ndefault_basic = gcp.gkeonprem.VMwareCluster(\"default-basic\",\n    name=\"my-cluster\",\n    location=\"us-west1\",\n    admin_cluster_membership=\"projects/870316890899/locations/global/memberships/gkeonprem-terraform-test\",\n    description=\"test cluster\",\n    on_prem_version=\"1.13.1-gke.35\",\n    network_config=gcp.gkeonprem.VMwareClusterNetworkConfigArgs(\n        service_address_cidr_blocks=[\"10.96.0.0/12\"],\n        pod_address_cidr_blocks=[\"192.168.0.0/16\"],\n        dhcp_ip_config=gcp.gkeonprem.VMwareClusterNetworkConfigDhcpIpConfigArgs(\n            enabled=True,\n        ),\n    ),\n    control_plane_node=gcp.gkeonprem.VMwareClusterControlPlaneNodeArgs(\n        cpus=4,\n        memory=8192,\n        replicas=1,\n    ),\n    load_balancer=gcp.gkeonprem.VMwareClusterLoadBalancerArgs(\n        vip_config=gcp.gkeonprem.VMwareClusterLoadBalancerVipConfigArgs(\n            control_plane_vip=\"10.251.133.5\",\n            ingress_vip=\"10.251.135.19\",\n        ),\n        metal_lb_config=gcp.gkeonprem.VMwareClusterLoadBalancerMetalLbConfigArgs(\n            address_pools=[\n                gcp.gkeonprem.VMwareClusterLoadBalancerMetalLbConfigAddressPoolArgs(\n                    pool=\"ingress-ip\",\n                    manual_assign=True,\n                    addresses=[\"10.251.135.19\"],\n                ),\n                gcp.gkeonprem.VMwareClusterLoadBalancerMetalLbConfigAddressPoolArgs(\n                    pool=\"lb-test-ip\",\n                    manual_assign=True,\n                    addresses=[\"10.251.135.19\"],\n                ),\n            ],\n        ),\n    ))\nnodepool_basic = 
gcp.gkeonprem.VMwareNodePool(\"nodepool-basic\",\n    name=\"my-nodepool\",\n    location=\"us-west1\",\n    vmware_cluster=default_basic.name,\n    config=gcp.gkeonprem.VMwareNodePoolConfigArgs(\n        replicas=3,\n        image_type=\"ubuntu_containerd\",\n        enable_load_balancer=True,\n    ))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Gcp = Pulumi.Gcp;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var default_basic = new Gcp.GkeOnPrem.VMwareCluster(\"default-basic\", new()\n    {\n        Name = \"my-cluster\",\n        Location = \"us-west1\",\n        AdminClusterMembership = \"projects/870316890899/locations/global/memberships/gkeonprem-terraform-test\",\n        Description = \"test cluster\",\n        OnPremVersion = \"1.13.1-gke.35\",\n        NetworkConfig = new Gcp.GkeOnPrem.Inputs.VMwareClusterNetworkConfigArgs\n        {\n            ServiceAddressCidrBlocks = new[]\n            {\n                \"10.96.0.0/12\",\n            },\n            PodAddressCidrBlocks = new[]\n            {\n                \"192.168.0.0/16\",\n            },\n            DhcpIpConfig = new Gcp.GkeOnPrem.Inputs.VMwareClusterNetworkConfigDhcpIpConfigArgs\n            {\n                Enabled = true,\n            },\n        },\n        ControlPlaneNode = new Gcp.GkeOnPrem.Inputs.VMwareClusterControlPlaneNodeArgs\n        {\n            Cpus = 4,\n            Memory = 8192,\n            Replicas = 1,\n        },\n        LoadBalancer = new Gcp.GkeOnPrem.Inputs.VMwareClusterLoadBalancerArgs\n        {\n            VipConfig = new Gcp.GkeOnPrem.Inputs.VMwareClusterLoadBalancerVipConfigArgs\n            {\n                ControlPlaneVip = \"10.251.133.5\",\n                IngressVip = \"10.251.135.19\",\n            },\n            MetalLbConfig = new Gcp.GkeOnPrem.Inputs.VMwareClusterLoadBalancerMetalLbConfigArgs\n            {\n                AddressPools = new[]\n                {\n                    new Gcp.GkeOnPrem.Inputs.VMwareClusterLoadBalancerMetalLbConfigAddressPoolArgs\n                    {\n                        Pool = \"ingress-ip\",\n                        ManualAssign = true,\n                        Addresses = new[]\n                        {\n                            \"10.251.135.19\",\n                        },\n                    },\n                    new Gcp.GkeOnPrem.Inputs.VMwareClusterLoadBalancerMetalLbConfigAddressPoolArgs\n                    {\n                        Pool = \"lb-test-ip\",\n                        ManualAssign = true,\n                        Addresses = new[]\n                        {\n                            \"10.251.135.19\",\n                        },\n                    },\n                },\n            },\n        },\n    });\n\n    var nodepool_basic = new Gcp.GkeOnPrem.VMwareNodePool(\"nodepool-basic\", new()\n    {\n        Name = \"my-nodepool\",\n        Location = \"us-west1\",\n        VmwareCluster = default_basic.Name,\n        Config = new Gcp.GkeOnPrem.Inputs.VMwareNodePoolConfigArgs\n        {\n            Replicas = 3,\n            ImageType = \"ubuntu_containerd\",\n            EnableLoadBalancer = true,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-gcp/sdk/v7/go/gcp/gkeonprem\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := gkeonprem.NewVMwareCluster(ctx, \"default-basic\", 
\u0026gkeonprem.VMwareClusterArgs{\n\t\t\tName:                   pulumi.String(\"my-cluster\"),\n\t\t\tLocation:               pulumi.String(\"us-west1\"),\n\t\t\tAdminClusterMembership: pulumi.String(\"projects/870316890899/locations/global/memberships/gkeonprem-terraform-test\"),\n\t\t\tDescription:            pulumi.String(\"test cluster\"),\n\t\t\tOnPremVersion:          pulumi.String(\"1.13.1-gke.35\"),\n\t\t\tNetworkConfig: \u0026gkeonprem.VMwareClusterNetworkConfigArgs{\n\t\t\t\tServiceAddressCidrBlocks: pulumi.StringArray{\n\t\t\t\t\tpulumi.String(\"10.96.0.0/12\"),\n\t\t\t\t},\n\t\t\t\tPodAddressCidrBlocks: pulumi.StringArray{\n\t\t\t\t\tpulumi.String(\"192.168.0.0/16\"),\n\t\t\t\t},\n\t\t\t\tDhcpIpConfig: \u0026gkeonprem.VMwareClusterNetworkConfigDhcpIpConfigArgs{\n\t\t\t\t\tEnabled: pulumi.Bool(true),\n\t\t\t\t},\n\t\t\t},\n\t\t\tControlPlaneNode: \u0026gkeonprem.VMwareClusterControlPlaneNodeArgs{\n\t\t\t\tCpus:     pulumi.Int(4),\n\t\t\t\tMemory:   pulumi.Int(8192),\n\t\t\t\tReplicas: pulumi.Int(1),\n\t\t\t},\n\t\t\tLoadBalancer: \u0026gkeonprem.VMwareClusterLoadBalancerArgs{\n\t\t\t\tVipConfig: \u0026gkeonprem.VMwareClusterLoadBalancerVipConfigArgs{\n\t\t\t\t\tControlPlaneVip: pulumi.String(\"10.251.133.5\"),\n\t\t\t\t\tIngressVip:      pulumi.String(\"10.251.135.19\"),\n\t\t\t\t},\n\t\t\t\tMetalLbConfig: \u0026gkeonprem.VMwareClusterLoadBalancerMetalLbConfigArgs{\n\t\t\t\t\tAddressPools: gkeonprem.VMwareClusterLoadBalancerMetalLbConfigAddressPoolArray{\n\t\t\t\t\t\t\u0026gkeonprem.VMwareClusterLoadBalancerMetalLbConfigAddressPoolArgs{\n\t\t\t\t\t\t\tPool:         pulumi.String(\"ingress-ip\"),\n\t\t\t\t\t\t\tManualAssign: pulumi.Bool(true),\n\t\t\t\t\t\t\tAddresses: pulumi.StringArray{\n\t\t\t\t\t\t\t\tpulumi.String(\"10.251.135.19\"),\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\u0026gkeonprem.VMwareClusterLoadBalancerMetalLbConfigAddressPoolArgs{\n\t\t\t\t\t\t\tPool:         pulumi.String(\"lb-test-ip\"),\n\t\t\t\t\t\t\tManualAssign: pulumi.Bool(true),\n\t\t\t\t\t\t\tAddresses: pulumi.StringArray{\n\t\t\t\t\t\t\t\tpulumi.String(\"10.251.135.19\"),\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = gkeonprem.NewVMwareNodePool(ctx, \"nodepool-basic\", \u0026gkeonprem.VMwareNodePoolArgs{\n\t\t\tName:          pulumi.String(\"my-nodepool\"),\n\t\t\tLocation:      pulumi.String(\"us-west1\"),\n\t\t\tVmwareCluster: default_basic.Name,\n\t\t\tConfig: \u0026gkeonprem.VMwareNodePoolConfigArgs{\n\t\t\t\tReplicas:           pulumi.Int(3),\n\t\t\t\tImageType:          pulumi.String(\"ubuntu_containerd\"),\n\t\t\t\tEnableLoadBalancer: pulumi.Bool(true),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.gcp.gkeonprem.VMwareCluster;\nimport com.pulumi.gcp.gkeonprem.VMwareClusterArgs;\nimport com.pulumi.gcp.gkeonprem.inputs.VMwareClusterNetworkConfigArgs;\nimport com.pulumi.gcp.gkeonprem.inputs.VMwareClusterNetworkConfigDhcpIpConfigArgs;\nimport com.pulumi.gcp.gkeonprem.inputs.VMwareClusterControlPlaneNodeArgs;\nimport com.pulumi.gcp.gkeonprem.inputs.VMwareClusterLoadBalancerArgs;\nimport com.pulumi.gcp.gkeonprem.inputs.VMwareClusterLoadBalancerVipConfigArgs;\nimport com.pulumi.gcp.gkeonprem.inputs.VMwareClusterLoadBalancerMetalLbConfigArgs;\nimport com.pulumi.gcp.gkeonprem.VMwareNodePool;\nimport 
com.pulumi.gcp.gkeonprem.VMwareNodePoolArgs;\nimport com.pulumi.gcp.gkeonprem.inputs.VMwareNodePoolConfigArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var default_basic = new VMwareCluster(\"default-basic\", VMwareClusterArgs.builder()        \n            .name(\"my-cluster\")\n            .location(\"us-west1\")\n            .adminClusterMembership(\"projects/870316890899/locations/global/memberships/gkeonprem-terraform-test\")\n            .description(\"test cluster\")\n            .onPremVersion(\"1.13.1-gke.35\")\n            .networkConfig(VMwareClusterNetworkConfigArgs.builder()\n                .serviceAddressCidrBlocks(\"10.96.0.0/12\")\n                .podAddressCidrBlocks(\"192.168.0.0/16\")\n                .dhcpIpConfig(VMwareClusterNetworkConfigDhcpIpConfigArgs.builder()\n                    .enabled(true)\n                    .build())\n                .build())\n            .controlPlaneNode(VMwareClusterControlPlaneNodeArgs.builder()\n                .cpus(4)\n                .memory(8192)\n                .replicas(1)\n                .build())\n            .loadBalancer(VMwareClusterLoadBalancerArgs.builder()\n                .vipConfig(VMwareClusterLoadBalancerVipConfigArgs.builder()\n                    .controlPlaneVip(\"10.251.133.5\")\n                    .ingressVip(\"10.251.135.19\")\n                    .build())\n                .metalLbConfig(VMwareClusterLoadBalancerMetalLbConfigArgs.builder()\n                    .addressPools(                    \n                        VMwareClusterLoadBalancerMetalLbConfigAddressPoolArgs.builder()\n                            .pool(\"ingress-ip\")\n                            .manualAssign(\"true\")\n                            .addresses(\"10.251.135.19\")\n                            .build(),\n                        VMwareClusterLoadBalancerMetalLbConfigAddressPoolArgs.builder()\n                            .pool(\"lb-test-ip\")\n                            .manualAssign(\"true\")\n                            .addresses(\"10.251.135.19\")\n                            .build())\n                    .build())\n                .build())\n            .build());\n\n        var nodepool_basic = new VMwareNodePool(\"nodepool-basic\", VMwareNodePoolArgs.builder()        \n            .name(\"my-nodepool\")\n            .location(\"us-west1\")\n            .vmwareCluster(default_basic.name())\n            .config(VMwareNodePoolConfigArgs.builder()\n                .replicas(3)\n                .imageType(\"ubuntu_containerd\")\n                .enableLoadBalancer(true)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  default-basic:\n    type: gcp:gkeonprem:VMwareCluster\n    properties:\n      name: my-cluster\n      location: us-west1\n      adminClusterMembership: projects/870316890899/locations/global/memberships/gkeonprem-terraform-test\n      description: test cluster\n      onPremVersion: 1.13.1-gke.35\n      networkConfig:\n        serviceAddressCidrBlocks:\n          - 10.96.0.0/12\n        podAddressCidrBlocks:\n          - 192.168.0.0/16\n        dhcpIpConfig:\n          enabled: true\n      controlPlaneNode:\n        cpus: 4\n        memory: 8192\n        replicas: 1\n      loadBalancer:\n        
vipConfig:\n          controlPlaneVip: 10.251.133.5\n          ingressVip: 10.251.135.19\n        metalLbConfig:\n          addressPools:\n            - pool: ingress-ip\n              manualAssign: 'true'\n              addresses:\n                - 10.251.135.19\n            - pool: lb-test-ip\n              manualAssign: 'true'\n              addresses:\n                - 10.251.135.19\n  nodepool-basic:\n    type: gcp:gkeonprem:VMwareNodePool\n    properties:\n      name: my-nodepool\n      location: us-west1\n      vmwareCluster: ${[\"default-basic\"].name}\n      config:\n        replicas: 3\n        imageType: ubuntu_containerd\n        enableLoadBalancer: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n### Gkeonprem Vmware Node Pool Full\n\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.gcp.gkeonprem.VMwareCluster;\nimport com.pulumi.gcp.gkeonprem.VMwareClusterArgs;\nimport com.pulumi.gcp.gkeonprem.inputs.VMwareClusterNetworkConfigArgs;\nimport com.pulumi.gcp.gkeonprem.inputs.VMwareClusterNetworkConfigDhcpIpConfigArgs;\nimport com.pulumi.gcp.gkeonprem.inputs.VMwareClusterControlPlaneNodeArgs;\nimport com.pulumi.gcp.gkeonprem.inputs.VMwareClusterLoadBalancerArgs;\nimport com.pulumi.gcp.gkeonprem.inputs.VMwareClusterLoadBalancerVipConfigArgs;\nimport com.pulumi.gcp.gkeonprem.inputs.VMwareClusterLoadBalancerMetalLbConfigArgs;\nimport com.pulumi.gcp.gkeonprem.VMwareNodePool;\nimport com.pulumi.gcp.gkeonprem.VMwareNodePoolArgs;\nimport com.pulumi.gcp.gkeonprem.inputs.VMwareNodePoolConfigArgs;\nimport com.pulumi.gcp.gkeonprem.inputs.VMwareNodePoolNodePoolAutoscalingArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var default_full = new VMwareCluster(\"default-full\", VMwareClusterArgs.builder()        \n            .name(\"my-cluster\")\n            .location(\"us-west1\")\n            .adminClusterMembership(\"projects/870316890899/locations/global/memberships/gkeonprem-terraform-test\")\n            .description(\"test cluster\")\n            .onPremVersion(\"1.13.1-gke.35\")\n            .networkConfig(VMwareClusterNetworkConfigArgs.builder()\n                .serviceAddressCidrBlocks(\"10.96.0.0/12\")\n                .podAddressCidrBlocks(\"192.168.0.0/16\")\n                .dhcpIpConfig(VMwareClusterNetworkConfigDhcpIpConfigArgs.builder()\n                    .enabled(true)\n                    .build())\n                .build())\n            .controlPlaneNode(VMwareClusterControlPlaneNodeArgs.builder()\n                .cpus(4)\n                .memory(8192)\n                .replicas(1)\n                .build())\n            .loadBalancer(VMwareClusterLoadBalancerArgs.builder()\n                .vipConfig(VMwareClusterLoadBalancerVipConfigArgs.builder()\n                    .controlPlaneVip(\"10.251.133.5\")\n                    .ingressVip(\"10.251.135.19\")\n                    .build())\n                .metalLbConfig(VMwareClusterLoadBalancerMetalLbConfigArgs.builder()\n                    .addressPools(                    \n                        VMwareClusterLoadBalancerMetalLbConfigAddressPoolArgs.builder()\n                            
.pool(\"ingress-ip\")\n                            .manualAssign(\"true\")\n                            .addresses(\"10.251.135.19\")\n                            .build(),\n                        VMwareClusterLoadBalancerMetalLbConfigAddressPoolArgs.builder()\n                            .pool(\"lb-test-ip\")\n                            .manualAssign(\"true\")\n                            .addresses(\"10.251.135.19\")\n                            .build())\n                    .build())\n                .build())\n            .build());\n\n        var nodepool_full = new VMwareNodePool(\"nodepool-full\", VMwareNodePoolArgs.builder()        \n            .name(\"my-nodepool\")\n            .location(\"us-west1\")\n            .vmwareCluster(default_full.name())\n            .annotations()\n            .config(VMwareNodePoolConfigArgs.builder()\n                .cpus(4)\n                .memoryMb(8196)\n                .replicas(3)\n                .imageType(\"ubuntu_containerd\")\n                .image(\"image\")\n                .bootDiskSizeGb(10)\n                .taints(                \n                    VMwareNodePoolConfigTaintArgs.builder()\n                        .key(\"key\")\n                        .value(\"value\")\n                        .build(),\n                    VMwareNodePoolConfigTaintArgs.builder()\n                        .key(\"key\")\n                        .value(\"value\")\n                        .effect(\"NO_SCHEDULE\")\n                        .build())\n                .labels()\n                .vsphereConfig(%!v(PANIC=Format method: runtime error: invalid memory address or nil pointer dereference))\n                .enableLoadBalancer(true)\n                .build())\n            .nodePoolAutoscaling(VMwareNodePoolNodePoolAutoscalingArgs.builder()\n                .minReplicas(1)\n                .maxReplicas(5)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  default-full:\n    type: gcp:gkeonprem:VMwareCluster\n    properties:\n      name: my-cluster\n      location: us-west1\n      adminClusterMembership: projects/870316890899/locations/global/memberships/gkeonprem-terraform-test\n      description: test cluster\n      onPremVersion: 1.13.1-gke.35\n      networkConfig:\n        serviceAddressCidrBlocks:\n          - 10.96.0.0/12\n        podAddressCidrBlocks:\n          - 192.168.0.0/16\n        dhcpIpConfig:\n          enabled: true\n      controlPlaneNode:\n        cpus: 4\n        memory: 8192\n        replicas: 1\n      loadBalancer:\n        vipConfig:\n          controlPlaneVip: 10.251.133.5\n          ingressVip: 10.251.135.19\n        metalLbConfig:\n          addressPools:\n            - pool: ingress-ip\n              manualAssign: 'true'\n              addresses:\n                - 10.251.135.19\n            - pool: lb-test-ip\n              manualAssign: 'true'\n              addresses:\n                - 10.251.135.19\n  nodepool-full:\n    type: gcp:gkeonprem:VMwareNodePool\n    properties:\n      name: my-nodepool\n      location: us-west1\n      vmwareCluster: ${[\"default-full\"].name}\n      annotations: {}\n      config:\n        cpus: 4\n        memoryMb: 8196\n        replicas: 3\n        imageType: ubuntu_containerd\n        image: image\n        bootDiskSizeGb: 10\n        taints:\n          - key: key\n            value: value\n          - key: key\n            value: value\n            effect: NO_SCHEDULE\n        labels: {}\n        vsphereConfig:\n          datastore: 
test-datastore\n          tags:\n            - category: test-category-1\n              tag: tag-1\n            - category: test-category-2\n              tag: tag-2\n          hostGroups:\n            - host1\n            - host2\n        enableLoadBalancer: true\n      nodePoolAutoscaling:\n        minReplicas: 1\n        maxReplicas: 5\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Import\n\nVmwareNodePool can be imported using any of these accepted formats:\n\n* `projects/{{project}}/locations/{{location}}/vmwareClusters/{{vmware_cluster}}/vmwareNodePools/{{name}}`\n\n* `{{project}}/{{location}}/{{vmware_cluster}}/{{name}}`\n\n* `{{location}}/{{vmware_cluster}}/{{name}}`\n\nWhen using the `pulumi import` command, VmwareNodePool can be imported using one of the formats above. For example:\n\n```sh\n$ pulumi import gcp:gkeonprem/vMwareNodePool:VMwareNodePool default projects/{{project}}/locations/{{location}}/vmwareClusters/{{vmware_cluster}}/vmwareNodePools/{{name}}\n```\n\n```sh\n$ pulumi import gcp:gkeonprem/vMwareNodePool:VMwareNodePool default {{project}}/{{location}}/{{vmware_cluster}}/{{name}}\n```\n\n```sh\n$ pulumi import gcp:gkeonprem/vMwareNodePool:VMwareNodePool default {{location}}/{{vmware_cluster}}/{{name}}\n```\n\n",
provider/cmd/pulumi-resource-gcp/schema.json:            "description": "Get OpenID userinfo about the credentials used with the Google provider,\nspecifically the email.\n\nThis datasource enables you to export the email of the account you've\nauthenticated the provider with; this can be used alongside\n`data.google_client_config`'s `access_token` to perform OpenID Connect\nauthentication with GKE and configure an RBAC role for the email used.\n\n\u003e This resource will only work as expected if the provider is configured to\nuse the `https://www.googleapis.com/auth/userinfo.email` scope! You will\nreceive an error otherwise. The provider uses this scope by default.\n\n## Example Usage\n\n### Exporting An Email\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as gcp from \"@pulumi/gcp\";\n\nexport = async () =\u003e {\n    const me = await gcp.organizations.getClientOpenIdUserInfo({});\n    return {\n        \"my-email\": me.email,\n    };\n}\n```\n```python\nimport pulumi\nimport pulumi_gcp as gcp\n\nme = gcp.organizations.get_client_open_id_user_info()\npulumi.export(\"my-email\", me.email)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Gcp = Pulumi.Gcp;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var me = Gcp.Organizations.GetClientOpenIdUserInfo.Invoke();\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"my-email\"] = me.Apply(getClientOpenIdUserInfoResult =\u003e getClientOpenIdUserInfoResult.Email),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-gcp/sdk/v7/go/gcp/organizations\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tme, err := organizations.GetClientOpenIdUserInfo(ctx, nil, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"my-email\", me.Email)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.gcp.organizations.OrganizationsFunctions;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var me = OrganizationsFunctions.getClientOpenIdUserInfo();\n\n        ctx.export(\"my-email\", me.applyValue(getClientOpenIdUserInfoResult -\u003e getClientOpenIdUserInfoResult.email()));\n    }\n}\n```\n```yaml\nvariables:\n  me:\n    fn::invoke:\n      Function: gcp:organizations:getClientOpenIdUserInfo\n      Arguments: {}\noutputs:\n  my-email: ${me.email}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### OpenID Connect W/ Kubernetes Provider + RBAC IAM Role\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.gcp.organizations.OrganizationsFunctions;\nimport com.pulumi.gcp.container.ContainerFunctions;\nimport com.pulumi.gcp.container.inputs.GetClusterArgs;\nimport com.pulumi.kubernetes.rbac.authorization.k8s.io_v1.ClusterRoleBinding;\nimport com.pulumi.kubernetes.rbac.authorization.k8s.io_v1.ClusterRoleBindingArgs;\nimport com.pulumi.kubernetes.meta_v1.inputs.ObjectMetaArgs;\nimport 
com.pulumi.kubernetes.rbac.authorization.k8s.io_v1.inputs.RoleRefArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var providerIdentity = OrganizationsFunctions.getClientOpenIdUserInfo();\n\n        final var provider = OrganizationsFunctions.getClientConfig();\n\n        final var myCluster = ContainerFunctions.getCluster(GetClusterArgs.builder()\n            .name(\"my-cluster\")\n            .zone(\"us-east1-a\")\n            .build());\n\n        var user = new ClusterRoleBinding(\"user\", ClusterRoleBindingArgs.builder()        \n            .metadata(ObjectMetaArgs.builder()\n                .name(\"provider-user-admin\")\n                .build())\n            .roleRef(RoleRefArgs.builder()\n                .apiGroup(\"rbac.authorization.k8s.io\")\n                .kind(\"ClusterRole\")\n                .name(\"cluster-admin\")\n                .build())\n            .subject(%!v(PANIC=Format method: runtime error: invalid memory address or nil pointer dereference))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  user:\n    type: kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding\n    properties:\n      metadata:\n        name: provider-user-admin\n      roleRef:\n        apiGroup: rbac.authorization.k8s.io\n        kind: ClusterRole\n        name: cluster-admin\n      subject:\n        - kind: User\n          name: ${providerIdentity.email}\nvariables:\n  providerIdentity:\n    fn::invoke:\n      Function: gcp:organizations:getClientOpenIdUserInfo\n      Arguments: {}\n  provider:\n    fn::invoke:\n      Function: gcp:organizations:getClientConfig\n      Arguments: {}\n  myCluster:\n    fn::invoke:\n      Function: gcp:container:getCluster\n      Arguments:\n        name: my-cluster\n        zone: us-east1-a\n```\n\u003c!--End PulumiCodeChooser --\u003e\n",
sdk/java/src/main/java/com/pulumi/gcp/gkeonprem/VMwareNodePool.java: *                 .vsphereConfig(%!v(PANIC=Format method: runtime error: invalid memory address or nil pointer dereference))
sdk/java/src/main/java/com/pulumi/gcp/organizations/OrganizationsFunctions.java:     *             .subject(%!v(PANIC=Format method: runtime error: invalid memory address or nil pointer dereference))
sdk/java/src/main/java/com/pulumi/gcp/organizations/OrganizationsFunctions.java:     *             .subject(%!v(PANIC=Format method: runtime error: invalid memory address or nil pointer dereference))
sdk/java/src/main/java/com/pulumi/gcp/organizations/OrganizationsFunctions.java:     *             .subject(%!v(PANIC=Format method: runtime error: invalid memory address or nil pointer dereference))
sdk/java/src/main/java/com/pulumi/gcp/organizations/OrganizationsFunctions.java:     *             .subject(%!v(PANIC=Format method: runtime error: invalid memory address or nil pointer dereference))
sdk/java/src/main/java/com/pulumi/gcp/organizations/OrganizationsFunctions.java:     *             .subject(%!v(PANIC=Format method: runtime error: invalid memory address or nil pointer dereference))
sdk/java/src/main/java/com/pulumi/gcp/organizations/OrganizationsFunctions.java:     *             .subject(%!v(PANIC=Format method: runtime error: invalid memory address or nil pointer dereference))
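For completeness, the same `git grep` used above could be turned into a guard so the panics do not silently reappear after the dependency bump. A minimal sketch of such a check as a GitHub Actions step (hypothetical; not something this issue says the repo's workflows currently do):

```yaml
# Hypothetical CI step: fail the build if generated artifacts contain a rendered PANIC.
- name: Check generated docs for rendered PANIC
  run: |
    if git grep -n "PANIC=Format method" -- provider/cmd/pulumi-resource-gcp/schema.json sdk/; then
      echo "Found rendered PANIC in generated docs" >&2
      exit 1
    fi
```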

Output of pulumi about

N/A

Additional context

N/A

Contributing

Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

t0yv0 commented 6 months ago

AWS example: https://github.com/pulumi/pulumi-aws/issues/3885 - this can be non-trivial to eliminate completely.

guineveresaenger commented 6 months ago

Thank you for reporting and for linking a likely fix!