@Frassle I'd love your thoughts on this...
Even though these were just warnings and the import itself succeeded, I had to remove a bunch of conflicting fields before `pulumi up` would work (it was failing with an error).
I still have no idea how the cluster was actually configured on Google Cloud. Which of the two conflicting settings did it use? Pulumi said it couldn't use both, but both are exactly what it imported.
@stack72 This will be based on whatever the `Read` method returned from the provider. The engine's data flow is very simple: call `Read`, run the result through `Check`, warn about any check failures, but then save the state and code as returned. I'd try making a GKE cluster and then seeing what the provider's `Read` returns for it.
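As a rough sketch of that flow (hypothetical types and method names, not the actual engine code):

```typescript
// Conceptual sketch of the import data flow described above -- not the real
// engine implementation. Provider, read, and check are stand-ins.
interface Provider {
    // Reads the live resource's inputs and state from the cloud.
    read(id: string): Promise<{ inputs: Record<string, unknown>; state: Record<string, unknown> }>;
    // Validates inputs and reports failures (e.g. conflicting fields).
    check(inputs: Record<string, unknown>): Promise<{ failures: string[] }>;
}

async function importResource(provider: Provider, id: string) {
    // 1. Call Read to get whatever the provider reports for the resource.
    const { inputs, state } = await provider.read(id);

    // 2. Run the result through Check.
    const { failures } = await provider.check(inputs);

    // 3. Check failures are surfaced only as warnings...
    for (const failure of failures) {
        console.warn(`warning: ${failure}`);
    }

    // 4. ...and the state and code are saved as returned, which is why
    //    mutually exclusive fields can both end up in the import.
    return { inputs, state };
}
```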
I also get these conflicts when importing an existing Google GKE cluster.
```
pulumi import gcp:container/cluster:Cluster my-cluster com-my-dev-760a2504/us-central1/com-my-us-gke-dev
```
Then I execute `pulumi up` with my code:

```
error: gcp:container/cluster:Cluster resource 'my-cluster' has a problem: Conflicting configuration arguments: "logging_service": conflicts with cluster_telemetry. Examine values at 'Cluster.LoggingService'.
error: gcp:container/cluster:Cluster resource 'my-cluster' has a problem: Conflicting configuration arguments: "ip_allocation_policy": conflicts with cluster_ipv4_cidr. Examine values at 'Cluster.IpAllocationPolicy'.
error: gcp:container/cluster:Cluster resource 'my-cluster' has a problem: Conflicting configuration arguments: "monitoring_service": conflicts with cluster_telemetry. Examine values at 'Cluster.MonitoringService'.
error: gcp:container/cluster:Cluster resource 'my-cluster' has a problem: Conflicting configuration arguments: "cluster_ipv4_cidr": conflicts with ip_allocation_policy. Examine values at 'Cluster.ClusterIpv4Cidr'.
```
The only way to resolve these errors is to edit the imported YAML to address the conflicts. For instance, `ipAllocationPolicy` has conflicting entries, so I remove the CIDR blocks from it:
```yaml
clusterIpv4CidrBlock: 10.162.0.0/18
clusterSecondaryRangeName: gke-com-my-us-gke-dev-pods-5c803d01
servicesIpv4CidrBlock: 10.162.96.0/19
servicesSecondaryRangeName: gke-com-my-us-gke-dev-services-5c803d01
```
Then I have to remove:

```yaml
monitoringService: monitoring.googleapis.com/kubernetes
loggingService: logging.googleapis.com/kubernetes
```
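For reference, this is roughly what the edited block ends up looking like (a sketch assuming Pulumi YAML, keeping the secondary range names and dropping only the CIDR blocks):

```yaml
# Edited import: keep the secondary range names, drop the explicit CIDR
# blocks that conflict with clusterIpv4Cidr.
ipAllocationPolicy:
  clusterSecondaryRangeName: gke-com-my-us-gke-dev-pods-5c803d01
  servicesSecondaryRangeName: gke-com-my-us-gke-dev-services-5c803d01
```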
This fixes the conflicts, but it causes state drift against the remote API, as indicated below:
```
     Type                      Name        Plan  Info
     pulumi:pulumi:Stack       k8s-us-dev
     └─ gcp:container:Cluster  my-cluster        [diff: +enableKubernetesAlpha,enableL4IlbSubsetting,enableLegacyAbac,enableShieldedNodes-clusterIpv4Cidr,loggingService,monitoringService,project~__defaults,ipAllocationPol
```
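(The full diff behind the truncated `[diff: ...]` line can be inspected with `pulumi preview --diff`.)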
This issue needs ownership.
I used @jondkelley's fixes and no longer see state drift against the remote API; however, it still seems odd that I have to manually edit an import of the actual state of the GCP cluster. How can the current state be in conflict with itself...?
Given this cluster:
```typescript
import * as gcp from "@pulumi/gcp";

const cluster = new gcp.container.Cluster("primary", {
    name: "my-gke-cluster",
    location: "us-central1",
    removeDefaultNodePool: true,
    initialNodeCount: 1,
});

export const clusterId = cluster.id;
```
I can now do an import without warnings:
```
Importing (dev2)

View in Browser (Ctrl+O): https://app.pulumi.com/anton-pulumi-corp/pulumi-gcp-844/dev2/updates/2

     Type                      Name                 Status
     pulumi:pulumi:Stack       pulumi-gcp-844-dev2
 =   └─ gcp:container:Cluster  c2                   imported (0.95s)

Outputs:
    clusterId: "projects/pulumi-development/locations/us-central1/clusters/my-gke-cluster"

Resources:
    = 1 imported
    2 unchanged

Duration: 2s

Please copy the following code into your Pulumi application. Not doing so
will cause Pulumi to report that an update will happen on the next update command.

Please note that the imported resources are marked as protected. To destroy them
you will need to remove the `protect` option and run `pulumi update` *before*
the destroy will take effect.
```
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

const c2 = new gcp.container.Cluster("c2", {
    addonsConfig: {
        gcePersistentDiskCsiDriverConfig: {
            enabled: true,
        },
        networkPolicyConfig: {
            disabled: true,
        },
    },
    clusterIpv4Cidr: "10.80.0.0/14",
    clusterTelemetry: {
        type: "ENABLED",
    },
    databaseEncryption: {
        state: "DECRYPTED",
    },
    defaultMaxPodsPerNode: 110,
    defaultSnatStatus: {
        disabled: false,
    },
    initialNodeCount: 1,
    location: "us-central1",
    loggingConfig: {
        enableComponents: [
            "SYSTEM_COMPONENTS",
            "WORKLOADS",
        ],
    },
    masterAuth: {
        clientCertificateConfig: {
            issueClientCertificate: false,
        },
    },
    monitoringConfig: {
        advancedDatapathObservabilityConfigs: [{
            enableMetrics: false,
            enableRelay: false,
        }],
        enableComponents: ["SYSTEM_COMPONENTS"],
        managedPrometheus: {
            enabled: true,
        },
    },
    name: "my-gke-cluster",
    network: "projects/pulumi-development/global/networks/default",
    networkPolicy: {
        enabled: false,
        provider: "PROVIDER_UNSPECIFIED",
    },
    networkingMode: "VPC_NATIVE",
    nodeLocations: [
        "us-central1-b",
        "us-central1-c",
        "us-central1-a",
    ],
    nodePoolDefaults: {
        nodeConfigDefaults: {
            loggingVariant: "DEFAULT",
        },
    },
    nodeVersion: "1.29.4-gke.1043002",
    notificationConfig: {
        pubsub: {
            enabled: false,
        },
    },
    podSecurityPolicyConfig: {
        enabled: false,
    },
    privateClusterConfig: {
        masterGlobalAccessConfig: {
            enabled: false,
        },
    },
    project: "pulumi-development",
    protectConfig: {
        workloadConfig: {
            auditMode: "BASIC",
        },
        workloadVulnerabilityMode: "WORKLOAD_VULNERABILITY_MODE_UNSPECIFIED",
    },
    releaseChannel: {
        channel: "REGULAR",
    },
    securityPostureConfig: {
        mode: "BASIC",
        vulnerabilityMode: "VULNERABILITY_MODE_UNSPECIFIED",
    },
    serviceExternalIpsConfig: {
        enabled: false,
    },
subnetwork: "projects/pulumi-development/regions/us-central1/subnetworks/def}, {
protect: true,
});
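As the output above notes, the imported resource is registered with `protect: true`. To destroy it later, either delete the `protect` option from the code and run `pulumi up` first, or clear the flag directly in state using its URN (also visible in the resource list below):

```
pulumi state unprotect 'urn:pulumi:dev2::pulumi-gcp-844::gcp:container/cluster:Cluster::c2'
```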
The warning-free import above is accomplished by dropping conflicting properties in pulumi-terraform-bridge during import. The dropout logic is not very intelligent, but it attempts to resolve the conflicts.
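Conceptually, the dropout works something like this sketch (hypothetical names and an arbitrary choice of which side survives; the real logic lives in pulumi-terraform-bridge and is driven by the upstream schema's conflict metadata):

```typescript
// Hypothetical sketch of dropping conflicting properties during import --
// not the actual pulumi-terraform-bridge implementation. Pairs reflect the
// conflicts observed in this issue, as [keep, drop].
const conflictingPairs: [string, string][] = [
    ["clusterTelemetry", "loggingService"],
    ["clusterTelemetry", "monitoringService"],
    ["clusterIpv4Cidr", "ipAllocationPolicy"],
];

function dropConflicts(inputs: Record<string, unknown>): Record<string, unknown> {
    const result = { ...inputs };
    for (const [keep, drop] of conflictingPairs) {
        // When Read returns both sides of a mutually exclusive pair, keep
        // one and drop the other so Check no longer flags a conflict.
        if (result[keep] !== undefined && result[drop] !== undefined) {
            delete result[drop];
        }
    }
    return result;
}
```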
Versions:

```
CLI
Version      3.117.0
Go Version   go1.22.3
Go Compiler  gc

Plugins
KIND      NAME    VERSION
resource  gcp     7.26.0
language  nodejs  unknown

Host
OS       darwin
Version  14.5
Arch     arm64

This project is written in nodejs: executable='/Users/anton/bin/node' version='v18.18.2'

Current Stack: anton-pulumi-corp/pulumi-gcp-844/dev2

TYPE                           URN
pulumi:pulumi:Stack            urn:pulumi:dev2::pulumi-gcp-844::pulumi:pulumi:Stack::pulumi-gcp-844-dev2
pulumi:providers:gcp           urn:pulumi:dev2::pulumi-gcp-844::pulumi:providers:gcp::default_7_26_0
gcp:container/cluster:Cluster  urn:pulumi:dev2::pulumi-gcp-844::gcp:container/cluster:Cluster::primary
gcp:container/cluster:Cluster  urn:pulumi:dev2::pulumi-gcp-844::gcp:container/cluster:Cluster::c2

Found no pending operations associated with dev2

Backend
Name           pulumi.com
URL            https://app.pulumi.com/anton-pulumi-corp
User           anton-pulumi-corp
Organizations  anton-pulumi-corp, moolumi, pulumi
Token type     personal

Dependencies:
NAME            VERSION
@pulumi/pulumi  3.120.0
@types/node     18.19.34
typescript      5.4.5
@pulumi/gcp     7.26.0

Pulumi locates its logs in /var/folders/gd/3ncjb1lj5ljgk8xl5ssn_gvc0000gn/T/com.apple.shortcuts.mac-helper// by default
```
I will close this as fixed, but please feel free to open another issue if something is not working as expected.
What happened?
Pulumi outputs warnings when importing a freshly created GKE cluster from the Google Cloud console.
Steps to reproduce
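Roughly, following the commands discussed above (project, location, and cluster name are placeholders):

1. Create a GKE cluster through the Google Cloud console.
2. Import it:

```
pulumi import gcp:container/cluster:Cluster my-cluster <project>/<location>/<cluster-name>
```

3. Observe the conflicting-configuration warnings in the import output.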
Expected Behavior
No warnings
Actual Behavior
Versions used
Additional context
No response