Open antifuchs opened 2 years ago
This same thing happens with `helm.Chart` and has been a major driver for getting us onto `helm.Release`, even though the latter is in beta and warns loudly that one should not use it in production. If only we had the option!
helm.Release is now GA: https://www.pulumi.com/blog/helm-release-resource-for-kubernetes-generally-available/
Thanks for the update - glad that Release is GA now, but it doesn't fully fix our issue: many systems encourage users to install their CRDs outside the helm chart to make upgrades smoother (like the cert-manager chart above), and that puts us between a rock and a hard place.
HOWEVER. I found a work-around, which I'm not super happy with (it relies on the intricacies of pulumi URNs and component resources). So far, it's the only thing that let me add a `yaml.ConfigFile` to more than one cluster in a single pulumi code base:
```python
from typing import Optional

import pulumi


class AWSRegion(util.MzComponentResource):
    """A pseudo-parent that yields unique-per-region URNs for
    kubernetes yaml.ConfigFile and helm.Chart resources."""

    def __init__(
        self,
        name: str,
        region: str,
        opts: Optional[pulumi.ResourceOptions] = None,
    ) -> None:
        # Bake the region into the component's *type* token, so every
        # descendant resource's URN includes it:
        super().__init__(f"project:index:AWSRegion_{region}", name, None, opts)
        self.register_outputs({"initialized": True})
```
You instantiate one per region:

```python
parent_east = AWSRegion(name="east", region="us-east-1")
parent_west = AWSRegion(name="west", region="eu-west-1")
```
Then, you use this like so:

```python
k8s.yaml.ConfigFile(
    "cert-manager-crds-east",
    file=f"https://github.com/jetstack/cert-manager/releases/download/{cert_manager_version}/cert-manager.crds.yaml",
    opts=pulumi.ResourceOptions(
        provider=provider_east,
        parent=parent_east,
    ),
)
```
...and get to add one CRD definition per kubernetes cluster.
The reason this works is the `f"project:index:AWSRegion_{region}"` string above: it makes the "type" of the component resource contain the region, which means every "child" (and grand-child, etc.) resource includes the region name in its URN, automatically making them unique. We wouldn't even need to put the region name in the ConfigFile's resource name, but eh - it's easier to identify that way.
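To illustrate why the region-bearing type avoids collisions, here's a rough sketch of how a child resource's URN gets composed. The exact format is a Pulumi implementation detail, and `child_urn` is a hypothetical helper written for this illustration, not a Pulumi API:

```python
def child_urn(stack: str, project: str, parent_type: str,
              child_type: str, name: str) -> str:
    """Simplified sketch of Pulumi URN composition (assumption:
    urn:pulumi:<stack>::<project>::<parentType>$<childType>::<name>)."""
    return f"urn:pulumi:{stack}::{project}::{parent_type}${child_type}::{name}"


# The same child resource name under two region-specific parents
# still yields two distinct URNs:
east = child_urn("dev", "proj", "project:index:AWSRegion_us-east-1",
                 "kubernetes:yaml:ConfigFile", "cert-manager-crds")
west = child_urn("dev", "proj", "project:index:AWSRegion_eu-west-1",
                 "kubernetes:yaml:ConfigFile", "cert-manager-crds")
assert east != west
```

Without the region in the parent's type token, both URNs would be identical, which is exactly the collision described above.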
Hello!
Issue details
It's impossible to use `pulumi_kubernetes.yaml.ConfigFile` in multiple k8s clusters with the same state/config. Without passing `resource_prefix`, a single config can get imported - any other configuration will run into URN collisions (no matter how distinct one makes the ConfigFile's resource name). If you do pass `resource_prefix` however, something worse happens: Pulumi not only prefixes its own URNs, but also changes the k8s metadata names, which - again - leads to configuration that doesn't apply.
Steps to reproduce
With two kubernetes providers, `east_provider` and `west_provider`:
```python
k8s.yaml.ConfigFile(
    "cert-manager-crds-east",
    file=f"https://github.com/jetstack/cert-manager/releases/download/{cert_manager_version}/cert-manager.crds.yaml",
    resource_prefix="east",
    opts=pulumi.ResourceOptions(provider=east_provider),
)
k8s.yaml.ConfigFile(
    "cert-manager-crds-west",
    file=f"https://github.com/jetstack/cert-manager/releases/download/{cert_manager_version}/cert-manager.crds.yaml",
    resource_prefix="west",
    opts=pulumi.ResourceOptions(provider=west_provider),
)
```
Expected: one of these two methods applies the correct definition, with k8s metadata names that I can use. Actual: the above.