Open juicemia opened 2 years ago
Just an update: I tried running it again today, and this time I couldn't delete the `IstioMesh` even before upgrading, so I doubt `delete` ever worked at all.

Another update: I tried renaming the `version` input (to `foo`) just to see if I could isolate the issue to that, and sure enough, it seems like that's the problem. Changed the code to the following:
```python
import binascii
import os
import subprocess

from pulumi import Input, Output
from pulumi.dynamic import (
    CreateResult,
    DiffResult,
    Resource,
    ResourceProvider,
    UpdateResult,
)


class IstioMeshProvider(ResourceProvider):
    def create(self, props):
        version = props['foo']
        revision = version.replace('.', '-')
        # capture_output/text are needed so proc.stdout/proc.stderr below
        # are strings instead of None
        proc = subprocess.run(
            [f"istioctl-{version}", 'install', '-y', '--set', f"revision={revision}"],
            capture_output=True, text=True,
        )
        if proc.returncode != 0:
            raise Exception(f"Unable to install Istio service mesh.\n\nSTDOUT:\n\n{proc.stdout}\n\nSTDERR:\n\n{proc.stderr}")
        return CreateResult("istioctl-" + binascii.b2a_hex(os.urandom(16)).decode("utf-8"), outs={
            version: version,
            revision: revision
        })

    def delete(self, id, props):
        version = props['foo']
        revision = version.replace('.', '-')
        proc = subprocess.run(
            [f"istioctl-{version}", 'x', 'uninstall', '-y', '--revision', revision],
            capture_output=True, text=True,
        )
        if proc.returncode != 0:
            raise Exception(f"Unable to uninstall Istio service mesh.\n\nSTDOUT:\n\n{proc.stdout}\n\nSTDERR:\n\n{proc.stderr}")

    def diff(self, id, old_inputs, new_inputs):
        return DiffResult(
            changes=old_inputs != new_inputs,
            # If we want a completely automated upgrade of Istio to a new revision,
            # including cleaning up the old revision by deleting it completely, we
            # should put `version` in `replaces` so that Pulumi knows to run the
            # provider's `delete` implementation. However, a completely automated
            # process can be dangerous because no human operator verifies that the
            # new mesh is actually working. If we instead just want to install a
            # new revision and manually roll the deployments over once it is
            # verified to be working, we can do this: this `DiffResult` tells
            # Pulumi to run `update`, which installs the new Istio revision and
            # stops there.
            replaces=[],
            stables=None,
            delete_before_replace=False
        )

    def update(self, id, old_inputs, new_inputs):
        version = new_inputs['foo']
        revision = version.replace('.', '-')
        proc = subprocess.run(
            [f"istioctl-{version}", 'install', '-y', '--set', f"revision={revision}"],
            capture_output=True, text=True,
        )
        if proc.returncode != 0:
            raise Exception(f"Unable to install Istio service mesh.\n\nSTDOUT:\n\n{proc.stdout}\n\nSTDERR:\n\n{proc.stderr}")
        return UpdateResult(
            outs={
                version: version,
                revision: revision
            }
        )


class IstioMeshArgs(object):
    # TODO: add args for operator manifest, kubeconfig
    foo: Input[str]

    def __init__(self, foo):
        self.foo = foo


class IstioMesh(Resource):
    version: Output[str]
    revision: Output[str]

    def __init__(self, name: str, args: IstioMeshArgs, opts=None):
        full_args = {**vars(args)}
        super().__init__(IstioMeshProvider(), name, full_args, opts)
```
That got me this error:
```
Diagnostics:
  pulumi-python:dynamic:Resource (mesh):
    error: Exception calling application: 'foo'

  pulumi:pulumi:Stack (quickstart-staging):
    E0705 16:33:23.975474000 6119305216 fork_posix.cc:76] Other threads are currently calling into gRPC, skipping fork() handlers
    error: update failed
```
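As an aside, the comment in `diff` above describes a fully automated alternative: report the changed input names in `replaces` so Pulumi deletes the old revision and creates a new one instead of updating in place. A rough sketch of the key computation (the helper name `changed_keys` is mine, not from the issue):

```python
def changed_keys(old_inputs, new_inputs):
    """Names of inputs whose values differ between the old and new state."""
    keys = set(old_inputs) | set(new_inputs)
    return sorted(k for k in keys if old_inputs.get(k) != new_inputs.get(k))

# Feeding this into DiffResult(changes=bool(changed), replaces=changed, ...)
# would make a version bump a full replacement rather than an in-place update.
print(changed_keys({'foo': '1.13.5'}, {'foo': '1.14.1'}))  # ['foo']
```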
I figured out my issue. This:

```python
return UpdateResult(
    outs={
        version: version,
        revision: revision
    }
)
```

should be this:

```python
return UpdateResult(
    outs={
        'version': version,
        'revision': revision
    }
)
```
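The difference is easy to miss: `{version: version}` uses the runtime *value* of the variable as the dictionary key, so the outputs never contain a `'version'` key at all. A quick illustration:

```python
version = '1.14.1'
revision = version.replace('.', '-')

bad = {version: version, revision: revision}       # keys are the values themselves
good = {'version': version, 'revision': revision}  # keys are the property names

print(bad)   # {'1.14.1': '1.14.1', '1-14-1': '1-14-1'}
print(good)  # {'version': '1.14.1', 'revision': '1-14-1'}
print('version' in bad, 'version' in good)  # False True
```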
I think there's an opportunity for a better error message here; `Exception calling application: 'version'` doesn't tell the user much.
I started seeing this today with a bad dictionary access as well. It would be nice to have a clearer error message here.
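The terse message appears to come from the underlying exception being a plain `KeyError`, whose string form is just the `repr` of the missing key; the "Exception calling application" prefix is then added by the layer that invokes the provider. A minimal reproduction of the underlying failure mode:

```python
# Outputs dict whose keys were accidentally taken from variable values,
# so the expected property name is missing.
outs = {'1.14.1': '1.14.1', '1-14-1': '1-14-1'}

try:
    outs['version']  # the lookup that blows up when reading outputs
except KeyError as e:
    # str(e) is only the quoted key, which is why the diagnostic reads
    # "Exception calling application: 'version'"
    print(f"Exception calling application: {e}")
```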
### What happened?

I have the following dynamic provider:

I'm calling it like this:

When I run `pulumi up` to create the mesh, it works. If I then destroy it, it works. I destroy it like this:

If I create the mesh with `version` set to `'1.13.5'` and then change the version of the mesh to `'1.14.1'`, the update works. However, if I then run `pulumi destroy -t urn:pulumi:staging::quickstart::pulumi-python:dynamic:Resource::mesh`, it fails with the following output:

I would expect the destroy to work the same way it works if I don't trigger an `update` first.

### Steps to reproduce

1. Put `istioctl-1.13.5` and `istioctl-1.14.1` in your `$PATH`.
2. Uncomment the line for `1.14.1` and comment the line for `1.13.5`, then run `pulumi up` to update the Istio version on the Kubernetes cluster.
3. Run `pulumi destroy -t $MESH_URN` to destroy the mesh.

### Expected Behavior

The `mesh` resource is deleted along with the underlying Istio installation.

### Actual Behavior
The destroy operation fails with this output:
### Versions used

### Additional context
I just started playing with Pulumi over the July 4th weekend. This isn't production code, but I'm exploring moving off of Terraform for managing our Kubernetes cluster, because managing Istio there is a pain.