Closed: @nabadger closed this issue 3 months ago.
Hi @nabadger,
I think the main reason here is that the pre-migration CR didn't have `status.workspaceID`. Could you please check your backup file (I hope you made one)?
The workaround here would be to patch the status with the workspace ID and let the operator reconcile it:

```shell
$ kubectl patch workspace <NAME> --subresource='status' --type='merge' -p '{"status":{"workspaceID": "ws-XXX"}}'
```

Here `ws-XXX` is your target workspace ID. Heads up: once the operator reconciles the workspace, it will overwrite any changes that do not match the manifest.
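For anyone curious what `--type='merge'` does: kubectl sends a JSON Merge Patch (RFC 7386), which only touches the keys present in the patch body, so the rest of the object is left alone. A minimal Python sketch of that merge semantics (the `workspace` dict and names like `my-workspace` below are purely illustrative, not a real Workspace manifest):

```python
def json_merge_patch(target, patch):
    """Apply an RFC 7386 JSON Merge Patch: keys present in the patch
    replace (or recursively merge into) keys in the target; a None
    value deletes a key; keys absent from the patch are untouched."""
    if not isinstance(patch, dict):
        return patch
    if not isinstance(target, dict):
        target = {}
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result

# Illustrative resource that lacks a status, as in this issue.
workspace = {
    "metadata": {"name": "my-workspace"},
    "spec": {"organization": "my-org"},
}

patched = json_merge_patch(workspace, {"status": {"workspaceID": "ws-XXX"}})
print(patched["status"]["workspaceID"])  # ws-XXX
print(patched["spec"])                   # {'organization': 'my-org'} -- untouched
```

This is why the patch above can add a missing `status` block without disturbing `spec`; the `--subresource='status'` flag is what lets the patch target the status subresource at all.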
Please let me know if that helps.
Thanks!
Ah perfect, that worked - thanks @arybolovlev
I did try a similar fix with `kubectl edit`, but I think status cannot be set that way (`kubectl patch` did the trick) :)
Operator Version, Kind and Kubernetes Version
YAML Manifest File
During testing of the v1 to v2 operator upgrade, I have a Workspace that is not being reconciled by the v2 operator.
This particular workspace already exists in Terraform Cloud and was created by the v1 operator. Possibly this workspace was in a bad state under the v1 operator (maybe it also didn't have a status set at that point, but that's just a guess).
I don't see this issue on other workspaces.
In the current state, there are 2 concerns I have:

- The workspace has no `status.workspaceID` field. There is no `status` field at all when I inspect it with `kubectl get workspace <name> -o yaml`.
- The operator throws an error instead of reconciling the workspace.
Some of this sounds similar to https://github.com/hashicorp/terraform-cloud-operator/issues/214
Expected Behavior
I would expect the operator to set the status and not throw an error.
Actual Behavior
It's not setting the status and it's throwing an error.
Additional Context
References
Community Note