spinkube / spin-plugin-kube

A Spin plugin for interacting with Kubernetes.

`spin k8s scaffold`: Two images in different repositories result in the same manifest #24

Closed · bacongobbler closed this 8 months ago

bacongobbler commented 8 months ago

The following two commands produce manifests with the same SpinApp name:

><> spin k8s scaffold --from bacongobbler/hello-rust:latest
apiVersion: core.spinoperator.dev/v1
kind: SpinApp
metadata:
  name: hello-rust
spec:
  image: "bacongobbler/hello-rust:latest"
  replicas: 2
  executor: containerd-shim-spin

><> spin k8s scaffold --from rajatjindal/hello-rust:latest
apiVersion: core.spinoperator.dev/v1
kind: SpinApp
metadata:
  name: hello-rust
spec:
  image: "rajatjindal/hello-rust:latest"
  replicas: 2
  executor: containerd-shim-spin

If you ran `kubectl apply` with either of these manifests, you'd end up with the same SpinApp resource: hello-rust.

Is this an issue we need to solve? If so, how?
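
For context, here's a minimal Go sketch of the kind of name truncation that causes the collision (an illustration of the assumed behavior, not the plugin's actual code): only the last path segment of the image reference is kept, so the repository owner is dropped.

```go
package main

import (
	"fmt"
	"strings"
)

// shortName sketches the current scaffold behavior: drop the tag, then
// keep only the last path segment of the repository. Registries with
// ports and digest references are ignored for brevity.
func shortName(imageRef string) string {
	ref := imageRef
	if i := strings.LastIndex(ref, ":"); i != -1 {
		ref = ref[:i] // strip the tag
	}
	if i := strings.LastIndex(ref, "/"); i != -1 {
		ref = ref[i+1:] // strip everything up to the last slash
	}
	return ref
}

func main() {
	// Both references collapse to the same SpinApp name.
	fmt.Println(shortName("bacongobbler/hello-rust:latest")) // hello-rust
	fmt.Println(shortName("rajatjindal/hello-rust:latest"))  // hello-rust
}
```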

bacongobbler commented 8 months ago

Paraphrasing a Notion comment thread with @lann:

Maybe scaffold should check for an existing app with the given name and bail if found?

What if the user wanted to update their application? A common workflow is this:

spin k8s scaffold -f bacongobbler/hello-rust:1.0.0 | kubectl apply -f -

If I wanted to update my app to 2.0.0, then I’d run the following:

spin k8s scaffold -f bacongobbler/hello-rust:2.0.0 | kubectl apply -f -

If we were to check for the same app name in the cluster, the second invocation would fail.

Maybe one way to solve this could be to take the full repository name into consideration. Instead of truncating the SpinApp's name field to hello-rust, we could do something like bacongobbler-hello-rust (substituting the forward slash with a hyphen; see the sketch after this thread). That would allow two users with the same repository name to deploy two different SpinApps in the same namespace.

What do you think about that approach?

IMO magic should always fail safe. We could add a --force flag (and point at it in the failure message).
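
To make the naming idea above concrete, here's a hypothetical Go sketch (the function name and edge-case handling are assumptions, not merged code): derive the name from the full repository path, substituting forward slashes with hyphens so the result stays a valid DNS-style Kubernetes object name.

```go
package main

import (
	"fmt"
	"strings"
)

// fullName derives a SpinApp name from the full repository path by
// stripping the tag and replacing slashes with hyphens. A real
// implementation would also need to enforce Kubernetes name rules
// (lowercase, allowed characters, length limits).
func fullName(imageRef string) string {
	ref := imageRef
	if i := strings.LastIndex(ref, ":"); i != -1 {
		ref = ref[:i] // strip the tag
	}
	return strings.ReplaceAll(ref, "/", "-")
}

func main() {
	// The owner prefix keeps the two apps distinct.
	fmt.Println(fullName("bacongobbler/hello-rust:latest")) // bacongobbler-hello-rust
	fmt.Println(fullName("rajatjindal/hello-rust:latest"))  // rajatjindal-hello-rust
}
```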

bacongobbler commented 8 months ago

I just realized we can solve this by recommending that the user call `kubectl create` instead of `kubectl apply`. `kubectl create` will fail if the resource already exists in the cluster.
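
For example, replaying the two scaffold commands from the top of this issue:

spin k8s scaffold --from bacongobbler/hello-rust:latest | kubectl create -f -
spin k8s scaffold --from rajatjindal/hello-rust:latest | kubectl create -f -

The first invocation creates the hello-rust SpinApp; the API server rejects the second with an AlreadyExists error because a hello-rust resource is already present, whereas `kubectl apply` would update it in place and silently overwrite the first user's app.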

bacongobbler commented 8 months ago

Switching this to a docs issue. We should document this workflow, as well as the recommended steps to move forward.