getporter / operator

The Porter Operator gives you a native, integrated experience for managing your bundles from Kubernetes. It is the recommended way to automate your bundle pipeline with support for GitOps.
https://porter.sh/operator
Apache License 2.0

Upgrade gives error: Installation does not exist #37

Closed: JeremyGHutchins closed this issue 2 years ago

JeremyGHutchins commented 3 years ago

Trying to follow the docs on running porter upgrade... I installed the operator with...

porter credentials generate porterops -r ghcr.io/getporter/porter-operator:canary
porter install porterops -c porterops -r ghcr.io/getporter/porter-operator:canary
porter invoke porterops --action configure-namespace --param namespace=porter-installs -c porterops

I have the following Installation custom resource:

apiVersion: porter.sh/v1
kind: Installation
metadata:
  namespace: porter-installs
  name: workbench-operator
spec:
  reference: "kineticadevcloud/workbench-operator:v7.1.4.0-rc20"
  action: "install"
  parameters:
    kubeconfig: "<here's a kube config>"

It installs successfully. Then I kubectl apply the following "upgrade"...

apiVersion: porter.sh/v1
kind: Installation
metadata:
  namespace: porter-installs
  name: workbench-operator
spec:
  reference: "kineticadevcloud/workbench-operator:v7.1.4.0-rc21"
  action: "upgrade"
  parameters:
    kubeconfig: "<here's a kube config>"

But that fails and the log ends with...

Pulling bundle docker.io/kineticadevcloud/workbench-operator:v7.1.4.0-rc21
WARNING: both registry and reference were provided; using the reference value of docker.io/kineticadevcloud/workbench-operator:v7.1.4.0-rc21 for the bundle reference
WARNING: both registry and reference were provided; using the reference value of docker.io/kineticadevcloud/workbench-operator:v7.1.4.0-rc21 for the bundle reference
upgrading workbench-operator-install...
Resolved storage plugin to storage.porter.filesystem
/root/.porter/porter plugin run storage.porter.filesystem
Error: could not load installation workbench-operator: Installation does not exist

I also tried running kubectl edit installation directly, and that yielded the same result. I tried creating a separate CR for the upgrade as well, but it just changes the error message to "could not load installation name of upgrade cr: Installation does not exist". What am I doing wrong here? Thanks!
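
For reference, the direct edit was roughly this (assuming the installation short name resolves to the porter.sh CRD):

kubectl edit installation workbench-operator -n porter-installs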

JeremyGHutchins commented 3 years ago

Note that I get the same results when I run the porter-hello example from the README.md.

loading porter configuration...
porter-config
porter v0.38.1 (3018b91c)
porter upgrade porter-hello --reference=getporter/porter-hello:v0.1.1 --debug --debug-plugins --driver=kubernetes
Pulling bundle docker.io/getporter/porter-hello:v0.1.1
upgrading porter-hello...
Resolved storage plugin to storage.porter.filesystem
/root/.porter/porter plugin run storage.porter.filesystem
Error: could not load installation porter-hello: Installation does not exist

carolynvs commented 3 years ago

Seems like there is a misconfiguration that is causing porter install to run with the filesystem plugin. So when you upgrade, it can't find the appropriate files because remote storage wasn't used.

Can you try installing the latest tagged version of the operator? v0.2.0 installs the operator with the default kubernetes plugin. The credentials still prompt for Azure; you can set them all to empty strings and the bundle will use the kubernetes plugin for storage.
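
For example, the install commands from earlier can be re-run against the tagged release instead of canary (a sketch only; this reuses the same credential set, and an existing canary install may need to be upgraded rather than re-installed):

porter install porterops -c porterops -r ghcr.io/getporter/porter-operator:v0.2.0
porter invoke porterops --action configure-namespace --param namespace=porter-installs -c porterops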

You can tell that it's using the right config by looking at the output of the porter pod and checking for the following line:

Resolved storage plugin to storage.kubernetes.storage
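
One way to check is to grep the agent pod's logs (the pod name is a placeholder; the agent pod runs in the namespace the installation targets):

kubectl logs -n porter-installs <agent-pod-name> | grep "Resolved storage plugin"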

The storage plugin is configured in the secret named `porter-config` in the `porter-installs` namespace. It contains a config.toml file that should look like this:

debug = true
debug-plugins = true
default-secrets = "kubernetes-secrets"
default-storage = "kubernetes-storage"

[[secrets]]
name = "kubernetes-secrets"
plugin = "kubernetes.secret"

[[storage]]
name = "kubernetes-storage"
plugin = "kubernetes.storage"
JeremyGHutchins commented 3 years ago

Thanks for the quick update, @carolynvs. v0.2.0 didn't help by itself, but when I also provided the above config.toml in the credentials, that seems to have fixed it!

...
/cnab/app/cnab/app/mixins/exec/runtimes/exec-runtime upgrade --debug
DEBUG Parsed Input:
&exec.Action{Name:"upgrade", Steps:[]exec.Step{exec.Step{Instruction:exec.Instruction{Description:"World 2.0", Command:"./helpers.sh", WorkingDir:"", Arguments:[]string{"upgrade"}, SuffixArguments:[]string(nil), Flags:builder.Flags(nil), Outputs:[]exec.Output(nil), SuppressOutput:false}}}}
/cnab/app ./helpers.sh upgrade
World 2.0
execution completed successfully!
Resolved storage plugin to storage.kubernetes.storage
Resolved plugin config: 
 map[string]interface {}(nil)
/root/.porter/plugins/kubernetes/kubernetes run storage.kubernetes.storage
Kubernetes client config file does not exist
Resolved storage plugin to storage.kubernetes.storage
Resolved plugin config: 
 map[string]interface {}(nil)
/root/.porter/plugins/kubernetes/kubernetes run storage.kubernetes.storage
Kubernetes client config file does not exist
error closing client during Kill

carolynvs commented 3 years ago

I'm glad you got it to work! Maybe there is a problem with how we are defaulting the config to use the plugin; I'll keep poking around to get that fixed.

Thanks for letting me know about the bug!