Closed mpermar closed 2 years ago
This issue is being marked as stale due to a long period of inactivity and will be closed in 5 days if there is no response.
@mpermar Thanks for reporting this!
Can you describe in a little more detail the steps necessary to reproduce this bug? Did you have a multi-cluster environment? Did you run kapp deploy ... on something similar to the spec above? Do you think the kubeconfigSecretRef block is necessary to reproduce, or would you expect it to happen even with just a cluster/namespace specified?
@joe-kimmel-vmw this can be reproduced with this simple application:
apiVersion: kappctrl.k14s.io/v1alpha1
kind: App
metadata:
  name: simple-app-2
  namespace: default
spec:
  cluster:
    namespace: carvel-apps
    kubeconfigSecretRef:
      name: kubeconfigsecret
      key: value
  fetch:
  - git:
      url: https://github.com/vmware-tanzu/carvel-simple-app-on-kubernetes
      ref: origin/develop
      subPath: config-step-2-template
  template:
  - ytt: {}
  deploy:
  - kapp: {}
Then use two clusters: one runs kapp-controller, the other is the target cluster that kubeconfigsecret points to. If you try to deploy this application with "kubectl apply -f" and don't do any other steps, you will get an error complaining that the target namespace is not found.
friendlyDescription: Reconciling
inspect:
  error: 'Inspecting: Error (see .status.usefulErrorMessage for details)'
  exitCode: 1
  stderr: 'kapp: Error: App ''simple-app-2-ctrl'' (namespace: carvel-apps) does
    not exist: configmaps "simple-app-2-ctrl" not found'
  stdout: Target cluster 'https://0BC7951A26AA51343F814E02409425F8.gr7.us-west-2.eks.amazonaws.com'
  updatedAt: "2021-12-09T15:16:39Z"
observedGeneration: 1
template:
  exitCode: 0
  updatedAt: "2021-12-09T15:17:10Z"
usefulErrorMessage: 'kapp: Error: Creating app: namespaces "carvel-apps" not found'
If you subsequently create the carvel-apps namespace in the target cluster, reconciliation will succeed, but the app gets deployed to the default namespace. I presume, without having looked at the code or knowing how the internals work, that kapp-controller tries to put some metadata in ConfigMaps in the target namespace, but then ends up installing the application in a different namespace.
To me the above is quite misleading. If it is intended behavior, then the documentation should be modified accordingly.
@mpermar - we're still having some difficulty setting up an appropriate reproduction environment. Can you provide some details of what infra is underlying your multi-cluster install?
Interesting. I wouldn't expect this to be source/target dependent. I am experiencing this behaviour with an EKS target cluster and a local kind cluster.
I've reproduced this with kind, will investigate
I think this is actually a kapp issue, the command we issue is correct, but the example you're deploying has a namespace in the manifest, so it's ambiguous what namespace to deploy to.
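To make the ambiguity concrete, here is a minimal sketch of the kind of resource involved (illustrative only; the names follow the nginx-deployment example in the gist below, which hard-codes a namespace):

```yaml
# Illustrative sketch: a resource with a hard-coded namespace.
# kapp's -n flag decides where the app-metadata ConfigMap lives,
# while this metadata.namespace field decides where the Deployment
# itself lands.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx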
$ kubectl apply -n foo -f https://gist.githubusercontent.com/benmoss/1313d45bbe7fb6698885801be6425bd9/raw/49ab952e1779cb218144f627081aeb55c3a8b5ed/depl.yml
error: the namespace from the provided object "default" does not match the namespace "foo". You must pass '--namespace=default' to perform this operation.
$ kapp deploy -n foo --app bar -f https://gist.githubusercontent.com/benmoss/1313d45bbe7fb6698885801be6425bd9/raw/49ab952e1779cb218144f627081aeb55c3a8b5ed/depl.yml
Target cluster 'https://127.0.0.1:46195' (nodes: kind-control-plane)

kapp: Error: Creating app: namespaces "foo" not found
$ k create ns foo
namespace/foo created
$ kapp deploy -n foo --app bar -f https://gist.githubusercontent.com/benmoss/1313d45bbe7fb6698885801be6425bd9/raw/49ab952e1779cb218144f627081aeb55c3a8b5ed/depl.yml
Target cluster 'https://127.0.0.1:46195' (nodes: kind-control-plane)
Changes
Namespace  Name              Kind        Conds.  Age  Op      Op st.  Wait to    Rs  Ri
default    nginx-deployment  Deployment  -       -    create  -       reconcile  -   -

Op:      1 create, 0 delete, 0 update, 0 noop
Wait to: 1 reconcile, 0 delete, 0 noop
Continue? [yN]: y
10:44:28AM: ---- applying 1 changes [0/1 done] ----
10:44:28AM: create deployment/nginx-deployment (apps/v1) namespace: default
10:44:28AM: ---- waiting on 1 changes [0/1 done] ----
10:44:28AM: ongoing: reconcile deployment/nginx-deployment (apps/v1) namespace: default
10:44:28AM:  ^ Waiting for generation 2 to be observed
10:44:28AM:  L ok: waiting on replicaset/nginx-deployment-65bb4555f4 (apps/v1) namespace: default
10:44:28AM:  L ongoing: waiting on pod/nginx-deployment-65bb4555f4-kdg5w (v1) namespace: default
10:44:28AM:     ^ Pending
10:44:28AM:  L ongoing: waiting on pod/nginx-deployment-65bb4555f4-g9dcd (v1) namespace: default
10:44:28AM:     ^ Pending
10:44:28AM:  L ongoing: waiting on pod/nginx-deployment-65bb4555f4-fkp9d (v1) namespace: default
10:44:28AM:     ^ Pending
10:44:29AM: ongoing: reconcile deployment/nginx-deployment (apps/v1) namespace: default
10:44:29AM:  ^ Waiting for 3 unavailable replicas
10:44:29AM:  L ok: waiting on replicaset/nginx-deployment-65bb4555f4 (apps/v1) namespace: default
10:44:29AM:  L ongoing: waiting on pod/nginx-deployment-65bb4555f4-kdg5w (v1) namespace: default
10:44:29AM:     ^ Pending: ContainerCreating
10:44:29AM:  L ongoing: waiting on pod/nginx-deployment-65bb4555f4-g9dcd (v1) namespace: default
10:44:29AM:     ^ Pending: ContainerCreating
10:44:29AM:  L ongoing: waiting on pod/nginx-deployment-65bb4555f4-fkp9d (v1) namespace: default
10:44:29AM:     ^ Pending: ContainerCreating
10:44:30AM: ok: reconcile deployment/nginx-deployment (apps/v1) namespace: default
10:44:30AM: ---- applying complete [1/1 done] ----
10:44:30AM: ---- waiting complete [1/1 done] ----
Succeeded
I think this is the intended behavior of kapp, surprising as it might be. It's a supported workflow to deploy your app to one namespace, but with the resources it manages in another. So it makes sense that you need to have the carvel-apps namespace, because that's where kapp will store the app ConfigMaps, but it respects the namespaces set in the manifest.
$ kubectl -n carvel-apps get cm
NAME                             DATA   AGE
kube-root-ca.crt                 1      45m
simple-app-2-ctrl                1      45m
simple-app-2-ctrl-change-gjh94   1      45m
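If the intent is for the resources to land in carvel-apps as well, one option (a sketch, not verified against this example repo) is to set the namespace explicitly in the resource manifests, since kapp respects manifest namespaces:

```yaml
# Sketch: pin the resource to the same namespace the App targets, so
# both the kapp app ConfigMaps and the workload end up in carvel-apps.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-app
  namespace: carvel-apps
# ...rest of the Deployment spec unchanged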
@benmoss and team, I'm not going to argue if this is or is not the intended behavior of kapp, but again, this is what the documentation states:
# specifies that app should be deployed to destination cluster;
# by default, cluster is same as where this resource resides (optional; v0.5.0+)
cluster:
  # specifies namespace in destination cluster (optional)
  namespace: ns2
I might be misreading "specifies that app should be deployed to destination cluster", and perhaps the expected behavior is that the app is not deployed there in certain scenarios, but then those scenarios should be clarified rather than letting the user figure out where the app has gone.
I agree it's confusing, I'll reopen this for further thinking on how we can make this more intuitive
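In the meantime, if I'm reading the App spec's kapp deploy options correctly, intoNs (and mapNs) can remap where resources land, which may be a usable workaround here. A sketch, assuming intoNs behaves like kapp's --into-ns flag:

```yaml
# Hypothetical workaround sketch: ask kapp to place resources into carvel-apps.
spec:
  cluster:
    namespace: carvel-apps
    kubeconfigSecretRef:
      name: kubeconfigsecret
      key: value
  deploy:
  - kapp:
      intoNs: carvel-apps   # see kapp's --into-ns; behavior with hard-coded namespaces may vary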
What steps did you take:
Have an application that sets a target namespace at the cluster level but also at the metadata level, like this one:
What happened:
When you try to deploy an application like that one, kapp-controller will fail the deployment and force you to create the carvel-apps namespace in the target cluster. Deployment then succeeds, but if you check the carvel-apps namespace there will be nothing there: the app gets deployed in the default namespace.
What did you expect:
I would expect kapp-controller to deploy the application in the target namespace that it forced me to create. However, if kapp-controller detected this and didn't ask me to create the namespace, that would be equally fine.
Anything else you would like to add:
What I understood from the docs was that the namespace at the top level would be the default, but that it would be overridden by the cluster-level namespace. Perhaps this misunderstanding is just a documentation issue, I don't know.
Vote on this request
This is an invitation to the community to vote on issues, to help us prioritize our backlog. Use the "smiley face" up to the right of this comment to vote.
👍 "I would like to see this addressed as soon as possible" 👎 "There are other more important things to focus on right now"
We are also happy to receive and review Pull Requests if you want to help work on this issue.