**dexhorthy** opened this issue 4 years ago
@marccampbell @markpundsack getting some more questions on this one -- can I get an ack that y'all are aware of it?
Yeah, the admin console not respecting the namespace flag is a real pain and super confusing. Per Slack, it also seems that a `POD_NAMESPACE` environment variable is needed to get some of the namespaces correct, and that too is confusing. `POD_NAMESPACE` at the very least governs what the `Namespace` template function returns.
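For reference, `Namespace` is one of KOTS's static template functions, so anything rendered through it picks up that value. A minimal sketch of how a vendor manifest might reference it (the ConfigMap itself is hypothetical):

```yaml
# Hypothetical vendor manifest: the Namespace template function resolves to
# the namespace the admin console runs in, which is where POD_NAMESPACE
# on the kotsadm pod comes into play.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  deployedNamespace: repl{{ Namespace }}
```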
Yeah, unfortunately I think `pull` needs a lot of attention and refactoring. It was originally written to support a weird workflow which we no longer actually need: it attempts to "sideload" the application so that the admin console can install it. Since this was written, we've introduced automated installs and airgap installs to existing clusters.

IMO, `kots pull` should be a way to get the Admin Console manifests (and application metadata, i.e. branding and the `kots.io/v1beta1` `Application` resource needed for RBAC permissions) and generate the manifests for it. KOTS supports airgap installs, and I think this will turn `pull` into a composable command instead of an overreaching one.
@marccampbell fwiw, I was directed to `kots pull` as the means of doing the initial deploy of a KOTS application via GitOps (Argo CD to be specific). For me, simply being able to convert to GitOps post-install isn't quite enough.
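To make the desired flow concrete, this is roughly what I'm after (the app slug, license path, and namespace below are placeholders):

```shell
# Render the app + admin console manifests locally, then commit them to the
# repo that Argo CD watches, instead of installing first and converting to
# GitOps afterwards.
kubectl kots pull sentry/stable \
  --license-file ./license.yaml \
  --namespace sentry-namespace \
  --rootdir ./gitops-repo
cd ./gitops-repo && git add . && git commit -m "initial kots pull"
```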
@genebean understood. There are definitely some bugs in this workflow we need to address before this is viable.
I think this is likely the same issue @dexhorthy mentioned, but when I try to apply the manifests generated by this I get the following error:

```
The ConfigMap "kotsadm-bundle-0" is invalid: metadata.annotations: Too long: must have at most 262144 characters
```
> fwiw, I was directed to `kots pull` as the means of doing the initial deploy of a KOTS application via GitOps (Argo CD to be specific). For me, simply being able to convert to GitOps post-install isn't quite enough.
I've worked around this limitation by `kots install`-ing an empty app, configuring GitOps, then switching to the real channel.
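Sketching that workaround (the app slug, channel, and namespace are placeholders):

```shell
# 1. Install an effectively empty app so the admin console comes up cleanly.
kubectl kots install myapp/empty --namespace sentry-namespace
# 2. Configure GitOps from the admin console UI.
# 3. Switch the app to the real channel so updates flow through Git from here on.
```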
The large ConfigMap appears to be fixed. `kotsadm-bundle-0.yaml` currently clocks in at 123 KB by my count, which is well under the 1 MB Kubernetes limit for ConfigMaps. I'm able to `kubectl apply` it without issue.
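For anyone re-checking, something like this is enough to confirm (paths assume a fresh `kots pull` into `./sentry`; the exact layout may differ):

```shell
ls -lh ./sentry/base/kotsadm-bundle-0.yaml            # ~123K in my run
kubectl apply -f ./sentry/base/kotsadm-bundle-0.yaml -n sentry-namespace
```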
### tl;dr

`kubectl kots pull` seems pretty broken right now. This issue documents the problems and some manual workarounds to get around them.

I tried a `kubectl kots pull` using the published Sentry example license and was unable to apply the resulting YAML. I've tried this with a few apps and it should be pretty easy to reproduce. There are a few issues here:

- manifests in `upstream/admin-console` have a hardcoded `namespace: default`, regardless of what namespace is passed to `kubectl kots pull`
- the `kotsadm-bundle-0` ConfigMap generated by `kots pull` is too large to `kubectl apply`
- kotsadm-api crash loops until a cluster token is patched in

There's one thing that I think is maybe an enhancement opportunity rather than a bug: at the end of the deploy, kotsadm still wants you to upload a license, config, preflight checks, etc.
### Repro steps
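Roughly, with the Sentry example license saved locally (the license path, namespace, and kustomization path below are placeholders):

```shell
# Pull the app and admin console manifests, then try to apply them.
kubectl kots pull sentry/stable \
  --license-file ./sentry-license.yaml \
  --namespace sentry-namespace \
  --rootdir ./sentry
kubectl apply -k ./sentry/base
```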
But running that gives an error of

```
The ConfigMap "kotsadm-bundle-0" is invalid: metadata.annotations: Too long: must have at most 262144 characters
```

(This comes from `kubectl apply` stashing the entire object in the `kubectl.kubernetes.io/last-applied-configuration` annotation, which is capped at 256 KB.)
### Workaround step 1: removing config map
It seems this can be worked around by commenting the config map out of `base/kustomization.yaml`, but I am unclear as to whether this will break anything.
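I.e., something like this, assuming the bundle ConfigMap is listed as a resource there (the surrounding resource names are illustrative):

```yaml
# base/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - kotsadm-deployment.yaml
  - kotsadm-api-deployment.yaml
  # - kotsadm-bundle-0.yaml   # commented out: too large for `kubectl apply`
```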
### Workaround step 2: overriding default namespace in kustomize

After removing the config map and doing another apply, we get a whole bunch of errors about hardcoded namespaces.
So let's update `base/kustomization.yaml` with our namespace to see if that fixes it:
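A sketch of the edit, using `sentry-namespace` as a stand-in for the real target namespace:

```yaml
# base/kustomization.yaml (sketch): a top-level namespace field makes
# kustomize stamp this onto every resource, overriding the hardcoded
# `namespace: default` in the admin-console manifests.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: sentry-namespace
resources:
  - kotsadm-deployment.yaml
  - kotsadm-api-deployment.yaml
```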
At the end of this, the apply works. I could probably have also done this namespace tweak in a downstream, so you could argue this falls on the end user, but I'd say it's better for things to work out of the box, which I think we could do by following our own advice and omitting `namespace` on all the `admin-console` resources.

Unfortunately this still leaves kotsadm-api in a crash loop.
### Workaround step 3: adding a cluster token via downstream
Let's make a downstream that patches in a cluster token:
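A minimal sketch of such a downstream; the secret name and key (`kotsadm-cluster-token`) are assumptions about what kotsadm-api reads, and the token value is obviously a placeholder:

```yaml
# downstream/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
patchesStrategicMerge:
  - cluster-token-secret.yaml
```

```yaml
# downstream/cluster-token-secret.yaml (sketch): overwrite the empty
# cluster token the base ships with.
apiVersion: v1
kind: Secret
metadata:
  name: kotsadm-cluster-token
stringData:
  kotsadm-cluster-token: some-shared-token-value
```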
Now we should have something like this to apply:
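In other words, a layout roughly like this (names mirror the sketches above):

```
sentry/
├── upstream/
├── base/
│   └── kustomization.yaml        # namespace set, bundle ConfigMap commented out
└── downstream/
    ├── kustomization.yaml
    └── cluster-token-secret.yaml
```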
We can verify this works with a `kustomize build`:
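For example, checking that the namespace and the token land in the rendered output (paths and names as in the sketches above):

```shell
kustomize build ./sentry/downstream | grep "namespace:" | sort -u
kustomize build ./sentry/downstream | grep -A 3 "name: kotsadm-cluster-token"
```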
Let's delete the previous secret so we can overwrite the value (no `replace -k` yet):
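Sketched out, with the namespace and secret name assumed above:

```shell
# kubectl replace doesn't support -k here, so delete the old secret first,
# then re-apply the downstream to recreate it with the new token value.
kubectl delete secret kotsadm-cluster-token -n sentry-namespace
kubectl apply -k ./sentry/downstream
```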
Let's quickly verify that we have some data in there now:
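E.g., with the key name assumed above:

```shell
kubectl get secret kotsadm-cluster-token -n sentry-namespace \
  -o jsonpath='{.data.kotsadm-cluster-token}' | base64 --decode
```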
And it looks like now our kotsadm-api pod is running okay. Hopefully this will also fix the crash loop in `kotsadm` as it waits for the bucket to be created in minio.

Success!
It looks like now kotsadm is up and running, as well as our Sentry app pods.
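Checking, with the namespace placeholder from above:

```shell
kubectl get pods -n sentry-namespace
```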
From here we can launch the admin console.
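One way is kots's own subcommand, which port-forwards to kotsadm for you (namespace as above):

```shell
kubectl kots admin-console --namespace sentry-namespace
```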
We still have to go through and upload the license etc., but once we've gone through the UI setup, things seem to be humming along nicely and we can launch the Sentry app on `localhost:9000`.
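For the record, reaching Sentry on `localhost:9000` was just a port-forward; the service name here is a guess at what the app ships:

```shell
kubectl port-forward svc/sentry -n sentry-namespace 9000:9000
```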