llarsson opened this issue 2 years ago
The Index ID for the OpenSearch query is hardcoded and will not generalize.
Do I understand correctly that this is more a discussion than an actionable item?
Let me give you my thoughts on the points you raised:
The README should mention that the APP_DOMAIN must also be updated.
Yes, PR welcome! (Although I would prefer it if GitHub Actions supported non-secret environment variables like GitLab CI does.)
Creating a new repo from this template, for some reason known only to GitHub itself, prevents me from getting the Environments part of the Project Settings. So I can't actually use an Environment with my GitHub Actions.
Yes, super annoying! I'm a bit reluctant to simply change the instructions to "just use repository secrets", since that is not quite the right practice here. Ideally, you should have one pipeline pointing to maybe three environments: development, staging, and production.
Shall we maybe just add a note along the lines of "use repository secrets if environment secrets are for some reason unavailable"?
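For reference, here is a minimal sketch (the environment name and secret name are placeholders, not taken from this repo) of what the environment-based practice looks like once the Environments settings are available; environment secrets take precedence over repository secrets of the same name, so the note above would degrade gracefully:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production          # hypothetical environment name
    steps:
      - name: Use the scoped kubeconfig
        env:
          KUBECONFIG_DATA: ${{ secrets.KUBECONFIG }}   # env secret if defined, else repo secret
        run: echo "kubeconfig is $(echo -n "$KUBECONFIG_DATA" | wc -c) bytes"
```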
If I follow the official instructions and RBAC rules that are suggested for CI/CD ServiceAccounts, my workflow fails, because it's not allowed to list Nodes. Workaround: just list the Pods instead, I guess?
Nice catch! :smile: I think I accidentally used some credentials from an investigation task. So yes, listing Pods would be better. It would also serve to add a bit more context to the CI/CD job.
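For illustration, a minimal Role sketch (names and namespace are my assumptions, not the official RBAC rules) that would allow the CI/CD ServiceAccount to list Pods instead of Nodes:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cicd-pod-reader            # hypothetical name
  namespace: my-app                # hypothetical application namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
# Bound to the CI/CD ServiceAccount via a RoleBinding in the same namespace.
```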
...it also can't list certificates.
I think we should add that to the official docs.
To make this super friendly to new users, we should list all the steps:
- Make a new project in Harbor to host the images.
- Export the Image Pull Secret into the default ServiceAccount as per instructions so deployment can succeed.
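For the second step, a rough sketch (the secret name is an assumption) of what the default ServiceAccount should end up looking like once the Image Pull Secret has been exported into it:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
imagePullSecrets:
  - name: harbor-pull-secret       # hypothetical name of the exported pull secret
```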
So ... this is the discussion I wanted to provoke. Should we list all these steps? Or should we do them ourselves for the user?
Change the values here in the workflow definition file as per instructions, including APP_DOMAIN (or just put them as encrypted secrets, because WHO CARES if they are encrypted or not -- they are not supposed to change, anyway).
I tried to make a pipeline which deploys all demo-* branches to demo-*.$APP_DOMAIN, so that multiple app developers can each test their own branch. It would be rather inconvenient if APP_DOMAIN was starred out and the URL couldn't be clicked directly. I'd argue changing APP_DOMAIN once is less inconvenient than not being able to click on a URL every time one deploys a development branch. But yes, sorry to sound like a broken record, I'd prefer GitLab CI's non-secret variables here.
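To make that concrete, a rough sketch of the kind of workflow I mean (names are assumptions); with APP_DOMAIN as a plain variable, the derived hostname stays readable and clickable in the logs:

```yaml
env:
  APP_DOMAIN: example.com                    # placeholder, deliberately not a secret
jobs:
  deploy-branch:
    runs-on: ubuntu-latest
    steps:
      - name: Derive per-branch hostname
        # e.g. branch demo-llarsson -> demo-llarsson.example.com
        run: echo "HOSTNAME=${GITHUB_REF_NAME}.${APP_DOMAIN}" >> "$GITHUB_ENV"
```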
The Index ID for the OpenSearch query is hardcoded and will not generalize.
Do you have evidence for this? I was hoping that all Compliant Kubernetes environments magically use the same ID. :smile: Unfortunately, I looked at the OpenSearch Dashboard source code and didn't find a way around using the Index ID.
If I update code on main, it's not getting auto-deployed, because the new image will still have main somewhere in its container tag, and therefore it's like latest in the sense that it will appear to be the same as before. I suppose this would also apply if I had named the branch demo-llarsson and updated it. So this should be fixed by adding e.g. git rev-parse --short HEAD to the tag, or something better that maybe GitHub has.
And yeah, @cristiklein, this is not so much a single issue as a discussion about various hiccups I've encountered along the way. Not all of them are necessarily "issues", and as such, I didn't feel like going through the motions of creating a separate issue for each of them.
I made it work nicely by changing to this tagging strategy
TAG: ${{ github.ref_name }}-${{ github.sha }}
I was considering this, but I see a risk that the container registry will be littered with development images and fill up.
To me, the optimal solution is:
helm upgrade \
--install \
$HELM_RELEASE \
deploy/ck8s-user-demo/ \
--wait \
--set image.repository=harbor.$DOMAIN/$REGISTRY_PROJECT/ck8s-user-demo \
--set image.tag=$TAG \
--set image.pullPolicy=Always \
--set ingress.hostname=$APP_DOMAIN \
--set podAnnotations.gitSha=$GITHUB_SHA
This forces the Deployment to recreate the Pods and -- thanks to pullPolicy: Always -- re-pull the new image.
Oh darn, Harbor doesn't have a retention policy setting?! That's pretty bad.
I was hoping one could set a policy such that images with a certain prefix never expire, but others get deleted within a few days.
Since that's clearly lacking, let's go with the approach you wrote! I'll make the update in my repo based off of this template and try it out.
Giving this a bit more thought: it would be incredibly hard to have a simple "rolling back" strategy if we always overwrite the old image tag.
So I kind of don't like this approach, after all.
Hmm.
There actually are retention settings! So I would prefer to use that instead. Keep the latest 10 images around.
And version-tagged images can be kept around forever.
TIL: https://goharbor.io/docs/2.0.0/working-with-projects/working-with-images/create-tag-retention-rules/
I agree, proper tags with tag retention looks better.
Can we then have git describe --tags --always as the container image tag? This way you get v0.0.1 for production-like images, and v0.0.1-1-gc242b89 for development-like images.
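A minimal sketch of how that could look in the workflow (assuming actions/checkout); note that git describe only sees the tags if the full history is fetched:

```yaml
- uses: actions/checkout@v4
  with:
    fetch-depth: 0                 # fetch tags and full history for git describe
- name: Compute image tag
  run: echo "TAG=$(git describe --tags --always)" >> "$GITHUB_ENV"
```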
Yeah, that's prettier :)
Hmm. Running that on just "main" on a given commit does not include the name "main", which would be prettier. So how about combining the name of the branch you're on and the output from git describe --tags --always?
Ufff, we're getting very opinionated here. :smile: Maybe we should start making a distinction between:
1) Was this pipeline triggered by a branch, and is it meant for development? Then use ${GITHUB_REF}-${GITHUB_SHA}. I'd say the tag is less useful here.
2) Was this pipeline triggered by a tag, and is it meant for staging and/or production? Then use git describe --tags.
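Something like the following sketch could cover both cases in one step (I use GITHUB_REF_NAME rather than GITHUB_REF, since container tags cannot contain slashes; the step name is made up):

```yaml
- name: Compute image tag
  run: |
    if [ "$GITHUB_REF_TYPE" = "tag" ]; then
      # staging/production: version tag, e.g. v0.0.1
      echo "TAG=$(git describe --tags)" >> "$GITHUB_ENV"
    else
      # development branch: unique per commit, e.g. main-<sha>
      echo "TAG=${GITHUB_REF_NAME}-${GITHUB_SHA}" >> "$GITHUB_ENV"
    fi
```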
Hmm, yes, this is entering almost the philosophical realm now: if I don't change anything about my application (it's still v1.11.3 as far as my features are concerned), but I do rebuild the image to pull in new dependencies and avoid vulnerabilities in the base image, something still needs to trigger a redeployment. If we always have ${GITHUB_REF}-${GITHUB_SHA}, simply restarting the workflow will do what I want in that case.
That's how I've kept it in the first pull request for this repo, which I am about to make.
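As a side note (my assumption, not something the template does today), such a rebuild-on-demand can also be made explicit with a workflow_dispatch trigger, so nobody has to hunt for an old run to restart:

```yaml
on:
  push:
    branches: [main]               # assumed existing trigger
  workflow_dispatch: {}            # manual "rebuild and redeploy this branch" button
```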
I may add comments to this issue with more as it comes up.