we run docker-base, which e2e depends on, but the e2e job does a `kubectl apply` on the helm output (which always uses the last version).
not sure I have enough time to fix this today and I'm likely off for a week, so rough plan:
- make the e2e job do a `helm template` with custom values (with the version set) that ensures it's the currently built image that gets tested (as opposed to either `latest` or the Chart.yaml pin)
- figure out a way to get tag versions picked up by docker-metadata (tag based build)?
- figure out how the e2e job works in the tag based setup? (skip?)
- possibly: combine the features of the base build + telemetry build into one docker build and get rid of docker-otel to simplify otel version selection in the chart (did a hacky `_helper` for it in gotpl...)
- update the test doc on kube.rs with any fixes
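For the first item, a minimal sketch of what the e2e step could look like (the chart path and the `version` values key are assumptions about this chart's layout, and `github.sha` stands in for however the built image is tagged):

```yaml
# hypothetical e2e workflow step; chart path and value key are assumptions
- name: e2e
  run: |
    # render the chart pinned to the image built in this run, rather than
    # the Chart.yaml pin or :latest, then apply the rendered manifests
    helm template charts/controller \
      --set version="${{ github.sha }}" \
      | kubectl apply -f -
```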
future long term:

- migrate from my personal dockerhub to the kube-rs github registry
- push the chart to the kube-rs github registry (also oci now)
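Since helm 3.8 charts can be pushed to OCI registries directly, so the long-term chart publishing could be a step along these lines (the ghcr.io path is an assumption):

```yaml
# hypothetical release step; the ghcr.io org/path is an assumption
- name: push chart
  run: |
    helm package charts/controller
    helm push controller-*.tgz oci://ghcr.io/kube-rs/charts
```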
The GHA docker-metadata action is meant to infer tags in https://github.com/kube-rs/controller-rs/blob/09ace63c3b7a6b7a0e695f118581660eaddd97e3/.github/workflows/ci.yml#L19-L27 via its pep440 tag type.
but as can be seen in the last job it outputs:

which means the `e2e` ci job, which is meant to test the built image from the chart, fails. I think this could be because we are not running the job in response to a tag push, but instead as a normal build. but there's also the bad ordering in the current setup: https://github.com/kube-rs/controller-rs/actions/runs/5814041092
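If that hypothesis is right, it matches how pep440 tag rules behave: a rule like the sketch below (not necessarily the exact config in ci.yml) only resolves `{{version}}` when the workflow runs on a tag ref, so a plain branch build produces no version tag unless a fallback such as `type=sha` is also listed:

```yaml
# sketch of a docker/metadata-action tag config; image name is hypothetical
- uses: docker/metadata-action@v4
  with:
    images: user/controller
    tags: |
      type=pep440,pattern={{version}}
      # fallback for non-tag builds so e2e has a deterministic tag to test:
      type=sha
```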