Closed dridi closed 2 weeks ago
The CircleCI spring cleaning is ready for review, but I'm leaving it as a draft to avoid the risk of merging an undesirable commit.
I force-pushed to remove the temporary commit, we'll see how the ASAN job fares.
It looks like there is always one ASAN test failing (and exactly one test case, in 3 CI runs out of 3). All three failures look like some kind of resource starvation, which could be explained by the change of nested Docker runtime (whatever centos:7 ships vs the current moby-engine in fedora:latest).
I will try to switch to podman and see whether we get a more stable sanitizers job (and maybe we can get rid of the sudo usage).
That didn't go so well; my next attempt will be to simply remove the docker-in-docker setup from build jobs.
I think we need the docker-in-docker bits to support arm builds. Don't we?
Or we could move to GitHub actions...
The only ARM builds we have are for the packaging workflow, and they rely on rclass
instead of a nested docker execution.
ARM VMs are included in all plans (https://circleci.com/pricing/#comparison-table), so that should not be a blocker.
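For reference, running an ARM job on CircleCI would look roughly like this (a hedged sketch; the job name and machine image are placeholders, and `resource_class: arm.medium` is one of the ARM classes CircleCI documents):

```yaml
# Sketch of a CircleCI ARM job using a native ARM VM instead of
# docker-in-docker. Job name and image tag are illustrative only.
jobs:
  build-arm:
    machine:
      image: ubuntu-2204:current
    resource_class: arm.medium
    steps:
      - checkout
      - run: ./autogen.sh && ./configure && make check
```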
Test cases are failing because localhost
resolves to more than one IPv4 address:
**** v1 CLI RX|Message from VCC-compiler:
**** v1 CLI RX|Backend host "localhost:8081": resolves to too many addresses.
**** v1 CLI RX|Only one IPv4 and one IPv6 are allowed.
**** v1 CLI RX|Please specify which exact address you want to use, we found all of these:
**** v1 CLI RX|\t127.0.0.1:8081
**** v1 CLI RX|\t127.0.0.1:8081
**** v1 CLI RX|\t::1:8081
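The duplicated 127.0.0.1 line is the tell-tale. On an affected image, a check along these lines (simulating hypothetical /etc/hosts contents, since the actual file from the CI image isn't shown here) surfaces the misconfiguration:

```shell
# Simulate a broken /etc/hosts with two identical IPv4 entries for
# localhost. uniq -d prints each duplicated address exactly once, so any
# output at all flags the misconfiguration.
printf '127.0.0.1 localhost\n127.0.0.1 localhost\n::1 localhost\n' |
awk '{print $1}' | sort | uniq -d
```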
A few years ago I wanted our VSS code to de-duplicate entries before looping over them to deal with such broken systems.
https://github.com/varnishcache/varnish-cache/commit/14297da0839c284403930d66b460007e926692a7 passed with flying colors, so I rearranged the patch series accordingly with the last force-push.
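The de-duplication idea itself is simple. As a shell sketch (the real fix lives in Varnish's VSS code in C, not in shell), it amounts to collapsing identical resolved addresses before iterating over them:

```shell
# Sketch only: collapse identical resolved addresses before looping, so a
# broken system that yields 127.0.0.1 twice leaves a single candidate.
printf '127.0.0.1\n127.0.0.1\n::1\n' | sort -u
```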
https://github.com/varnishcache/varnish-cache/commit/2cd204bf2da10f88c92ea6442e0bded4ec205448 will exercise all jobs, and if it passes again, I will perform one last force-push to remove the temporary commit and see whether the sanitizer job is still stable, in which case I will be satisfied with this spring cleaning of our CI and attempt the same on supported branches.
I removed https://github.com/varnishcache/varnish-cache/commit/2cd204bf2da10f88c92ea6442e0bded4ec205448 after all checks passed.
The remaining failure looks like a race condition, firing up varnishadm
too soon:
** top === shell {
**** top shell_cmd|exec 2>&1 ;
**** top shell_cmd|\t# wait for startup vcl.load to complete
**** top shell_cmd|\tvarnishadm -n /tmp/vtc.88427.37d58456/t ping ||
**** top shell_cmd|\tvarnishadm -n /tmp/vtc.88427.37d58456/t ping
**** dT 0.723
[...]
**** dT 0.788
**** top shell_out|No -T in shared memory
**** top shell_out|No -T in shared memory
**** top shell_status = 0x0002
---- top shell_exit not as expected: got 0x0002 wanted 0x0000
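One way to harden that shell block (a hypothetical change, not what the test case currently does) would be to poll with a delay instead of issuing exactly two back-to-back pings, so varnishd has time to publish -T in shared memory:

```shell
# Hypothetical retry helper: run the command up to 5 times, pausing
# between attempts, instead of two immediate back-to-back tries.
retry() {
	for _ in 1 2 3 4 5; do
		"$@" && return 0
		sleep 1
	done
	return 1
}

# Intended usage in the test case (working directory elided):
# retry varnishadm -n "$tmpdir" ping
```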
So as far as I'm concerned, this is ready.
Retiring centos:7 is not trivial because it is the base image of many jobs.