Closed ryanjbaxter closed 10 months ago
what branch is this failing in?
3.0.x and main
It seems like one of the other rancher containers is not being shut down (on Jenkins; the local issue is clearly something different).
I can't reproduce the local problem, which I am trying to figure out first.
I deleted my local docker images, then from 3.0.x:
wind57@wind57s-MacBook-Pro ~/D/p/s/s/s/spring-cloud-kubernetes-configuration-watcher (3.0.x)> java -version
openjdk version "17.0.1" 2021-10-19 LTS
OpenJDK Runtime Environment (build 17.0.1+12-LTS)
OpenJDK 64-Bit Server VM (build 17.0.1+12-LTS, mixed mode, sharing)
mvn clean install -Dspring-boot.build-image.builder=dashaun/builder:tiny -DskipTests
Then from IntelliJ, run ActuatorRefreshIT. Can you try re-building the entire project and images? I usually do:
wind57@wind57s-MacBook-Pro ~/D/p/s/spring-cloud-kubernetes (3.0.x)> mvn clean install -Dskip.build.image=true -DskipITs -DskipTests -T1C
wind57@wind57s-MacBook-Pro ~/D/p/s/spring-cloud-kubernetes (3.0.x)> cd spring-cloud-kubernetes-controllers/spring-cloud-kubernetes-configuration-watcher/
wind57@wind57s-MacBook-Pro ~/D/p/s/s/s/spring-cloud-kubernetes-configuration-watcher (3.0.x)> mvn clean install -Dspring-boot.build-image.builder=dashaun/builder:tiny -DskipTests
wind57@wind57s-MacBook-Pro ~/D/p/s/s/s/spring-cloud-kubernetes-configuration-watcher (3.0.x)> cd ../../spring-cloud-kubernetes-integration-tests/spring-cloud-kubernetes-k8s-client-configuration-watcher/
wind57@wind57s-MacBook-Pro ~/D/p/s/s/s/spring-cloud-kubernetes-k8s-client-configuration-watcher (3.0.x)> mvn clean install -Dspring-boot.build-image.builder=dashaun/builder:tiny -DskipTests
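After the three builds above, it can be worth confirming that the watcher image actually landed in the local docker daemon before running the IT. A hedged sketch (the exact image name and tag may differ per branch):

```shell
# Hedged check: confirm the configuration-watcher image exists in the local
# docker daemon before running the IT (exact name/tag may differ per branch)
docker images 2>/dev/null | grep -i spring-cloud-kubernetes-configuration-watcher \
  || echo "configuration watcher image not found locally"
```

If the image is missing, the IT fails at the image-import step rather than in the test logic itself.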
The "content digest ... not found" issue looks very odd.
I forgot -Dspring-boot.build-image.builder=dashaun/builder:tiny
that fixed it locally, sorry about the noise on that 🤦‍♂️
No worries at all, I've done it far too many times too ))
The jenkins issue is rather interesting too:
16:50:55 20:50:55.543 [main] WARN tc.rancher/k3s:v1.25.4-k3s1 - Reuse was requested but the environment does not support the reuse of containers
16:50:55 To enable reuse of containers, you must set 'testcontainers.reuse.enable=true' in a file located at /home/jenkins/.testcontainers.properties
Has anything changed on your Jenkins server recently? The thing is, without that re-use support, this issue is rather expected.
In GitHub Actions we set that re-use support via a dedicated workflow step, but I don't know if that happens in Jenkins.
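The actual workflow step is not quoted in this thread; as a rough sketch, enabling re-use on a CI runner amounts to writing one property into the file the Testcontainers warning points at (path assumed to be the build user's home directory):

```shell
# Hedged sketch of what such a CI step amounts to: enable Testcontainers
# container re-use by writing the property into the file named in the warning
printf 'testcontainers.reuse.enable=true\n' >> "$HOME/.testcontainers.properties"

# show the resulting file
cat "$HOME/.testcontainers.properties"
```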
Not that I know of. If I look at the last green build, I see that we were not reusing the container there either:
08:33:56.518 [main] INFO tc.testcontainers/ryuk:0.5.1 - Creating container for image: testcontainers/ryuk:0.5.1
08:33:56.715 [main] INFO tc.testcontainers/ryuk:0.5.1 - Container testcontainers/ryuk:0.5.1 is starting: 92c0fb507a19cbf49c33627c678971e91419deb7e249ec6cc10fc59baddce6fe
08:33:57.089 [main] INFO tc.testcontainers/ryuk:0.5.1 - Container testcontainers/ryuk:0.5.1 started in PT0.57130536S
08:33:57.095 [main] INFO org.testcontainers.utility.RyukResourceReaper - Ryuk started - will monitor and terminate Testcontainers containers on JVM exit
08:33:57.095 [main] INFO org.testcontainers.DockerClientFactory - Checking the system...
08:33:57.095 [main] INFO org.testcontainers.DockerClientFactory - ✔︎ Docker server version should be at least 1.6.0
08:33:57.096 [main] INFO tc.rancher/k3s:v1.25.4-k3s1 - Creating container for image: rancher/k3s:v1.25.4-k3s1
08:33:57.796 [main] WARN tc.rancher/k3s:v1.25.4-k3s1 - Reuse was requested but the environment does not support the reuse of containers
To enable reuse of containers, you must set 'testcontainers.reuse.enable=true' in a file located at /home/jenkins/.testcontainers.properties
08:33:59.040 [main] INFO tc.rancher/k3s:v1.25.4-k3s1 - Container rancher/k3s:v1.25.4-k3s1 is starting: 0dc8dcf679c57b699885f29e713fbb57a897789129afd8486a69995d96e4f2a1
08:34:18.540 [main] INFO tc.rancher/k3s:v1.25.4-k3s1 - Container rancher/k3s:v1.25.4-k3s1 started in PT21.444034897S
Do you run parallel builds on Jenkins too, the way we do on GitHub?
No, it's all run sequentially.
Honestly, I am confused how this works on Jenkins at all. All of our integration tests have a @BeforeAll that issues a K3sContainer::start, where K3sContainer CONTAINER = new FixedPortsK3sContainer ... with fixed ports. So I would have expected any second integration test to fail with the error that you are seeing. I am sure I am missing something about how Jenkins is set up or what happens there :(
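When a run does collide, one quick way to confirm the fixed-port theory is to look for a leftover container still publishing one of those ports. A hedged sketch (6443 is the usual k3s API port; the actual fixed ports live in FixedPortsK3sContainer):

```shell
# Hedged sketch: look for a leftover container still publishing a fixed port.
# 6443 is the usual k3s API port; the real port list is in FixedPortsK3sContainer.
docker ps --format '{{.Names}}\t{{.Ports}}' 2>/dev/null | grep 6443 \
  || echo "no container holding port 6443"
```

If a previous IT's k3s container shows up here, the next test's fixed-port bind will fail exactly as described above.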
Maybe in most cases the container is shut down before the next test runs, and we happened to uncover a situation where that is not the case. I ignored the tests for now and the builds are passing. I am working through our 2023.0.0-RC1 release at the moment, but once that is done I will revert the changes and try to create the testcontainers.properties file in our Jenkins builds. I am guessing that will make the issue go away.
I see, it was probably Ryuk that was shutting down the containers in this case. Otherwise, I agree that creating a testcontainers.properties file with re-use enabled should fix it. That should also speed up the tests a lot. Anyway, I am not going to bug you until you are done with the release.
So you reverted the code and the tests are now enabled; can this issue be closed as such?
I made some changes to our Jenkins build so that we create the .testcontainers.properties file. It seems like this worked, but now it's failing for another reason. In any case, I think we can close this one.
@wind57 I am seeing some failing ITs on our Jenkins builds. There are two failures: ActuatorRefreshIT, which then causes ActuatorRefreshMultipleNamespacesIT to fail, because I believe it assumes the first test starts the container. However, when I try to run ActuatorRefreshIT locally on my Mac, I am getting an error trying to import the configuration watcher image. Any idea what is going on?