adoptium / aqa-tests

Home of test infrastructure for Adoptium builds
https://adoptium.net/aqavit
Apache License 2.0

Improvements related to rerun test jobs #5016

Closed · sophia-guo closed this 2 weeks ago

sophia-guo commented 7 months ago

Rerun test jobs were recently enabled in Adoptium, which definitely helps in the latest releases. Here are some thoughts and issues we ran into during the releases:

[Screenshot attached: 2024-01-29 at 5:33:09 PM]
smlambert commented 7 months ago

Regarding 3 versus 1: I think we will adjust to 1 at ci.adoptium.net, since the failures we have are often machine-related, so there is no value in rerunning 3x on the same machine. Since this was our 'trial use' of this feature, we set it to 3 to see how it would work.
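Since rerunIterations comes up repeatedly in this thread, here is a minimal Groovy sketch (names hypothetical, not the actual ci-jenkins-pipelines code) of what the setting controls:

```groovy
// Hypothetical sketch of the rerun policy under discussion: retry a failed
// target up to rerunIterations times. When failures are machine-related,
// extra iterations on the same machine add no value, hence 1 rather than 3.
int rerunIterations = 1  // dropped from the trial value of 3

// Placeholder for the real Jenkins test step; always fails for illustration.
boolean runTarget(String target) {
    println "Running ${target}"
    return false
}

boolean passed = runTarget('sanity.openjdk')
for (int i = 1; !passed && i <= rerunIterations; i++) {
    println "Rerun ${i} of ${rerunIterations}"
    passed = runTarget('sanity.openjdk')
}
println(passed ? 'PASSED' : 'Still failing after reruns')
```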

smlambert commented 7 months ago

After doing triage for the Jan 2024 CPU, there are several updates to TRSS that I intend to add, including that the list of failed openjdk testcases (just as we add it to TAP files) should be tracked in the TRSS database. I have to investigate, but this could be done by changing how we configure jtreg, or by actively printing the TAP file contents to the console and grabbing them at the end of the job.
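A hedged sketch of the second option, printing each TAP file to the console at the end of the job so TRSS can grab the failed testcase list from the build output (the output directory here is an assumption, not the confirmed layout):

```groovy
// Print all TAP files under the (assumed) TKG output directory so the
// testcase list appears in the console log where TRSS can scrape it.
def outputDir = new File('aqa-tests/TKG/output')
def tapFiles = outputDir.listFiles()?.findAll { it.name.endsWith('.tap') } ?: []
tapFiles.each { tap ->
    println "=== TAP results: ${tap.name} ==="
    println tap.text
}
```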

smlambert commented 7 months ago

The rerun feature is ideally suited for environments that are more stable than ci.adoptium.net, but at the same time, if we wait for stability we may never get to try any new features.

andrew-m-leonard commented 6 months ago

I have been seeing extended test durations with recent builds, and have done a bit of digging into an example: x64AlpineLinux sanity.openjdk has recently been taking about 17 hours on average to complete: https://ci.adoptium.net/job/Test_openjdk22_hs_sanity.openjdk_x86-64_alpine-linux

The issue is even worse for https://ci.adoptium.net/job/Test_openjdk22_hs_extended.openjdk_x86-64_alpine-linux/ which is typically taking 2 days if it gets that far.

I'm not sure that, as it currently stands, this amount of extra test run time is effective.

@sophia-guo @smlambert Thoughts? Can we just re-run the "testcases"? Should we do a blanket exclude of the failing tests?

The problem seems most highlighted for Alpine Linux.

sxa commented 6 months ago

Do we understand what the failures are and whether they are system-specific? That would seem to be the important thing to do the root-cause analysis on. @Haroon-Khel are these on your radar? I thought we only ever did one rerun for each job (but that may be wrong based on what you've said), so I'm surprised if we're getting four.

If they're taking longer than expected (and since it's happening on sanity and extended, that seems likely) then it could be another example of the concurrency-detection issues we've been seeing in containers.

sxa commented 6 months ago

> I'm not sure that, as it currently stands, this amount of extra test run time is effective.

If we let it run to completion then we know we have a complete picture of the situation, which should assist debugging. Also, since we're only running one build a week, it shouldn't cause as much of a problem as it did when we were running stuff nightly 🤷 But it does need to be understood, probably as quite a high priority.

andrew-m-leonard commented 6 months ago

> I'm not sure that, as it currently stands, this amount of extra test run time is effective.
>
> If we let it run to completion then we know we have a complete picture of the situation, which should assist debugging. Also, since we're only running one build a week, it shouldn't cause as much of a problem as it did when we were running stuff nightly 🤷 But it does need to be understood, probably as quite a high priority.

The failing tests are quite clear from the 2 re-runs; there is no need to wait for the subsequent 2 re-runs!

I think it does effectively highlight the problem :-) which is a bonus. It looks like it's mainly an Alpine issue, with extended.openjdk and sanity.openjdk, which between them seem to take 2 days to run.

I'm going to examine the rogue tests and raise an exclude for them.

smlambert commented 6 months ago

For the record, I have also dropped rerunIterations from 3 to 1 in our build pipeline code (via https://github.com/adoptium/ci-jenkins-pipelines/pull/929).

sophia-guo commented 6 months ago

For openjdk tests it should be possible to rerun individual testcases when the number of failing testcases is small.
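A hedged sketch of that idea, rerunning individual jtreg testcases instead of whole targets when the failure count is small (the threshold, the failed-test list, and the command assembly are illustrative assumptions):

```groovy
// Rerun only the failed jtreg testcases when there are few enough of them.
List<String> failedTestcases = [
    'java/net/httpclient/SomeTest.java',   // hypothetical failures
    'java/nio/channels/OtherTest.java',
]
int rerunAsTestcasesLimit = 20             // assumed cutoff

if (failedTestcases.size() <= rerunAsTestcasesLimit) {
    // jtreg accepts individual test paths, so only the failures are re-run.
    String cmd = 'jtreg -jdk:$TEST_JDK_HOME ' + failedTestcases.join(' ')
    println "Would run: ${cmd}"
} else {
    println 'Too many failures; rerun the full target instead.'
}
```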

smlambert commented 6 months ago

Also related, as another suggested improvement to automatic reruns: https://github.com/adoptium/aqa-tests/issues/4874 (use of the EXIT_SUCCESS flag).

smlambert commented 6 months ago

Also related, as another suggested improvement to automatic reruns: https://github.com/adoptium/aqa-tests/issues/4379 (acknowledge and skip test targets tagged as notRerun in the playlist).
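A hedged sketch of the #4379 idea, filtering notRerun-tagged targets out of the automatic rerun list (real playlists are XML in aqa-tests; the map below is a hypothetical stand-in for parsed playlist metadata):

```groovy
// Skip automatic reruns for targets whose playlist entry carries a
// notRerun-style tag; everything else stays eligible.
Map<String, List<String>> playlistTags = [
    'jdk_custom': ['notRerun'],   // hypothetical entries
    'jdk_net_1' : [],
]
List<String> failedTargets = ['jdk_custom', 'jdk_net_1']

List<String> rerunTargets = failedTargets.findAll { target ->
    !('notRerun' in (playlistTags[target] ?: []))
}
println "Targets eligible for rerun: ${rerunTargets}"   // [jdk_net_1]
```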

sophia-guo commented 4 months ago

Example: rerunning 4 targets takes 1.5 hours; rerunning 6 testcases takes 43 seconds.

https://github.com/adoptium/aqa-tests/issues/5016#issuecomment-1944289530

sophia-guo commented 4 months ago

If the rerun build is unstable, the failed targets' deep history is still helpful. If the rerun build is successful, there is no need to provide the deep history of the failed targets in the rerun's parent job.

Currently, if the rerun build succeeds, the failed targets' deep history is still shown in some parent jobs (Example A) but not in others (Example B). If the rerun build is unstable, some failed targets' deep history is available (Example C) and some is not (Example D). It is not clear why, which is confusing. Examples B and C are the expected behavior. The examples come from https://trss.adoptium.net/resultSummary?parentId=66157115879917006ef59450

Example D: Test_openjdk22_hs_extended.openjdk_x86-64_alpine-linux ⚠️ UNSTABLE ⚠️

- Test_openjdk22_hs_extended.openjdk_x86-64_alpine-linux_rerun ⚠️ UNSTABLE ⚠️ Rerun failed

Example C: Test_openjdk22_hs_extended.openjdk_x86-64_linux ⚠️ UNSTABLE ⚠️

- Test_openjdk22_hs_extended.openjdk_x86-64_linux_rerun ⚠️ UNSTABLE ⚠️ Rerun failed
- Test_openjdk22_hs_extended.openjdk_x86-64_linux_testList_0 ⚠️ UNSTABLE ⚠️ jdk_tools_1 => deep history 0/3 passed | possible issues; jdk_build_0 => deep history 13/15 passed | possible issues
- Test_openjdk22_hs_extended.openjdk_x86-64_linux_testList_2 ⚠️ UNSTABLE ⚠️ jdk_build_1 => deep history 3/5 passed | possible issues

Example A: Test_openjdk22_hs_extended.openjdk_x86-64_mac ⚠️ UNSTABLE ⚠️

- Test_openjdk22_hs_extended.openjdk_x86-64_mac_rerun ✅ SUCCESS ✅ Rerun all
- Test_openjdk22_hs_extended.openjdk_x86-64_mac_testList_1 ⚠️ UNSTABLE ⚠️ jdk_security3_1 => deep history 0/1 passed | possible issues; jdk_jfr_1 => deep history 0/1 passed | possible issues
- Test_openjdk22_hs_extended.openjdk_x86-64_mac_testList_2 ⚠️ UNSTABLE ⚠️ jdk_net_1 => deep history 7/8 passed | possible issues; jdk_nio_1 => deep history 5/8 passed | possible issues

Example B: Test_openjdk22_hs_extended.openjdk_ppc64_aix ⚠️ UNSTABLE ⚠️

- Test_openjdk22_hs_extended.openjdk_ppc64_aix_rerun ✅ SUCCESS ✅ Rerun all
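A small sketch of the expected behavior described above (Examples B and C): a failed target's deep history should appear in the parent job only while the rerun build is still unstable. The names here are hypothetical, not actual TRSS code:

```groovy
// Show deep history only when the rerun is still unstable; a successful
// rerun means the history adds no value in the parent job.
enum BuildResult { SUCCESS, UNSTABLE }

boolean showDeepHistory(BuildResult rerunResult) {
    rerunResult == BuildResult.UNSTABLE
}

assert !showDeepHistory(BuildResult.SUCCESS)    // Example B: history hidden
assert  showDeepHistory(BuildResult.UNSTABLE)   // Example C: history shown
```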

sophia-guo commented 2 weeks ago

Closing this as most concerns have been resolved.

The only remaining one no longer has valid information; if it happens again, a separate, specific issue can be opened.