Closed khanhtc1202 closed 1 week ago
Welcome @khanhtc1202! It looks like this is your first PR to karmada-io/karmada 🎉
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 53.04%. Comparing base (c09ca0d) to head (3a6052c). Report is 2 commits behind head on master.
:umbrella: View full report in Codecov by Sentry.
Hello Khanh, so excited to meet you here! Thanks a lot.
It looks great and I'll try it on my side.
But after a quick look, there is a failed case in the workflow, but it shows successful in the end. Do you know why?
cc @lxtywypc who might be interested in it.
Hi @RainbowMango thanks for the warm greeting 😄
But after a quick look, there is a failed case in the workflow, but it shows successful in the end. Do you know why?
TBH, the test result returned as FAILED on my local machine even before I made this change, and that is why I adopted gotestsum in the first place: to find out which test failed. Does `make test` pass on your local machine?
I just ran a test with `go test`, `gotest` and `gotestsum`; the output of `gotestsum` is weird:
Output of `go test`, which acts normally:
-bash-5.0# go test ./pkg/util/helper -run=GetMinTolerationTime -v
=== RUN TestGetMinTolerationTime
=== RUN TestGetMinTolerationTime/no_noExecuteTaints
-1ns=== RUN TestGetMinTolerationTime/no_usedTolerations
0s=== RUN TestGetMinTolerationTime/with_noExecuteTaints_and_usedTolerations
59.999896389s=== RUN TestGetMinTolerationTime/usedTolerantion.TolerationSeconds_is_nil
-1ns=== RUN TestGetMinTolerationTime/noExecuteTaints.TimeAdded_is_nil
-1ns=== RUN TestGetMinTolerationTime/find_the_latest_trigger_time
59.999839202s=== RUN TestGetMinTolerationTime/trigger_time_is_up
0s--- PASS: TestGetMinTolerationTime (0.00s)
--- PASS: TestGetMinTolerationTime/no_noExecuteTaints (0.00s)
--- PASS: TestGetMinTolerationTime/no_usedTolerations (0.00s)
--- PASS: TestGetMinTolerationTime/with_noExecuteTaints_and_usedTolerations (0.00s)
--- PASS: TestGetMinTolerationTime/usedTolerantion.TolerationSeconds_is_nil (0.00s)
--- PASS: TestGetMinTolerationTime/noExecuteTaints.TimeAdded_is_nil (0.00s)
--- PASS: TestGetMinTolerationTime/find_the_latest_trigger_time (0.00s)
--- PASS: TestGetMinTolerationTime/trigger_time_is_up (0.00s)
PASS
ok github.com/karmada-io/karmada/pkg/util/helper (cached)
Output of `gotest`, also normal:
-bash-5.0# gotest ./pkg/util/helper -run=GetMinTolerationTime -v
=== RUN TestGetMinTolerationTime
=== RUN TestGetMinTolerationTime/no_noExecuteTaints
-1ns=== RUN TestGetMinTolerationTime/no_usedTolerations
0s=== RUN TestGetMinTolerationTime/with_noExecuteTaints_and_usedTolerations
59.999896389s=== RUN TestGetMinTolerationTime/usedTolerantion.TolerationSeconds_is_nil
-1ns=== RUN TestGetMinTolerationTime/noExecuteTaints.TimeAdded_is_nil
-1ns=== RUN TestGetMinTolerationTime/find_the_latest_trigger_time
59.999839202s=== RUN TestGetMinTolerationTime/trigger_time_is_up
0s--- PASS: TestGetMinTolerationTime (0.00s)
--- PASS: TestGetMinTolerationTime/no_noExecuteTaints (0.00s)
--- PASS: TestGetMinTolerationTime/no_usedTolerations (0.00s)
--- PASS: TestGetMinTolerationTime/with_noExecuteTaints_and_usedTolerations (0.00s)
--- PASS: TestGetMinTolerationTime/usedTolerantion.TolerationSeconds_is_nil (0.00s)
--- PASS: TestGetMinTolerationTime/noExecuteTaints.TimeAdded_is_nil (0.00s)
--- PASS: TestGetMinTolerationTime/find_the_latest_trigger_time (0.00s)
--- PASS: TestGetMinTolerationTime/trigger_time_is_up (0.00s)
PASS
ok github.com/karmada-io/karmada/pkg/util/helper (cached)
Output of `gotestsum`, abnormal:
-bash-5.0# gotestsum ./pkg/util/helper -run=GetMinTolerationTime -v
✓ pkg/util/helper (cached)
=== Failed
=== FAIL: pkg/util/helper TestGetMinTolerationTime (unknown)
-1ns=== RUN TestGetMinTolerationTime/no_usedTolerations
0s=== RUN TestGetMinTolerationTime/with_noExecuteTaints_and_usedTolerations
59.999439296s=== RUN TestGetMinTolerationTime/usedTolerantion.TolerationSeconds_is_nil
-1ns=== RUN TestGetMinTolerationTime/noExecuteTaints.TimeAdded_is_nil
-1ns=== RUN TestGetMinTolerationTime/find_the_latest_trigger_time
59.999302798s=== RUN TestGetMinTolerationTime/trigger_time_is_up
0s--- PASS: TestGetMinTolerationTime (0.00s)
--- PASS: TestGetMinTolerationTime/no_noExecuteTaints (0.00s)
--- PASS: TestGetMinTolerationTime/no_usedTolerations (0.00s)
--- PASS: TestGetMinTolerationTime/with_noExecuteTaints_and_usedTolerations (0.00s)
--- PASS: TestGetMinTolerationTime/usedTolerantion.TolerationSeconds_is_nil (0.00s)
--- PASS: TestGetMinTolerationTime/noExecuteTaints.TimeAdded_is_nil (0.00s)
--- PASS: TestGetMinTolerationTime/find_the_latest_trigger_time (0.00s)
--- PASS: TestGetMinTolerationTime/trigger_time_is_up (0.00s)
=== FAIL: pkg/util/helper TestGetMinTolerationTime/no_noExecuteTaints (unknown)
-1ns=== RUN TestGetMinTolerationTime/no_usedTolerations
0s=== RUN TestGetMinTolerationTime/with_noExecuteTaints_and_usedTolerations
59.999439296s=== RUN TestGetMinTolerationTime/usedTolerantion.TolerationSeconds_is_nil
-1ns=== RUN TestGetMinTolerationTime/noExecuteTaints.TimeAdded_is_nil
-1ns=== RUN TestGetMinTolerationTime/find_the_latest_trigger_time
59.999302798s=== RUN TestGetMinTolerationTime/trigger_time_is_up
0s--- PASS: TestGetMinTolerationTime (0.00s)
--- PASS: TestGetMinTolerationTime/no_noExecuteTaints (0.00s)
--- PASS: TestGetMinTolerationTime/no_usedTolerations (0.00s)
--- PASS: TestGetMinTolerationTime/with_noExecuteTaints_and_usedTolerations (0.00s)
--- PASS: TestGetMinTolerationTime/usedTolerantion.TolerationSeconds_is_nil (0.00s)
--- PASS: TestGetMinTolerationTime/noExecuteTaints.TimeAdded_is_nil (0.00s)
--- PASS: TestGetMinTolerationTime/find_the_latest_trigger_time (0.00s)
--- PASS: TestGetMinTolerationTime/trigger_time_is_up (0.00s)
DONE 2 tests, 2 failures in 1.006s
`gotestsum` shows 2 failures but returns exit code `0`:
DONE 2 tests, 2 failures in 1.006s
-bash-5.0# echo $?
0
Do you know why `gotestsum` acts like this?
@RainbowMango I found the reason and fixed it in this PR: https://github.com/karmada-io/karmada/pull/4852. We have a print statement in the test logic, and it produces this in the `go test` JSON output:
< {"Time":"2024-04-19T23:11:03.422127+07:00","Action":"output","Package":"command-line-arguments","Test":"TestGetMinTolerationTime/no_noExecuteTaints","Output":"=== RUN TestGetMinTolerationTime/no_noExecuteTaints\n"}
< {"Time":"2024-04-19T23:11:03.422137+07:00","Action":"output","Package":"command-line-arguments","Test":"TestGetMinTolerationTime/no_noExecuteTaints","Output":"-1ns"}
< {"Time":"2024-04-19T23:11:03.422154+07:00","Action":"output","Package":"command-line-arguments","Test":"TestGetMinTolerationTime/no_noExecuteTaints","Output":"--- PASS: TestGetMinTolerationTime/no_noExecuteTaints (0.00s)\n"}
That `-1ns`, printed without a trailing newline, prevents gotestsum from parsing the `go test` output correctly 😄 (I contacted the gotestsum maintainer team and got the answer here: https://github.com/gotestyourself/gotestsum/issues/400).
I think we can remove that print statement, since printing inside the test was only for debugging and doesn't carry any meaning. Once that PR is merged, this change will be fine.
Good job! Shall we rebase this PR, since #4852 has already been merged?
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Something went wrong with my branch commits; let me open another one.
What type of PR is this?
/kind feature
What this PR does / why we need it:
I started playing with the Karmada source code today, and with the size of the current test suite it's really hard to tell from `make test` what failed. AFAIK, we also use the `gotest` tool to color the test output, but the key point is that it's still hard to scroll back up to the failed test (it's beyond my terminal's scroll limit). Also, `gotest` is no longer maintained, so I looked around for community-preferred alternatives. I found that `gotestsum` is quite a good choice: some other big projects use it (see its "used by" list), and it is well maintained. With this PR's change, I can get a summary of which tests failed at the end of the output, like this (and it's colorful as well):
Which issue(s) this PR fixes: Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?: