xmake-io / xmake

🔥 A cross-platform build utility based on Lua
https://xmake.io
Apache License 2.0
9.87k stars 776 forks

print unit tests report #5314

Closed. TOMO-CAT closed this issue 2 months ago.

TOMO-CAT commented 2 months ago

https://github.com/xmake-io/xmake/issues/5305

As described in that issue, we would like a summary report printed after the unit tests finish: when there are many tests the output scrolls past quickly, and it is hard to tell which tests failed.

Bazel's implementation is used as a reference: (screenshot)

waruqi commented 2 months ago

In verbose mode, why print the passed/failed progress twice? The error details are not repeated in the later report anyway; they are only shown during the first pass... it does not seem useful.

ruki-2:test ruki$ xmake test -v
[ 50%]: cache compiling.release src/compile_1.cpp
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang -c -Qunused-arguments -target x86_64-apple-macos14.0 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.0.sdk -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG -o build/.objs/test_10/macosx/x86_64/release/src/compile_1.cpp.o src/compile_1.cpp
[ 50%]: cache compiling.release src/compile_2.cpp
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang -c -Qunused-arguments -target x86_64-apple-macos14.0 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.0.sdk -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG -o build/.objs/test_11/macosx/x86_64/release/src/compile_2.cpp.o src/compile_2.cpp
running tests ...
[ 11%]: test_10/compile_fail     .................................... passed 0.000s
src/compile_2.cpp:4:5: error: static assertion failed: error
    static_assert(0, "error");
    ^             ~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.0.sdk/usr/include/c++/v1/__config:477:32: note: expanded from macro 'static_assert'
#    define static_assert(...) _Static_assert(__VA_ARGS__)
                               ^              ~~~~~~~~~~~
1 error generated.
[ 14%]: test_11/compile_pass     .................................... failed 0.001s
hello foo

[ 28%]: test_1/args              .................................... passed 0.027s
hello xmake

[ 28%]: test_1/default           .................................... passed 0.025s
hello xmake

[ 28%]: test_1/fail_output       .................................... passed 0.024s
hello foo

[ 28%]: test_1/pass_output       .................................... passed 0.022s
hello xmake

[ 28%]: test_13/stub_1           .................................... passed 0.019s
hello xmake

[ 28%]: test_14/stub_2           .................................... passed 0.018s
hello xmake

[ 28%]: test_15/stub_n           .................................... passed 0.016s
hello foo

[ 28%]: test_2/args              .................................... passed 0.015s
hello xmake

[ 47%]: test_2/default           .................................... passed 0.049s
hello xmake

[ 47%]: test_2/fail_output       .................................... passed 0.047s
hello foo

[ 47%]: test_2/pass_output       .................................... passed 0.038s
hello foo

[ 47%]: test_3/args              .................................... passed 0.036s
hello xmake

[ 47%]: test_3/default           .................................... passed 0.035s
hello xmake

[ 47%]: test_3/fail_output       .................................... passed 0.030s
hello foo

[ 47%]: test_3/pass_output       .................................... passed 0.017s
hello foo

[ 47%]: test_4/args              .................................... passed 0.016s
hello xmake

[ 66%]: test_4/default           .................................... passed 0.035s
hello xmake

[ 66%]: test_4/fail_output       .................................... passed 0.032s
hello foo

[ 66%]: test_4/pass_output       .................................... passed 0.023s
hello2 foo

[ 66%]: test_5/args              .................................... passed 0.021s
hello2 xmake

[ 66%]: test_5/default           .................................... passed 0.019s
hello2 xmake

matched failed output: hello2 .*, actual output: hello2 xmake

[ 66%]: test_5/fail_output       .................................... failed 0.018s
hello2 foo

not matched passed output: hello foo, actual output: hello2 foo
[ 66%]: test_5/pass_output       .................................... failed 0.016s
hello foo

[ 66%]: test_6/args              .................................... passed 0.014s
hello xmake

[ 85%]: test_6/default           .................................... passed 0.036s
hello xmake

[ 85%]: test_6/fail_output       .................................... passed 0.034s
hello foo

[ 85%]: test_6/pass_output       .................................... passed 0.026s
hello foo

run test_7/args failed, exit code: 255
[ 85%]: test_7/args              .................................... failed 0.024s
hello xmake

run test_7/default failed, exit code: 255
[ 85%]: test_7/default           .................................... failed 0.022s
hello xmake

run test_7/fail_output failed, exit code: 255
[ 85%]: test_7/fail_output       .................................... failed 0.020s
hello foo

run test_7/pass_output failed, exit code: 255
[ 85%]: test_7/pass_output       .................................... failed 0.017s
hello xmake

[ 85%]: test_8/args              .................................... passed 0.015s
hello xmake

[100%]: test_8/default           .................................... passed 0.030s
hello xmake

[100%]: test_8/fail_output       .................................... passed 0.028s
hello xmake

not matched passed output: hello foo, actual output: hello xmake
[100%]: test_8/pass_output       .................................... failed 0.019s
hello foo

[100%]: test_9/args              .................................... passed 0.016s
hello xmake

[100%]: test_9/default           .................................... passed 0.015s
hello xmake

[100%]: test_9/fail_output       .................................... passed 0.013s
hello foo

[100%]: test_9/pass_output       .................................... passed 0.011s
run test_timeout/run_timeout failed, exit code: -1, exit error: wait process timeout
[100%]: test_timeout/run_timeout .................................... failed 1.003s

report of tests:
[ 11%]: test_10/compile_fail     .................................... passed 0.000s
[ 14%]: test_11/compile_pass     .................................... failed 0.001s
[ 28%]: test_1/args              .................................... passed 0.027s
[ 28%]: test_1/default           .................................... passed 0.025s
[ 28%]: test_1/fail_output       .................................... passed 0.024s
[ 28%]: test_1/pass_output       .................................... passed 0.022s
[ 28%]: test_13/stub_1           .................................... passed 0.019s
[ 28%]: test_14/stub_2           .................................... passed 0.018s
[ 28%]: test_15/stub_n           .................................... passed 0.016s
[ 28%]: test_2/args              .................................... passed 0.015s
[ 47%]: test_2/default           .................................... passed 0.049s
[ 47%]: test_2/fail_output       .................................... passed 0.047s
[ 47%]: test_2/pass_output       .................................... passed 0.038s
[ 47%]: test_3/args              .................................... passed 0.036s
[ 47%]: test_3/default           .................................... passed 0.035s
[ 47%]: test_3/fail_output       .................................... passed 0.030s
[ 47%]: test_3/pass_output       .................................... passed 0.017s
[ 47%]: test_4/args              .................................... passed 0.016s
[ 66%]: test_4/default           .................................... passed 0.035s
[ 66%]: test_4/fail_output       .................................... passed 0.032s
[ 66%]: test_4/pass_output       .................................... passed 0.023s
[ 66%]: test_5/args              .................................... passed 0.021s
[ 66%]: test_5/default           .................................... passed 0.019s
[ 66%]: test_5/fail_output       .................................... failed 0.018s
[ 66%]: test_5/pass_output       .................................... failed 0.016s
[ 66%]: test_6/args              .................................... passed 0.014s
[ 85%]: test_6/default           .................................... passed 0.036s
[ 85%]: test_6/fail_output       .................................... passed 0.034s
[ 85%]: test_6/pass_output       .................................... passed 0.026s
[ 85%]: test_7/args              .................................... failed 0.024s
[ 85%]: test_7/default           .................................... failed 0.022s
[ 85%]: test_7/fail_output       .................................... failed 0.020s
[ 85%]: test_7/pass_output       .................................... failed 0.017s
[ 85%]: test_8/args              .................................... passed 0.015s
[100%]: test_8/default           .................................... passed 0.030s
[100%]: test_8/fail_output       .................................... passed 0.028s
[100%]: test_8/pass_output       .................................... failed 0.019s
[100%]: test_9/args              .................................... passed 0.016s
[100%]: test_9/default           .................................... passed 0.015s
[100%]: test_9/fail_output       .................................... passed 0.013s
[100%]: test_9/pass_output       .................................... passed 0.011s
[100%]: test_timeout/run_timeout .................................... failed 1.003s
78% tests passed, 9 tests failed out of 42, spent 1.150s
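The pass_output / fail_output / compile_pass cases in the log above are presumably driven by output-matching and build-expectation rules attached to each target via `add_tests`. A minimal xmake.lua sketch of such a suite (target and file names mirror the log; the option names follow xmake's documented `add_tests` API, but treat the concrete patterns here as illustrative):

```lua
target("test_5")
    set_kind("binary")
    add_files("src/*.cpp")
    -- Pass only if the program exits with code 0 when run with these args.
    add_tests("args", {runargs = {"xmake"}})
    -- Fail if stdout matches this pattern
    -- (cf. "matched failed output: hello2 .*" in the log).
    add_tests("fail_output", {fail_outputs = "hello2 .*"})
    -- Fail unless stdout matches this output
    -- (cf. "not matched passed output: hello foo").
    add_tests("pass_output", {pass_outputs = "hello foo"})

target("test_11")
    set_kind("binary")
    add_files("src/compile_2.cpp")
    -- Expect this target to build successfully; the static_assert in
    -- compile_2.cpp makes the build fail, so the test is reported failed.
    add_tests("compile_pass", {build_should_pass = true})
```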
TOMO-CAT commented 2 months ago

...it does not seem useful.

Many tests produce a lot of stdout in the middle, so without printing the results at the end it is hard to scroll back up. As it stands, all you get at the end is a total count of passed tests.

In short, I am hoping for a bazel/blade-style report after all tests have run, showing exactly which tests failed; the exact format is not that important. For security reasons, some of our libraries have unit-test coverage requirements and therefore 300+ tests, so scrolling back is simply not feasible. Also, each failed test's log needs to be redirected to a directory.

That said, I think the progress percentage can be dropped since it is not useful; like bazel, just print the test names, and for failed tests print the path where the error log is stored.

waruqi commented 2 months ago

Then just write it to a file; that is what bazel does above. There is no need to dump the progress twice.

Echo normally as before, and after the run save only the failed tests to a file and print its path. Wouldn't that be enough?

TOMO-CAT commented 2 months ago

OK

That works. Also, xmake test logs are not output synchronously; tests that take a long time to run look stuck, whereas xmake run outputs synchronously.

waruqi commented 2 months ago

OK

That works. Also, xmake test logs are not output synchronously; tests that take a long time to run look stuck, whereas xmake run outputs synchronously.

Tests run in parallel by default to speed things up, and xmake run is also parallel now... unless you explicitly set xmake test -j1 / xmake run -j1.

TOMO-CAT commented 2 months ago

Tests run in parallel by default to speed things up, and xmake run is also parallel now... unless you explicitly set xmake test -j1 / xmake run -j1.

OK, understood.

TOMO-CAT commented 2 months ago

Unless you explicitly set xmake test -j1 / xmake run -j1

I tested xmake test -yvD -j1 locally, and the log is still only printed after the test finishes rather than in real time. Is the version I am using too old?

waruqi commented 2 months ago

? Everything is redirected, so how could it be real-time... It has always worked this way: run first, then echo the output.

TOMO-CAT commented 2 months ago

? Everything is redirected, so how could it be real-time... It has always worked this way: run first, then echo the output.

So xmake test -j1 cannot show output in real time the way xmake run does; a long-running test looks stuck and then suddenly dumps a huge block of output. Can't this be optimized?

waruqi commented 2 months ago

? Everything is redirected, so how could it be real-time... It has always worked this way: run first, then echo the output.

So xmake test -j1 cannot show output in real time the way xmake run does; a long-running test looks stuck and then suddenly dumps a huge block of output. Can't this be optimized?

That is by design, and it has nothing to do with performance. Test runs redirect everything and can only print the output after the run finishes. If output were real-time, it really would flood the screen, and afterwards we could not capture the stdout/stderr output or evaluate stdout/stderr-based test conditions.

It has nothing to do with optimization; it is simply a different use case from xmake run.

TOMO-CAT commented 2 months ago

Echo normally as before, and after the run save only the failed tests to a file and print its path. Wouldn't that be enough?

For libraries with hundreds of tests, it is a real pain to check each test's running time at the end, so the final report still has to keep the running times, and failed tests should point to the redirected log file; the progress percentage does not matter. Matching what bazel shows would be fine.

So now I am wondering what to print in the non-verbose (without -v) case; if it is the same as before, the running times get printed twice.

TOMO-CAT commented 2 months ago

For example, this library has 117 tests, and each one produces log output. If only the failed tests are printed at the end, it is hard for developers to scroll back and check whether the passing tests' running times are normal: (screenshot)

waruqi commented 2 months ago

If only the failed tests are printed at the end, it is hard for developers to scroll back and check whether the passing tests' running times are normal:

That statement is self-contradictory: if a test's running time is abnormal, it should be reported as failed, rather than passing normally and leaving you to scroll back through the times to judge whether anything is abnormal.

Tests can currently be configured with a timeout, so abnormal running times are detected and the test fails.
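A hedged sketch of that timeout configuration (the target name below is hypothetical; `add_tests` is xmake's test API, and the `timeout` option name and its millisecond unit are assumptions inferred from the ~1s `test_timeout/run_timeout` failure in the log above — check xmake's test documentation before relying on them):

```lua
-- Hypothetical target illustrating a per-test timeout.
target("gpu_infer_test")
    set_kind("binary")
    add_files("src/*.cpp")
    -- Kill the process and mark the test failed if it runs longer than
    -- 60s (timeout assumed to be in milliseconds; the log's
    -- test_timeout/run_timeout case fails at ~1.003s with a 1000ms limit).
    add_tests("latency", {timeout = 60000})
```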

TOMO-CAT commented 2 months ago

If only the failed tests are printed at the end, it is hard for developers to scroll back and check whether the passing tests' running times are normal:

That statement is self-contradictory: if a test's running time is abnormal, it should be reported as failed, rather than passing normally and leaving you to scroll back through the times to judge whether anything is abnormal.

Tests can currently be configured with a timeout, so abnormal running times are detected and the test fails.

That is the difference between a warning and an error. Many GPU-inference tests do configure a timeout, but running time is still something we watch; a test should not fail outright just because it takes slightly longer.

waruqi commented 2 months ago

Then let's do it the way bazel does in your opening post: output a log file, and print the progress once.

waruqi commented 2 months ago

I have improved it; try this patch: https://github.com/xmake-io/xmake/pull/5387

  1. Fix the progress display
  2. By default, do not output stdout/stderr/errors; show only the test results
  3. xmake test -v shows stdout/stderr/errors directly in the terminal
  4. xmake test -vD redirects all stdout/stderr/errors output to files and prints the file paths
TOMO-CAT commented 2 months ago

I have improved it; try this patch: #5387

  1. Fix the progress display
  2. By default, do not output stdout/stderr/errors; show only the test results
  3. xmake test -v shows stdout/stderr/errors directly in the terminal
  4. xmake test -vD redirects all stdout/stderr/errors output to files and prints the file paths

When is this expected to be merged into dev? We are about to pull the latest version.

waruqi commented 2 months ago

Merged.
