xmake-io / xmake

🔥 A cross-platform build utility based on Lua
https://xmake.io
Apache License 2.0

Add explicit support for testing: `xmake test` #3381

Closed paul-reilly closed 9 months ago

paul-reilly commented 1 year ago

Is your feature request related to a problem? Please describe.

This is related to target name clashes described in #3380 where multiple subdirectories have a tests target.

Describe the solution you'd like

xmake should have the facility to easily compile and execute tests:

xmake test [target dir or more specific target] [options]

Related to the target clashes described in #3380, when including subdirectories where the author was not aware that defining tests was going to be a thing in the future, we could have something like:

includes("external/**/xmake.lua", { test_targets = { "tests", "mytests" }})

... which would stop those targets being built as part of a non-test build.

cargo test is a well-regarded model, and since xmake already supports languages with integrated testing, the same facility could provide complete testing in C/C++.

Describe alternatives you've considered

No response

Additional context

No response

waruqi commented 1 year ago

xmake should have the facility to easily compile and execute tests:

why not use xmake run

xmake run target

or run a group of targets (tests)

target("a")
    set_group("test")

target("b")
    set_group("test")

target("c")
    set_group("test")
xmake run -g test

support running asan/tsan/ubsan builds

add_rules("mode.asan", "mode.tsan")
xmake f -m asan
xmake
xmake run -g test
paul-reilly commented 1 year ago

Thanks, I forgot about groups in xmake actually. That's good but wouldn't run cargo test or whatever on targets for other languages.

xmake test could be more universal and cleaner since it would be geared towards testing. If something is a test then:

Re asan, xmake config --mode=asan resets the toolchain at the moment. I switch to clang, change mode to asan and am back on gcc. We have to run xmake config --mode=asan --toolchain=clang to preserve the non-default toolchain. Little things like these can be handled automagically.

Anyway, it's just an idea. The "mytool test" feature is well received in modern language tooling, so having it in xmake would be a plus point.

waruqi commented 1 year ago

naming clashes in includes subdirectories are irrelevant since xmake could build them with temporary names

xmake test does not solve this problem; it is determined by the loading policy of the xmake.lua interpreter. When xmake loads xmake.lua, all targets with the same name are already merged.
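To illustrate the merging behavior described above (a hypothetical layout; all names are invented):

```lua
-- external/libA/xmake.lua
target("tests")
    set_kind("binary")
    add_files("tests/a/*.cpp")

-- external/libB/xmake.lua
target("tests")                -- same name: once both files are loaded,
    set_kind("binary")         -- this scope has already been merged into
    add_files("tests/b/*.cpp") -- the "tests" target declared above
```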

all project languages are supported/unified

xmake run -g test also supports them.

I didn't find much difference between xmake run -g test and xmake test.

Re asan, xmake config --mode=asan resets the toolchain at the moment. I switch to clang, change mode to asan and am back on gcc. We have to run xmake config --mode=asan --toolchain=clang to preserve the non-default toolchain. Little things like these can be handled automagically.

These are not generic requirements, and xmake test is not very good at wrapping them; they are more like user-specific special requirements that you can solve with a custom task, which is also very handy.

task("test_asan")
    on_run(function ()
        os.exec("xmake f -m asan")
        os.exec("xmake")
        os.exec("xmake run")
    end)
xmake test_asan
waruqi commented 1 year ago

I've considered xmake test before, but so far I haven't found much difference between it and xmake run -g test.

I won't consider it until I know that xmake test can do more than what xmake run -g test can do.

At least what I know now is still not enough.

becknik commented 1 year ago

I'd also be very glad to have a test command. I really think this would lower redundancy and maybe ease testing in a time where test-driven development should be the consensus.

I recently discovered this build tool (neat work btw), so I tried to write a template xmake.lua with a test task as you proposed in a comment above. If there is a way to do some of these things neatly in a normal xmake.lua file, I'd be really thankful if you might tell me :)

-- ...
target("<project_name>") -- TODO
    set_default(true)
    set_kind("binary")
    add_files("src/main.cppm", "src/base/*.cppm")

task("test")
    on_run( function()
        os.execute("xmake build -g test -j8")
        os.execute("xmake run -g test")
    end)

target("test_prep")
    set_group("test") -- Ideally should declare a special test group
    set_kind("phony") -- A test kind would be ideal for this
    -- Test dependencies:
    add_files("src/base/*.cppm") -- Should include the main project's files in every test-group target to reduce redundancy
    add_packages("boost/test") -- Should be implicitly included in other test-group targets to reduce redundancy

target("blackbox")
    set_group("test")
    set_kind("binary")
    add_files("src/test/blackbox.cppm")

The problem I came across is that there is no way I'm aware of to set "rules" for groups, which might be nice to reduce redundancy. This could, however, be eased by just adding a set_kind("test") which automatically includes the source files of the same test group (set_group()).

It might be good to have a progress bar for the progress of the test(-groups) execution or messages on test target passes.

I don't have much experience in this field of testing, but these are some things I think might be beneficial to this project. Running automated tests properly is a feature of all big build systems like CTest, Gradle, Maven, etc.

jingkaimori commented 10 months ago

xmake run -g test will fail at the first non-zero return, so not all test targets are executed. ctest in CMake will run all tests instead. Can xmake have similar behavior to ctest?

waruqi commented 10 months ago

xmake run -g test will fail at the first non-zero return, so not all test targets are executed. ctest in CMake will run all tests instead. Can xmake have similar behavior to ctest?

No way now; you can print error logs directly and return a zero code in the test target.

waruqi commented 9 months ago

I have supported it now. https://github.com/xmake-io/xmake/pull/4259

$ xmake update -s dev

Add test to target (no arguments)

target("test")
    add_tests("testname")

Add run arguments

target("test")
    add_tests("testname", {runargs = "arg1"})
    add_tests("testname", {runargs = {"arg1", "arg2"}})

It will use global target/set_runargs if no {runargs = } in add_tests

target("test")
    add_tests("testname")
    set_runargs("arg1", "arg2")

Set run directory

target("test")
    add_tests("testname", {rundir = os.projectdir()})

It will use global target/set_rundir if no {rundir = } in add_tests

target("test")
    add_tests("testname")
    set_rundir("$(projectdir)")

Set run environments

target("test")
    add_tests("testname", {runenvs = {LD_LIBRARY_PATH = "/lib"}})

It will use global target/add_runenvs if no {runenvs = } in add_tests

target("test")
    add_tests("testname")
    add_runenvs("LD_LIBRARY_PATH", "/lib")

Match passed outputs

It also supports lua pattern matching.

target("test")
    add_tests("testname", {pass_outputs = "hello"})
    add_tests("testname", {pass_outputs = "hello *"})
    add_tests("testname", {pass_outputs = {"hello", "hello *"}})
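A side note on the syntax (editorial, not from the thread): Lua patterns differ from regular expressions; character classes are introduced with % and quantifiers bind to the immediately preceding class or character. In plain Lua:

```lua
-- Lua patterns, not regular expressions:
-- %d+ matches one or more digits, '.' matches any character,
-- and '*' applies to the class right before it
print(("exit code 42"):match("code %d+")) -- prints "code 42"
print(("hello xmake"):match("hello .*"))  -- prints "hello xmake"
```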

Match failed outputs

It also supports lua pattern matching.

target("test")
    add_tests("testname", {fail_outputs = "hello"})
    add_tests("testname", {fail_outputs = "hello *"})
    add_tests("testname", {fail_outputs = {"hello", "hello *"}})

Trim output first before matching pass/fail outputs

target("test")
    add_tests("testname", {trim_output = true, pass_outputs = "foo", fail_outputs = "hello"})

Plain match output

we can also set {plain = true} to force plain-text matching.

It will disable lua pattern matching.

target("test")
    add_tests("testname", {plain = true, pass_outputs = "foo", fail_outputs = "hello"})

Set test group

target("test")
    add_tests("testname", {group = "foo"})

It will use global target/set_group if no {group = } in add_tests

target("test")
    add_tests("testname")
    set_group("foo")

then we can run the tests with the given group name, and it also supports lua pattern matching.

$ xmake test -g "foo"
$ xmake test -g "foo*"

Run the given tests

we need to pass the target and test names, e.g.

$ xmake test targetname/testname

And we can pass a Lua pattern to match some tests

$ xmake test */testname
$ xmake test targetname/*
$ xmake test targetname/foo*

Automatically build

It will be built automatically even if we set default = false

target("test")
    add_tests("testname")
    set_default(false)
$ xmake test
[ 25%]: cache compiling.release src/main.cpp
[ 50%]: linking.release test
running tests ...
[100%]: test/testname .................................... passed 6.000s

100% tests passed, 0 tests failed out of 1, spent 0.006s

Custom scripts

We can define before_test, on_test and after_test to customize the test script in rule/target.

target("test")
     on_test(function (target, opt)
        print(opt.name, opt.runenvs, opt.runargs, opt.pass_outputs)

        -- do test
        -- ...

        -- passed
        return true

        -- failed
        return false, errors
     end)

Parallel test

$ xmake test -jN

Examples

add_rules("mode.debug", "mode.release")

for _, file in ipairs(os.files("src/test_*.cpp")) do
    local name = path.basename(file)
    target(name)
        set_kind("binary")
        set_default(false)
        add_files("src/" .. name .. ".cpp")
        add_tests("default")
        add_tests("args", {runargs = {"foo", "bar"}})
        add_tests("pass_output", {trim_output = true, runargs = "foo", pass_outputs = "hello foo"})
        add_tests("fail_output", {fail_outputs = {"hello2 .*", "hello xmake"}})
end
ruki-2:test ruki$ xmake test
running tests ...
[  2%]: test_1/args        .................................... passed 7.000s
[  5%]: test_1/default     .................................... passed 5.000s
[  8%]: test_1/fail_output .................................... passed 5.000s
[ 11%]: test_1/pass_output .................................... passed 6.000s
[ 13%]: test_2/args        .................................... passed 7.000s
[ 16%]: test_2/default     .................................... passed 6.000s
[ 19%]: test_2/fail_output .................................... passed 6.000s
[ 22%]: test_2/pass_output .................................... passed 6.000s
[ 25%]: test_3/args        .................................... passed 7.000s
[ 27%]: test_3/default     .................................... passed 7.000s
[ 30%]: test_3/fail_output .................................... passed 6.000s
[ 33%]: test_3/pass_output .................................... passed 6.000s
[ 36%]: test_4/args        .................................... passed 6.000s
[ 38%]: test_4/default     .................................... passed 6.000s
[ 41%]: test_4/fail_output .................................... passed 5.000s
[ 44%]: test_4/pass_output .................................... passed 6.000s
[ 47%]: test_5/args        .................................... passed 5.000s
[ 50%]: test_5/default     .................................... passed 6.000s
[ 52%]: test_5/fail_output .................................... failed 6.000s
[ 55%]: test_5/pass_output .................................... failed 5.000s
[ 58%]: test_6/args        .................................... passed 7.000s
[ 61%]: test_6/default     .................................... passed 6.000s
[ 63%]: test_6/fail_output .................................... passed 6.000s
[ 66%]: test_6/pass_output .................................... passed 6.000s
[ 69%]: test_7/args        .................................... failed 6.000s
[ 72%]: test_7/default     .................................... failed 7.000s
[ 75%]: test_7/fail_output .................................... failed 6.000s
[ 77%]: test_7/pass_output .................................... failed 5.000s
[ 80%]: test_8/args        .................................... passed 7.000s
[ 83%]: test_8/default     .................................... passed 6.000s
[ 86%]: test_8/fail_output .................................... passed 6.000s
[ 88%]: test_8/pass_output .................................... failed 5.000s
[ 91%]: test_9/args        .................................... passed 6.000s
[ 94%]: test_9/default     .................................... passed 6.000s
[ 97%]: test_9/fail_output .................................... passed 6.000s
[100%]: test_9/pass_output .................................... passed 6.000s

80% tests passed, 7 tests failed out of 36, spent 0.242s

Verbose/Diagnosis output

$ xmake test -vD
waruqi commented 9 months ago

Return exit code (-1) on failure (default)

It will signal a test failure to CI or other tools. If we want to suppress it, we need to set

set_policy("test.return_zero_on_failure", true)

Stop testing on the first failure

set_policy("test.stop_on_first_failure", true)
mikezackles commented 9 months ago

It might be nice/ergonomic if there were also a built-in way to test that a target fails to compile. This would allow for testing static_asserts, etc.

paul-reilly commented 9 months ago

One thing with this implementation is that it's for separate test targets, which is good.

Cargo/Rust does inline tests and the target is rebuilt when testing. Doctest in C++ supports inline tests too, where in some cases we'd currently have to create duplicate targets. Ideally the target would be rebuilt with a define, or possibly with an extra cpp stub file containing a couple of defines.

target("foo")
    set_kind("binary")
    add_files("src/**.cpp")
    add_tests("foo_test", { defines = "TESTING", files = "tests/doctest_stub.cpp" })
    add_packages("doctest")

An issue with this would be not wanting to clobber the non-testing build with the test build, so building a separate target in the background would be ideal.

Raildex commented 8 months ago

As it is now, we need to generate a new target for testing. This can be cumbersome when you have a lot of dependencies.

Having an add_tests function in target scope that just automagically generates an executable test runner (similar to what gtest does) would be great and extremely helpful.

target("MyPrimaryTarget")
set_kind("binary")
add_files("src/module/myFile.cpp")
add_test_suite("My first test suite", "gtest", "src/module1.cpp", "src/module/myFile.cpp", "src/test/test1.cpp")

on_compile_test_suite_file(function (testsuite, testsuite_name, framework, file) -- allow customizing file compilation when generating a test suite runner
  file:add_defines("TESTING_DEFINED")
end)

on_discover_tests(function (testsuite, testsuite_name, framework, file) -- called for each file in "add_test_suite"
  if(framework == "gtest") then
    testsuite:add_test(gtest_discover_tests(file)) -- specific test discover function for a particular framework?
  end
end)

target_end()

I would expect something like this. This xmake script should generate two targets: MyPrimaryTarget with my production-ready executable, and My first test suite that is exactly the same (includedirs, dependencies, etc.) as MyPrimaryTarget but only contains test code.

waruqi commented 8 months ago

It might be nice/ergonomic if there were also a built-in way to test that a target fails to compile. This would allow for testing static_asserts, etc.

I have supported it. https://github.com/xmake-io/xmake/pull/4334

target("test_10")
    set_kind("binary")
    set_default(false)
    add_files("src/compile.cpp")
    add_tests("compile", {build_only = true})
mikezackles commented 8 months ago

I have supported it. https://github.com/xmake-io/xmake/pull/4334

Thank you!

If I'm reading correctly, this behavior is backwards compared to what I was requesting/suggesting. An example of this type of test would be to verify that a static_assert is triggered by a certain piece of code. Since the point is to verify that the code does not compile, it needs to happen at the build system level.

So the desired behavior would be more like add_tests("compile", {should_not_compile = true}), and then the test you've written should succeed because the compilation fails.

Thanks again!

waruqi commented 8 months ago

I have supported it. #4334

Thank you!

If I'm reading correctly, this behavior is backwards compared to what I was requesting/suggesting. An example of this type of test would be to verify that a static_assert is triggered by a certain piece of code. Since the point is to verify that the code does not compile, it needs to happen at the build system level.

So the desired behavior would be more like add_tests("compile", {should_not_compile = true}), and then the test you've written should succeed because the compilation fails.

Thanks again!

? I don't understand it.

mikezackles commented 8 months ago

? I don't understand it.

#4336 might give a little bit of context.

As far as I know, a separate type of test to verify build success by itself isn't that useful as any regular test needs to build to succeed anyway.

But consider this code:

template <typename T>
bool foo(T val) {
  if constexpr (std::is_same_v<T, int>) {
    printf("int!\n");
  } else if constexpr (std::is_same_v<T, float>) {
    printf("float!\n");
  } else {
    static_assert(false, "unsupported type");
  }
}

int main(int, char**) {
  foo("BAD");
  return 0;
}

This code should fail to compile.

I admit this isn't the greatest example, but it can be useful to test that specific conditions like this trigger compilation failures. That way you know that, for example, if a user writes code that is not supported by your library, their code will fail to compile rather than compiling and resulting in broken behavior. This is something that needs explicit support from the build/test system.

Sorry, hope that makes it a bit clearer.

waruqi commented 8 months ago

? I don't understand it.

#4336 might give a little bit of context.

As far as I know, a separate type of test to verify build success by itself isn't that useful as any regular test needs to build to succeed anyway.

But consider this code:

template <typename T>
bool foo(T val) {
  if constexpr (std::is_same_v<T, int>) {
    printf("int!\n");
  } else if constexpr (std::is_same_v<T, float>) {
    printf("float!\n");
  } else {
    static_assert(false, "unsupported type");
  }
}

int main(int, char**) {
  foo("BAD");
  return 0;
}

This code should fail to compile.

I admit this isn't the greatest example, but it can be useful to test that specific conditions like this trigger compilation failures. That way you know that, for example, if a user writes code that is not supported by your library, their code will fail to compile rather than compiling and resulting in broken behavior. This is something that needs explicit support from the build/test system.

Sorry, hope that makes it a bit clearer.

ok, I see

SirLynix commented 8 months ago

wouldn't it be more intuitive to keep build_only but to specify that a test is expected to fail?

test("should_fail")
    set_expectedresult("failure")

this would allow setting a build_only/runtime test to fail

waruqi commented 8 months ago

? I don't understand it.

#4336 might give a little bit of context.

As far as I know, a separate type of test to verify build success by itself isn't that useful as any regular test needs to build to succeed anyway. But consider this code:

template <typename T>
bool foo(T val) {
  if constexpr (std::is_same_v<T, int>) {
    printf("int!\n");
  } else if constexpr (std::is_same_v<T, float>) {
    printf("float!\n");
  } else {
    static_assert(false, "unsupported type");
  }
}

int main(int, char**) {
  foo("BAD");
  return 0;
}

This code should fail to compile. I admit this isn't the greatest example, but it can be useful to test that specific conditions like this trigger compilation failures. That way you know that, for example, if a user writes code that is not supported by your library, their code will fail to compile rather than compiling and resulting in broken behavior. This is something that needs explicit support from the build/test system. Sorry, hope that makes it a bit clearer.

ok, I see

I have improved it.


target("test_10")
    set_kind("binary")
    set_default(false)
    add_files("src/compile.cpp")
    add_tests("compile_fail", {build_should_fail = true})

target("test_11")
    set_kind("binary")
    set_default(false)
    add_files("src/compile.cpp")
    add_tests("compile_pass", {build_should_pass = true})
waruqi commented 8 months ago

wouldn't it be more intuitive to keep build_only but to specify that a test is expected to fail?

test("should_fail")
    set_expectedresult("failure")

this would allow setting a build_only/runtime test to fail

This approach is more complex to implement and less maintainable. There is no need to add a new scope and more APIs just for an xmake test action.

waruqi commented 8 months ago

One thing with this implementation is that it's for separate test targets, which is good.

Cargo/Rust does inline tests and the target is rebuilt when testing. Doctest in C++ supports inline tests too, where in some cases we'd currently have to create duplicate targets. Ideally the target would be rebuilt with a define, or possibly with an extra cpp stub file containing a couple of defines.

target("foo")
    set_kind("binary")
    add_files("src/**.cpp")
    add_tests("foo_test", { defines = "TESTING", files = "tests/doctest_stub.cpp" })
    add_packages("doctest")

An issue with this would be not wanting to clobber the non-testing build with the test build, so building a separate target in the background would be ideal.

As it is now, we need to generate a new target for testing. This can be cumbersome when you have a lot of dependencies.

Having an add_tests function in target scope that just automagically generates an executable test runner (similar to what gtest does) would be great and extremely helpful.

target("MyPrimaryTarget")
set_kind("binary")
add_files("src/module/myFile.cpp")
add_test_suite("My first test suite", "gtest", "src/module1.cpp", "src/module/myFile.cpp", "src/test/test1.cpp")

on_compile_test_suite_file(function (testsuite, testsuite_name, framework, file) -- allow customizing file compilation when generating a test suite runner
  file:add_defines("TESTING_DEFINED")
end)

on_discover_tests(function (testsuite, testsuite_name, framework, file) -- called for each file in "add_test_suite"
  if(framework == "gtest") then
    testsuite:add_test(gtest_discover_tests(file)) -- specific test discover function for a particular framework?
  end
end)

target_end()

I would expect something like this. This xmake script should generate two targets: MyPrimaryTarget with my production-ready executable, and My first test suite that is exactly the same (includedirs, dependencies, etc.) as MyPrimaryTarget but only contains test code.

I have supported it.


target("test_13")
    set_kind("binary")
    set_default(false)
    add_files("src/test_1.cpp")
    add_tests("stub_1", {files = "tests/stub_1.cpp", defines = "STUB_1"})

target("test_14")
    set_kind("binary")
    set_default(false)
    add_files("src/test_2.cpp")
    add_tests("stub_2", {files = "tests/stub_2.cpp", defines = "STUB_2"})

target("test_15")
    set_kind("binary")
    set_default(false)
    add_files("src/test_1.cpp")
    add_tests("stub_n", {files = "tests/stub_n*.cpp", defines = "STUB_N"})

see https://github.com/xmake-io/xmake/pull/4343

paul-reilly commented 8 months ago

Thanks, that interface is perfect! One thing though: if there's just a static or shared library with inline tests, then xmake reports:

➜  xmake test
nothing to test

I would expect the add_tests to create a binary:

-- just now, add_tests does not create a binary with this:
target("foo")
    set_kind("shared")
    add_files("src/foo.cpp")
    add_tests("foo_test", { files = "tests/doctest_stub.cpp" })
    add_packages("doctest")

-- when it should do something like this with the library target:
target("foo_test")
    set_kind("binary")
    add_files("src/foo.cpp")
    add_files("tests/doctest_stub.cpp")
    add_packages("doctest")
waruqi commented 8 months ago

Thanks, that interface is perfect! One thing though: if there's just a static or shared library with inline tests, then xmake reports:

➜  xmake test
nothing to test

I would expect the add_tests to create a binary:

-- just now, add_tests does not create a binary with this:
target("foo")
    set_kind("shared")
    add_files("src/foo.cpp")
    add_tests("foo_test", { files = "tests/doctest_stub.cpp" })
    add_packages("doctest")

-- when it should do something like this with the library target:
target("foo_test")
    set_kind("binary")
    add_files("src/foo.cpp")
    add_files("tests/doctest_stub.cpp")
    add_packages("doctest")

we always need to provide a main entry, so you can add a binary target to test it with set_default(false)

It will be completely isolated from the foo shared target, which is much cleaner. Configuring doctest into the foo shared target may affect the compilation of foo globally, such as some macros, header search paths and so on.

target("foo")
    set_kind("shared")
    add_files("src/foo.cpp")

target("foo_test")
    set_kind("binary")
    set_default(false)
    add_files("src/main.cpp")
    add_deps("foo")
    add_tests("foo_test1", { files = "tests/doctest_stub1.cpp" })
    add_tests("foo_test2", { files = "tests/doctest_stub2.cpp" })
    add_packages("doctest")
paul-reilly commented 8 months ago

Doctest provides a main entry point when the macro DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN is used, so in the example there the stub file has the entry point.

With DOCTEST_CONFIG_DISABLE everything is removed, leaving a clean target:

https://github.com/doctest/doctest/blob/master/doc/markdown/configuration.md#doctest_config_disable

So at the moment it's very close to being perfect; it's just not generating a binary from a library target when building the target specifically to run inline tests. I don't know if that could be automagic or if a has_main parameter would need to be added in case of clashes with other use cases.

waruqi commented 8 months ago

Doctest provides a main entry point when the macro DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN is used, so in the example there the stub file has the entry point.

With DOCTEST_CONFIG_DISABLE everything is removed, leaving a clean target:

https://github.com/doctest/doctest/blob/master/doc/markdown/configuration.md#doctest_config_disable

So at the moment it's very close to being perfect; it's just not generating a binary from a library target when building the target specifically to run inline tests. I don't know if that could be automagic or if a has_main parameter would need to be added in case of clashes with other use cases.

I recommend using the above for dynamic library testing; it's generic and the cleanest.

Even though doctest provides main, we can configure it in a separate binary target.

And if we add the doctest package to the dynamic library target, it can't be completely clean. It may globally introduce includedirs, macros and flags that interfere with the compilation of other code. Even if doctest doesn't have these, there's no guarantee that other test packages won't.

The introduction of includedirs may also cause some header search conflicts.
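A sketch of the leakage concern (hypothetical; the {public = true} flag is used here only to make the propagation explicit):

```lua
target("foo")
    set_kind("shared")
    add_files("src/foo.cpp")
    -- the package's includedirs/defines now apply to every build of foo,
    -- test or not; with {public = true} they would even propagate to
    -- targets that depend on foo
    add_packages("doctest", {public = true})
```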

paul-reilly commented 8 months ago

That doesn't work though, because the tests are inline, in foo.cpp. The stub file is a method of implementing an entry point with doctest... the files added in add_tests don't have to have tests in them in this scenario.

Here's another example, that doesn't use a stub file:

target("foo")
    set_kind("shared")
    add_files("src/foo.cpp") --<-- all tests are in here
    add_includedirs("include", { public = true })
    add_includedirs("bar")
    add_defines("DOCTEST_CONFIG_DISABLE")
    add_packages("doctest", "fmt")
    add_tests("foo_test", { undefines = "DOCTEST_CONFIG_DISABLE" })  --<-- now tests are not stripped

We can't link to that because all the tests are stripped out and there are no separate test files. (undefines would be nice to have, to disable the previous definition in the build instead of in source code.)

This is all for more automagic inline testing like we get in Zig, Rust, Nim, D or whatever. We can always create separate targets for situations where it's not appropriate.

Raildex commented 8 months ago

Am I misunderstanding this whole test thing? I can't compile a test with a main() because my main target already contains one.

error: run_test.cpp.obj : error LNK2005: main already defined in main.cpp.obj
build\\windows\\x64\\debug\\Runtime_run testing.exe : fatal error LNK1169: one or more multiply defined symbols found
target("Runtime")
    set_languages("c++latest")
    set_warnings("all", "error")
    set_kind("binary")
    add_files("src/main.cpp", "src/common/*.cpp", "src/gameflow/*.cpp")
    add_tests("run testing", {files = "test/run_test.cpp"})
target_end()

I'd expect add_tests to be similar to add_files, so the tests can link whatever they need and get their own main()

waruqi commented 8 months ago

Am I misunderstanding this whole test thing? I can't compile a test with a main() because my main target already contains one.

error: run_test.cpp.obj : error LNK2005: main already defined in main.cpp.obj
build\\windows\\x64\\debug\\Runtime_run testing.exe : fatal error LNK1169: one or more multiply defined symbols found
target("Runtime")
    set_languages("c++latest")
    set_warnings("all", "error")
    set_kind("binary")
    add_files("src/main.cpp", "src/common/*.cpp", "src/gameflow/*.cpp")
    add_tests("run testing", {files = "test/run_test.cpp"})
target_end()

I'd expect add_tests to be similar to add_files, so the tests can link whatever they need and get their own main()

I cannot support it: xmake can't reliably detect and remove source files with main from add_files. I wouldn't do that either.

you should define a macro to switch it.

target("Runtime")
    set_languages("c++latest")
    set_warnings("all", "error")
    set_kind("binary")
    add_files("src/main.cpp", "src/common/*.cpp", "src/gameflow/*.cpp")
    add_tests("run testing", {defines = "ENABLE_TEST", files = "test/run_test.cpp"})

src/main.cpp

int main (int argc, char** argv)
{
    call_tests();
}

test/run_test.cpp

void call_tests()
{
#ifdef ENABLE_TEST
    // do some tests
#endif
}
Raildex commented 8 months ago

Then there is no benefit in using xmake test, since I need to call the tests from my actual main. I might as well just compile the test files directly with add_files() or create a separate target. The only benefit I get is a nice little summary in the terminal of failed/passed tests.

waruqi commented 8 months ago

Then there is no benefit in using xmake test, since I need to call the tests from my actual main. I might as well just compile the test files directly with add_files() or create a separate target. The only benefit I get is a nice little summary in the terminal of failed/passed tests.

try this patch

https://github.com/xmake-io/xmake/pull/4359

add_rules("mode.debug", "mode.release")

add_requires("doctest")

target("doctest")
    set_kind("binary")
    add_files("src/*.cpp")
    for _, testfile in ipairs(os.files("tests/*.cpp")) do
        add_tests(path.basename(testfile), {
            files = testfile,
            remove_files = "src/main.cpp",
            languages = "c++11",
            packages = "doctest",
            defines = "DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN"})
    end
ruki-2:doctest ruki$ xmake test
running tests ...
[ 50%]: doctest/test_1 .................................... failed 0.009s
[100%]: doctest/test_2 .................................... passed 0.009s

50% tests passed, 1 tests failed out of 2, spent 0.019s
ruki-2:doctest ruki$ xmake test -v
running tests ...
[ 50%]: doctest/test_1 .................................... failed 0.026s
[doctest] doctest version is "2.4.11"
[doctest] run with "--help" for options
===============================================================================
tests/test_1.cpp:7:
TEST CASE:  testing the factorial function

tests/test_1.cpp:8: ERROR: CHECK( factorial(1) == 10 ) is NOT correct!
  values: CHECK( 1 == 10 )

===============================================================================
[doctest] test cases: 1 | 0 passed | 1 failed | 0 skipped
[doctest] assertions: 4 | 3 passed | 1 failed |
[doctest] Status: FAILURE!

run failed, exit code: 1
[100%]: doctest/test_2 .................................... passed 0.010s

50% tests passed, 1 tests failed out of 2, spent 0.038s
waruqi commented 8 months ago

Thanks, that interface is perfect! One thing: if there's just a static or shared library with inline tests, then xmake reports:

➜  xmake test
nothing to test

I would expect `add_tests` to create a binary:

-- right now, add_tests does not create a binary with this:
target("foo")
    set_kind("shared")
    add_files("src/foo.cpp")
    add_tests("foo_test", { files = "tests/doctest_stub.cpp" })
    add_packages("doctest")

-- when it should do something like this with the library target:
target("foo_test")
    set_kind("binary")
    add_files("src/foo.cpp")
    add_files("tests/doctest_stub.cpp")
    add_packages("doctest")

try


target("doctest_shared")
    set_kind("shared")
    add_files("src/foo.cpp")
    for _, testfile in ipairs(os.files("tests/*.cpp")) do
        add_tests(path.basename(testfile), {
            kind = "binary",
            files = testfile,
            languages = "c++11",
            packages = "doctest",
            defines = "DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN"})
    end
Raildex commented 8 months ago

@waruqi that is better! But wouldn't it make more sense to make `files` exclusive, not inclusive? i.e. `add_tests("Foo", {files = {"tests/foo.cpp", "tests/bar.cpp"}})`. In this case only foo.cpp and bar.cpp are compiled and linked together when using `xmake test`, where one of these files contains a `main()`. I think it would make things more decoupled, because you can be sure that the test runner only compiles what it needs.

waruqi commented 8 months ago

@waruqi that is better! But wouldn't it make more sense to make `files` exclusive, not inclusive? i.e. `add_tests("Foo", {files = {"tests/foo.cpp", "tests/bar.cpp"}})`. In this case only foo.cpp and bar.cpp are compiled and linked together when using `xmake test`, where one of these files contains a `main()`. I think it would make things more decoupled, because you can be sure that the test runner only compiles what it needs.

No, why not use a target to do it? Also, foo.cpp may call functions from other cpp files.

Raildex commented 8 months ago

Because a separate target doesn't get run with `xmake test` :)

That's why I suggest specifying the files the tests need explicitly.

Currently, when my test needs a single file, everything else gets compiled as well.

But to be honest, I can live with excluding the file that contains `main()` :)
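For reference, the exclusion pattern discussed here is the one from the patch example earlier in the thread: `remove_files` inside `add_tests` drops the app's entry point from the test build. A condensed sketch (target and file names are illustrative):

```lua
target("app")
    set_kind("binary")
    add_files("src/*.cpp")
    add_packages("doctest")
    add_tests("foo", {
        files = "tests/test_foo.cpp",   -- extra test source (illustrative name)
        remove_files = "src/main.cpp",  -- exclude the app's own main()
        defines = "DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN"})
```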

paul-reilly commented 8 months ago

Thanks, this is getting close to being able to do everything.

It would be nice if the `files` parameter for `add_tests` were more flexible:

Maybe have `add_files`, `files` and `remove_files` for 100% logical and semantic consistency.

Also, one more option:

I notice that we can't use the `-r` command line argument with `xmake test`, so we can't:

xmake test -rv
waruqi commented 8 months ago

Because a separate target doesn't get run with `xmake test` :)

separate target + add_files + add_tests (without files)
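A minimal sketch of what that could look like (names and layout are illustrative, not from this thread; `set_default(false)` keeps the test target out of a normal build):

```lua
-- separate test target with its own add_files, plus add_tests without a
-- `files` option so `xmake test` runs the whole target as one test
target("foo_tests")
    set_kind("binary")
    set_default(false)                       -- skip in a plain `xmake` build
    add_files("src/foo.cpp", "tests/*.cpp")  -- illustrative layout
    add_packages("doctest")
    add_tests("default")                     -- no files: run this binary as the test
```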

waruqi commented 8 months ago

I notice that we can't use the `-r` command line argument with `xmake test`, so we can't:

done

xmake test -rv
waruqi commented 8 months ago

`undefines`

`undefines` will add `-Uxxx`; it will not undefine previously defined macros (`-Dxxx -Uxxx`).

or `remove_defines`: specifically to remove `DOCTEST_CONFIG_DISABLE`

The `remove_xxx` interface would require a lot of refactoring on my side, so I'm not considering implementing it in the current version for now.

Raildex commented 8 months ago

Because a separate target doesn't get run with `xmake test` :)

separate target + add_files + add_tests (without files)

I have lots of dependencies. How do I keep them in sync?

paul-reilly commented 8 months ago

`undefines` will add `-Uxxx`; it will not undefine previously defined macros (`-Dxxx -Uxxx`).

I have tested on Compiler Explorer: https://godbolt.org/z/443qovc3e

`-Dxxx -Uxxx` successfully undefines the previous definition. There are no warnings with gcc or clang with `-Wall -Wextra`; MSVC generates a D9025 warning about removing a previously defined macro.

Raildex commented 8 months ago

I am not sure how much of that is possible, but: I am using the xmake vscode extension. Is there a way to redirect the output of the test executables to the terminal? Currently I only see how many tests failed, but not which tests actually failed, when using a testing framework.

When running the executable manually, I receive the proper testing output:

waruqi commented 8 months ago

I am not sure how much of that is possible, but: I am using the xmake vscode extension. Is there a way to redirect the output of the test executables to the terminal? Currently I only see how many tests failed, but not which tests actually failed, when using a testing framework.

When running the executable manually, I receive the proper testing output:

xmake test -v

https://github.com/xmake-io/xmake/issues/3381#issuecomment-1792507498

paul-reilly commented 8 months ago

xmake test -v

#3381 (comment)

With `xmake -v` I do not get the output from doctest:

➜  dtest xmake test -rv
[ 20%]: cache compiling.release src/foo.cpp
/usr/bin/gcc -c -m64 -O3 -isystem /home/dev/.xmake/packages/d/doctest/2.4.11/0a52cbf458ea46be8a84f4e85f2e0a21/include -DNDEBUG -DDOCTEST_CONFIG_IMPLEMENT_WITH_MAIN -o build/.objs/dtest_dtest/linux/x86_64/release/src/foo.cpp.o src/foo.cpp
[ 40%]: cache compiling.release src/bar.cpp
/usr/bin/gcc -c -m64 -O3 -isystem /home/dev/.xmake/packages/d/doctest/2.4.11/0a52cbf458ea46be8a84f4e85f2e0a21/include -DNDEBUG -o build/.objs/dtest_dtest/linux/x86_64/release/src/bar.cpp.o src/bar.cpp
[ 60%]: linking.release dtest_dtest
/usr/bin/g++ -o build/linux/x86_64/release/dtest_dtest build/.objs/dtest_dtest/linux/x86_64/release/src/foo.cpp.o build/.objs/dtest_dtest/linux/x86_64/release/src/bar.cpp.o -m64 -s -lfmt
running tests ...
[100%]: dtest/dtest .................................... passed 0.005s

100% tests passed, 0 tests failed out of 1, spent 0.005s
➜  dtest ./build/linux/x86_64/release/dtest_dtest
[doctest] doctest version is "2.4.11"
[doctest] run with "--help" for options
hello===============================================================================
[doctest] test cases: 3 | 3 passed | 0 failed | 0 skipped
[doctest] assertions: 3 | 3 passed | 0 failed |
[doctest] Status: SUCCESS!
➜  dtest xmake test -v 
running tests ...
[100%]: dtest/dtest .................................... passed 0.003s

100% tests passed, 0 tests failed out of 1, spent 0.004s
➜  dtest 
waruqi commented 8 months ago

It works for me.

ruki-2:doctest ruki$ xmake test -v
running tests ...
[ 25%]: doctest/test_1        .................................... failed 0.007s
[doctest] doctest version is "2.4.11"
[doctest] run with "--help" for options
===============================================================================
tests/test_1.cpp:7:
TEST CASE:  testing the factorial function

tests/test_1.cpp:8: ERROR: CHECK( factorial(1) == 10 ) is NOT correct!
  values: CHECK( 1 == 10 )

===============================================================================
[doctest] test cases: 1 | 0 passed | 1 failed | 0 skipped
[doctest] assertions: 4 | 3 passed | 1 failed |
[doctest] Status: FAILURE!

run failed, exit code: 1
[ 50%]: doctest/test_2        .................................... passed 0.006s
[ 75%]: doctest_shared/test_1 .................................... failed 0.007s
[doctest] doctest version is "2.4.11"
[doctest] run with "--help" for options
===============================================================================
tests/test_1.cpp:7:
TEST CASE:  testing the factorial function

tests/test_1.cpp:8: ERROR: CHECK( factorial(1) == 10 ) is NOT correct!
  values: CHECK( 1 == 10 )

===============================================================================
[doctest] test cases: 1 | 0 passed | 1 failed | 0 skipped
[doctest] assertions: 4 | 3 passed | 1 failed |
[doctest] Status: FAILURE!

run failed, exit code: 1
[100%]: doctest_shared/test_2 .................................... passed 0.006s

50% tests passed, 2 tests failed out of 4, spent 0.029s
waruqi commented 8 months ago

With `xmake -v` I do not get the output from doctest:

This is doctest's own output:

➜  dtest ./build/linux/x86_64/release/dtest_dtest
[doctest] doctest version is "2.4.11"
[doctest] run with "--help" for options
hello===============================================================================
[doctest] test cases: 3 | 3 passed | 0 failed | 0 skipped
[doctest] assertions: 3 | 3 passed | 0 failed |
[doctest] Status: SUCCESS!

paul-reilly commented 8 months ago

It works for me.

Hmm, could you show me the output of `xmake test -rv`?

waruqi commented 8 months ago
ruki-2:doctest ruki$ xmake test -rv
[ 20%]: cache compiling.release src/foo.cpp
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang -c
-Qunused-arguments -target x86_64-apple-macos14.0 -isysroot /Applications/Xcode.app/Contents/Dev
eloper/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.0.sdk -fvisibility=hidden -fvisibility-
inlines-hidden -O3 -isystem /Users/ruki/.xmake/packages/d/doctest/2.4.11/86c94a195e634ffa9c445c5
ce95a0c8f/include -DNDEBUG -o build/.objs/doctest_test_2/macosx/x86_64/release/src/foo.cpp.o src
/foo.cpp
checking for flags (-MMD -MF) ... ok
checking for flags (-fdiagnostics-color=always) ... ok
[ 25%]: cache compiling.release tests/test_2.cpp
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang -c
-Qunused-arguments -target x86_64-apple-macos14.0 -isysroot /Applications/Xcode.app/Contents/Dev
eloper/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.0.sdk -fvisibility=hidden -fvisibility-
inlines-hidden -O3 -isystem /Users/ruki/.xmake/packages/d/doctest/2.4.11/86c94a195e634ffa9c445c5
ce95a0c8f/include -DNDEBUG -std=c++11 -DDOCTEST_CONFIG_IMPLEMENT_WITH_MAIN -o build/.objs/doctes
t_test_2/macosx/x86_64/release/tests/test_2.cpp.o tests/test_2.cpp
[ 30%]: cache compiling.release src/foo.cpp
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang -c
-Qunused-arguments -target x86_64-apple-macos14.0 -isysroot /Applications/Xcode.app/Contents/Dev
eloper/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.0.sdk -fvisibility=hidden -fvisibility-
inlines-hidden -O3 -isystem /Users/ruki/.xmake/packages/d/doctest/2.4.11/86c94a195e634ffa9c445c5
ce95a0c8f/include -DNDEBUG -o build/.objs/doctest_test_1/macosx/x86_64/release/src/foo.cpp.o src
/foo.cpp
[ 35%]: cache compiling.release tests/test_1.cpp
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang -c
-Qunused-arguments -target x86_64-apple-macos14.0 -isysroot /Applications/Xcode.app/Contents/Dev
eloper/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.0.sdk -fvisibility=hidden -fvisibility-
inlines-hidden -O3 -isystem /Users/ruki/.xmake/packages/d/doctest/2.4.11/86c94a195e634ffa9c445c5
ce95a0c8f/include -DNDEBUG -std=c++11 -DDOCTEST_CONFIG_IMPLEMENT_WITH_MAIN -o build/.objs/doctes
t_test_1/macosx/x86_64/release/tests/test_1.cpp.o tests/test_1.cpp
[ 40%]: cache compiling.release src/foo.cpp
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang -c
-Qunused-arguments -target x86_64-apple-macos14.0 -isysroot /Applications/Xcode.app/Contents/Dev
eloper/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.0.sdk -O3 -isystem /Users/ruki/.xmake/p
ackages/d/doctest/2.4.11/86c94a195e634ffa9c445c5ce95a0c8f/include -DNDEBUG -o build/.objs/doctes
t_shared_test_1/macosx/x86_64/release/src/foo.cpp.o src/foo.cpp
[ 45%]: cache compiling.release tests/test_1.cpp
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang -c
-Qunused-arguments -target x86_64-apple-macos14.0 -isysroot /Applications/Xcode.app/Contents/Dev
eloper/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.0.sdk -O3 -isystem /Users/ruki/.xmake/p
ackages/d/doctest/2.4.11/86c94a195e634ffa9c445c5ce95a0c8f/include -DNDEBUG -std=c++11 -DDOCTEST_
CONFIG_IMPLEMENT_WITH_MAIN -o build/.objs/doctest_shared_test_1/macosx/x86_64/release/tests/test
_1.cpp.o tests/test_1.cpp
[ 50%]: cache compiling.release src/foo.cpp
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang -c
-Qunused-arguments -target x86_64-apple-macos14.0 -isysroot /Applications/Xcode.app/Contents/Dev
eloper/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.0.sdk -O3 -isystem /Users/ruki/.xmake/p
ackages/d/doctest/2.4.11/86c94a195e634ffa9c445c5ce95a0c8f/include -DNDEBUG -o build/.objs/doctes
t_shared_test_2/macosx/x86_64/release/src/foo.cpp.o src/foo.cpp
[ 55%]: cache compiling.release tests/test_2.cpp
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang -c
-Qunused-arguments -target x86_64-apple-macos14.0 -isysroot /Applications/Xcode.app/Contents/Dev
eloper/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.0.sdk -O3 -isystem /Users/ruki/.xmake/p
ackages/d/doctest/2.4.11/86c94a195e634ffa9c445c5ce95a0c8f/include -DNDEBUG -std=c++11 -DDOCTEST_
CONFIG_IMPLEMENT_WITH_MAIN -o build/.objs/doctest_shared_test_2/macosx/x86_64/release/tests/test
_2.cpp.o tests/test_2.cpp
[ 60%]: linking.release doctest_test_2
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ -
o build/macosx/x86_64/release/doctest_test_2 build/.objs/doctest_test_2/macosx/x86_64/release/sr
c/foo.cpp.o build/.objs/doctest_test_2/macosx/x86_64/release/tests/test_2.cpp.o -target x86_64-a
pple-macos14.0 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/De
veloper/SDKs/MacOSX14.0.sdk -stdlib=libc++ -lz -Wl,-x -Wl,-dead_strip
[ 65%]: linking.release doctest_test_1
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ -
o build/macosx/x86_64/release/doctest_test_1 build/.objs/doctest_test_1/macosx/x86_64/release/sr
c/foo.cpp.o build/.objs/doctest_test_1/macosx/x86_64/release/tests/test_1.cpp.o -target x86_64-a
pple-macos14.0 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/De
veloper/SDKs/MacOSX14.0.sdk -stdlib=libc++ -lz -Wl,-x -Wl,-dead_strip
[ 70%]: linking.release doctest_shared_test_1
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ -
o build/macosx/x86_64/release/doctest_shared_test_1 build/.objs/doctest_shared_test_1/macosx/x86
_64/release/src/foo.cpp.o build/.objs/doctest_shared_test_1/macosx/x86_64/release/tests/test_1.c
pp.o -target x86_64-apple-macos14.0 -isysroot /Applications/Xcode.app/Contents/Developer/Platfor
ms/MacOSX.platform/Developer/SDKs/MacOSX14.0.sdk -stdlib=libc++ -lz -Wl,-x -Wl,-dead_strip
[ 75%]: linking.release doctest_shared_test_2
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ -
o build/macosx/x86_64/release/doctest_shared_test_2 build/.objs/doctest_shared_test_2/macosx/x86
_64/release/src/foo.cpp.o build/.objs/doctest_shared_test_2/macosx/x86_64/release/tests/test_2.c
pp.o -target x86_64-apple-macos14.0 -isysroot /Applications/Xcode.app/Contents/Developer/Platfor
ms/MacOSX.platform/Developer/SDKs/MacOSX14.0.sdk -stdlib=libc++ -lz -Wl,-x -Wl,-dead_strip
running tests ...
[ 25%]: doctest/test_1        .................................... failed 0.008s
[doctest] doctest version is "2.4.11"
[doctest] run with "--help" for options
===============================================================================
tests/test_1.cpp:7:
TEST CASE:  testing the factorial function

tests/test_1.cpp:8: ERROR: CHECK( factorial(1) == 10 ) is NOT correct!
  values: CHECK( 1 == 10 )

===============================================================================
[doctest] test cases: 1 | 0 passed | 1 failed | 0 skipped
[doctest] assertions: 4 | 3 passed | 1 failed |
[doctest] Status: FAILURE!

run failed, exit code: 1
[ 50%]: doctest/test_2        .................................... passed 0.007s
[ 75%]: doctest_shared/test_1 .................................... failed 0.008s
[doctest] doctest version is "2.4.11"
[doctest] run with "--help" for options
===============================================================================
tests/test_1.cpp:7:
TEST CASE:  testing the factorial function

tests/test_1.cpp:8: ERROR: CHECK( factorial(1) == 10 ) is NOT correct!
  values: CHECK( 1 == 10 )

===============================================================================
[doctest] test cases: 1 | 0 passed | 1 failed | 0 skipped
[doctest] assertions: 4 | 3 passed | 1 failed |
[doctest] Status: FAILURE!

run failed, exit code: 1
[100%]: doctest_shared/test_2 .................................... passed 0.009s

50% tests passed, 2 tests failed out of 4, spent 0.047s
paul-reilly commented 8 months ago

I see that we are using it differently.

➜  dtest tree .
.
├── build
│   └── linux
│       └── x86_64
│           └── release
│               ├── dtest_dtest
│               └── libdtest.so
├── compile_commands.json
├── src
│   ├── bar.cpp
│   ├── foo.cpp
│   └── foo.hpp
└── xmake.lua
add_rules("mode.debug", "mode.release")

add_requires("doctest", "fmt")

target("dtest")
    set_kind("shared")
    add_files("src/*.cpp")
    --add_defines("DOCTEST_CONFIG_DISABLE") -- need 'undefine' support
    add_tests("dtest", {
        kind = "binary",
        files = "src/foo.cpp", --<-- should not need this, tests are in all files
        defines = "DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN"
    })
    add_packages("doctest", "fmt")

The tests are in the same files as the rest of the code. We only need:

  • define + undefine

or

  • a separate file added to the test that adds the main entry point + undefine

For this we also do not need the files parameter. I notice that when I leave that out, the test target reverts to being a library.

waruqi commented 8 months ago

I see that we are using it differently.

➜  dtest tree .
.
├── build
│   └── linux
│       └── x86_64
│           └── release
│               ├── dtest_dtest
│               └── libdtest.so
├── compile_commands.json
├── src
│   ├── bar.cpp
│   ├── foo.cpp
│   └── foo.hpp
└── xmake.lua
add_rules("mode.debug", "mode.release")

add_requires("doctest", "fmt")

target("dtest")
    set_kind("shared")
    add_files("src/*.cpp")
    --add_defines("DOCTEST_CONFIG_DISABLE") -- need 'undefine' support
    add_tests("dtest", {
        kind = "binary",
        files = "src/foo.cpp", --<-- should not need this, tests are in all files
        defines = "DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN"
    })
    add_packages("doctest", "fmt")

The tests are in the same files as the rest of the code. We only need:

  • define + undefine

or

  • a separate file added to the test that adds the main entry point + undefine

For this we also do not need the files parameter. I notice that when I leave that out the test target reverts to being a library.

   files = "src/foo.cpp", --<-- should not need this, tests are in all files

I don't recommend this; it would result in recompiling all of the cpp files for each test, which can be very slow if the target has a lot of them.

It's also very time-consuming to implement, so I won't consider it for now.

paul-reilly commented 8 months ago

I understand.

It would be fantastic to just have `undefines` included for now so that `DOCTEST_CONFIG_DISABLE` can be removed (as tested in a previous post: https://github.com/xmake-io/xmake/issues/3381#issuecomment-1793563849). This could be useful for other purposes too.
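For concreteness, the requested behaviour might look like this. This is a sketch only: the `undefines` option inside `add_tests` is the feature being asked for here, not something confirmed to exist in current xmake, and it assumes the `-U` flag is emitted after the target's `-D` so the later flag wins (as verified on gcc/clang above):

```lua
target("dtest")
    set_kind("shared")
    add_files("src/*.cpp")
    add_defines("DOCTEST_CONFIG_DISABLE")     -- tests compiled out of the library
    add_packages("doctest", "fmt")
    add_tests("dtest", {
        kind = "binary",
        undefines = "DOCTEST_CONFIG_DISABLE", -- proposed option: emit -U after -D
        defines = "DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN"})
```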