FLAMEGPU / FLAMEGPU2

FLAME GPU 2 is a GPU accelerated agent based modelling framework for CUDA C++ and Python
https://flamegpu.com
MIT License

Use CTest alongside googletest and python tests #267

Open ptheywood opened 4 years ago

ptheywood commented 4 years ago

As RTC / Python support is going to require its own test suite, being able to run all tests with a single command (or Visual Studio project) is probably worthwhile.

This would involve a few changes, and may have some downsides.

CMake does appear to be GoogleTest-aware, so it may be possible to use dynamic test-case discovery, although this seems like it might impose some restrictions. https://blog.kitware.com/dynamic-google-test-discovery-in-cmake-3-10/ https://cmake.org/cmake/help/git-stage/module/GoogleTest.html

Edit: Using CMake's GoogleTest detection massively increases test runtime due to CUDA context creation (each test is run via --gtest_filter="test.name"). This doesn't seem worth the benefits.

ptheywood commented 4 years ago

Given that we require CMake 3.12, we can use FetchContent to install GoogleTest in place of the macro:

include(FetchContent)

# Declare where to fetch googletest from, pinned to a known release tag
FetchContent_Declare(
  googletest
  GIT_REPOSITORY https://github.com/google/googletest.git
  GIT_TAG        release-1.8.0
)

# Populate googletest and add it to the build, if not already done
FetchContent_GetProperties(googletest)
if(NOT googletest_POPULATED)
  FetchContent_Populate(googletest)
  add_subdirectory(${googletest_SOURCE_DIR} ${googletest_BINARY_DIR})
endif()

# Provides gtest_discover_tests() / gtest_add_tests()
include(GoogleTest)

Although this currently leads to a warning at make time.
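
(As an aside, if the minimum CMake version were ever raised to 3.14+, the populate/add_subdirectory boilerplate above collapses into a single call. A sketch, assuming the newer CMake is acceptable:)

include(FetchContent)
FetchContent_Declare(
  googletest
  GIT_REPOSITORY https://github.com/google/googletest.git
  GIT_TAG        release-1.8.0
)
# CMake 3.14+ only: fetch, populate and add_subdirectory() in one step
FetchContent_MakeAvailable(googletest)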

Then by adding enable_testing() in the root CMakeLists.txt where appropriate, and adding the test executable to CTest via:

# Add as a target to ctest, registering each discovered test case individually
gtest_discover_tests(
    "${PROJECT_NAME}"
    WORKING_DIRECTORY ${PROJECT_DIR}
    PROPERTIES VS_DEBUGGER_WORKING_DIRECTORY "${PROJECT_DIR}"
)

It's possible to run the tests with ctest.

However, CTest runs each test individually (presumably via a --gtest_filter).

This is slow, as it results in CUDA context initialisation per test rather than once per suite. It will also mask bugs that only appear when multiple tests run in one process (i.e. where state doesn't get cleaned up properly).

For example, the tests (without RTC) take 135.42 seconds on one CPU core via CTest, whereas running ./bin/linux-x64/Release/tests takes 722 ms to run those same tests...

Edit: It is, however, possible to run multiple tests concurrently on different CPU cores (via ctest -j4 for 4 concurrent tests), though as the same device is used this might fall over for large tests. In this case it took 37.49 s to run the tests (excluding RTC), i.e. 3.6× faster.
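
If sharing the device does become a problem, CTest's RESOURCE_LOCK test property could serialise the GPU-hungry tests while letting the rest run in parallel. A sketch (the test names here are purely illustrative):

# Tests holding the same lock name are never run concurrently by ctest
set_tests_properties(BigSpatialTest HugeAgentCountTest
    PROPERTIES RESOURCE_LOCK gpu
)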

Robadob commented 4 years ago

The macro is literally from the gtest readme.


ptheywood commented 4 years ago

The macro is literally from the gtest readme.

I'm simply highlighting a more modern, idiomatic CMake way of doing things that I stumbled upon while investigating CTest.

ptheywood commented 4 years ago

It's also possible to just add our whole test executable to CTest via:

# Register the whole test executable as a single ctest test
add_test(
    NAME ${PROJECT_NAME}
    COMMAND "${PROJECT_NAME}"
    WORKING_DIRECTORY ${PROJECT_DIR}
)
# add_test() does not accept PROPERTIES; VS_DEBUGGER_WORKING_DIRECTORY is a
# target property, so it must be set on the target separately
set_target_properties(${PROJECT_NAME} PROPERTIES
    VS_DEBUGGER_WORKING_DIRECTORY "${PROJECT_DIR}"
)

which is executed as a whole unit. However, CTest must be run with verbose flag(s) enabled (i.e. ctest -VV) to see anything other than a pass/fail for the whole executable, and as this involves stdout redirection, colour output is lost. This would support having multiple test suites, however, and the individual test suites can still be executed directly.
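
One possible workaround for the lost colour, assuming Google Test's GTEST_COLOR environment variable behaves as documented, would be to force colour output for the registered test:

# Force googletest to emit ANSI colour codes even though ctest redirects stdout
set_tests_properties(${PROJECT_NAME} PROPERTIES
    ENVIRONMENT "GTEST_COLOR=yes"
)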

The following GitLab issue may be relevant to improving the situation: https://gitlab.kitware.com/cmake/cmake/issues/17301

From the response on that issue:

The point of the GoogleTest module is to invoke each test case within the gtest executable individually so that each test case can appear in the CTest results. As you've already worked out, it does this using --gtest_filter to run each test case as its own separate process. That is going to defeat whatever test-suite-wide fixture you try to set up within the gtest executable, since each test case runs in complete isolation (and possibly in parallel). If you want to keep your test-suite-wide fixtures within the gtest executable, you probably don't want to be using the GoogleTest module. The main drawback to that is that the CTest results will then only show a single pass/fail for the whole gtest test suite in that executable.

It appears that CTest intentionally doesn't support what I want: running all the tests as a single executable, then post-processing the output :cry:

A possible compromise would be to generate a test executable per sub-suite (i.e. DeviceRTCAPITest, Spatial3DMsgTest, etc.). This would lead to ~40 tests in CTest's eyes (and ~40 × 0.2 s CUDA initialisations), as a balance between CTest and Google Test. If any of these fail at ctest time, it would then be possible to re-run the individual test manually, or with -VV, to investigate further. A sketch of this follows.
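
A minimal sketch of that compromise (the suite names and source layout are illustrative assumptions, not the real tree):

# Hypothetical layout: one source file per suite under tests/suites/
foreach(SUITE IN ITEMS DeviceRTCAPITest Spatial3DMsgTest)
    add_executable(${SUITE} tests/suites/${SUITE}.cu)
    target_link_libraries(${SUITE} PRIVATE gtest_main)
    # One ctest entry (and one CUDA context) per suite, not per test case
    add_test(NAME ${SUITE} COMMAND ${SUITE})
endforeach()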

ptheywood commented 4 years ago

Rough plan after discussion with Rob:

For the unit tests, running ctest would then just report whether the whole suite failed; run with -VV, or explicitly run the executable, for finer-grained detail.

Using some form of filtering would then allow just the Python tests / regression tests to be executed, with CTest likely providing a bit more detail here; one possibility is sketched below.
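
One way to get that filtering, assuming labels are attached when the tests are registered (the test name and path are illustrative):

# Label tests by kind so subsets can be selected at ctest time
add_test(NAME python_tests COMMAND pytest ${CMAKE_SOURCE_DIR}/tests/python)
set_tests_properties(python_tests PROPERTIES LABELS "python")

# Then `ctest -L python` runs only the python-labelled tests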

I'm also not sure how CTest plays with Visual Studio (Edit: VS2017+ appears to be CTest-friendly). This approach makes it easy to execute all test suites, still gives the option of running individual test suites, and doesn't murder test runtime.

Worst case, we can always turn it off.

ptheywood commented 4 years ago

CTest on Windows from the command line requires the configuration to be specified, i.e.

ctest . -C Release

From within Visual Studio 2019

Test > Run All Tests

Successfully built and ran the unit test suite. This is pre-SWIG, so I'm not investigating Python testing within VS2019 just yet.

Robadob commented 4 years ago

I think Visual Studio has similar built-in support for Google Test too ("Run Unit Tests", IIRC), though I've not used it much.

ptheywood commented 3 years ago

Adding CTest would also allow us to test that our static_asserts fire correctly in the cases we intend to block, such as the prevention of a non-real templated type T for the following signature; a sketch of one way to do this follows it:

template<typename T>
void RunPlanVector::setPropertyNormalRandom(const std::string &name, const T &mean, const T &stddev)
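
A sketch of such a test, assuming a small source file (static_assert_fail.cu, a hypothetical name) that instantiates setPropertyNormalRandom with a non-real T and should therefore fail to compile; the target, test and library names here are illustrative:

# Build target for the should-not-compile source; excluded from the normal build
add_executable(static_assert_fail EXCLUDE_FROM_ALL static_assert_fail.cu)
target_link_libraries(static_assert_fail PRIVATE flamegpu)

# The test attempts to build that target, and is expected to fail
add_test(NAME static_assert.setPropertyNormalRandom
    COMMAND ${CMAKE_COMMAND} --build . --target static_assert_fail --config $<CONFIG>
    WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
)
set_tests_properties(static_assert.setPropertyNormalRandom PROPERTIES WILL_FAIL TRUE)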