Open Crzyrndm opened 6 years ago
The benchmarks could use further work to make the results more meaningful: clarify the purpose of each benchmark, define good/bad thresholds relative to a baseline, review the measurements to ensure they are relevant, etc.
I would also recommend adding a micro-benchmark library (google-benchmark, Celero, etc.) that can be used in PRs to justify design choices (e.g. map vs. unordered_map vs. vector) when necessary, although these benchmarks could be disabled once merged.
Samples/examples are where many users will start learning how to use this library. It is extremely frustrating for users when these contain errors of any sort, and the current CI will only catch build errors. Benchmarks also need to be run on a consistent set of environments so that major performance improvements/regressions can be properly validated and isolated.
The current cmake setup creates an executable for every sample/benchmark, but CI contains no scripting to find and run those executables, so the current situation is that they are checked only for compilation errors, not execution errors. Additionally, if all benchmarks were designed around a relative baseline for comparison, there is no reason they could not also be run as part of the CI builds.
Samples are generally structured as a single file with its own main, so the solution there should probably be manual additions to the CI scripts to execute each built executable. Benchmarks, on the other hand, should in my experience be treated like tests (one main, multiple benchmarks, registration, etc.).