broonie opened this issue 4 years ago
@broonie At the moment, you can define whitelists, blacklists, or regular expressions to only build a subset of the configs. How would you automatically create a filter based on the "useful for runtime testing" criteria?
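For illustration, a filter section in that style might look like the sketch below. This is a simplified, illustrative fragment loosely following the shape of KernelCI's build configuration YAML; the exact keys and nesting here are assumptions, not the real schema.

```yaml
# Illustrative only: simplified filter styles (whitelist, blacklist, regex)
# for restricting which defconfigs get built for an architecture.
architectures:
  arm64:
    filters:
      - whitelist:
          defconfig: ['defconfig']
      - blacklist:
          kernel: ['v4.*']
      - regex:
          defconfig: 'defconfig\+.*'
```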
One way I can see to do that is to list all the defconfigs that have resulted in tests being run, by querying the database. Then we can derive a list of defconfigs for each architecture. But that's more of a semi-automatic way of doing it, because there would still need to be a defconfig filter (probably a whitelist) in the YAML.
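The derivation step could be as simple as grouping test results by architecture. A minimal sketch, assuming the database query returns records with `arch` and `defconfig` fields (the record shape is hypothetical, not the real KernelCI schema):

```python
# Hypothetical sketch: derive a per-architecture defconfig whitelist from
# test results pulled from the database.  The record shape is an assumption.
from collections import defaultdict

def derive_whitelist(test_results):
    """Keep only the defconfigs that actually produced test runs, per arch."""
    whitelist = defaultdict(set)
    for result in test_results:
        whitelist[result["arch"]].add(result["defconfig"])
    return {arch: sorted(cfgs) for arch, cfgs in whitelist.items()}

# Example: only defconfigs that appear in the results survive the filter.
results = [
    {"arch": "arm64", "defconfig": "defconfig"},
    {"arch": "arm", "defconfig": "multi_v7_defconfig"},
    {"arch": "arm64", "defconfig": "defconfig"},
]
print(derive_whitelist(results))
# {'arm64': ['defconfig'], 'arm': ['multi_v7_defconfig']}
```

The output of something like this would still need to be turned into a whitelist entry in the YAML, which is the semi-automatic part.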
I was thinking of something like what you suggest, that checks if anything has been run recently - not sure if YAML supports including files or anything, but we could do something like generating a fragment periodically based on the previous day's runs.
What we could do is have something in the YAML config that asks the kci_build tool to query results from the previous run on the same branch and dynamically create the list of build variants. Something like an `auto` filter, with maybe some parameters to pass to the function, say to specify which test results to look for, e.g. all the baseline runs.
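A hypothetical sketch of what such a filter entry might look like - none of these keys exist today, they're just to make the idea concrete (the `source_branch` parameter also covers the point below about querying a branch other than the one being built):

```yaml
# Hypothetical "auto" filter: dynamically build the variant list from
# previous results.  All keys here are invented for illustration.
filters:
  - auto:
      source_branch: mainline    # branch whose results to query
      test_plan: baseline        # only consider baseline runs
      max_age_days: 1            # ignore results older than this
```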
Maybe this could also be reused to tune other things arbitrarily, such as building some kernels or running tests only if tests have passed in other trees. As a random example, if the media tree failed v4l2-compliance on vivid then we could probably skip building linux-next with virtualvideo enabled, as we know it will also fail. It doesn't sound too convincing, but it gives an idea of what could be done. We may also skip some tests if baseline flagged an issue that will definitely compromise the results: say, if a drm driver failed to probe then maybe some i-g-t tests can be skipped - again, only if this helps by removing noise rather than suppressing useful information.
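The scheduling side of that idea boils down to a dependency check. A minimal sketch, assuming jobs declare which earlier results they depend on (the function name, job shape, and `depends_on` field are all invented for illustration, not an existing kci_build API):

```python
# Hypothetical sketch of dependency-based scheduling: skip a job when a
# result it depends on has already failed.  All names are illustrative.
def should_run(job, prior_results):
    """Return False if any declared dependency already failed."""
    for dep in job.get("depends_on", []):
        if prior_results.get(dep) == "fail":
            return False
    return True

prior = {"v4l2-compliance-vivid": "fail"}
job = {"name": "v4l2-compliance-virtual-video",
       "depends_on": ["v4l2-compliance-vivid"]}
print(should_run(job, prior))  # False: the vivid run already failed
```

The same check could cover the drm/i-g-t case: a failed probe result in `prior_results` would suppress the dependent i-g-t jobs.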
If it's a query on the same branch, it wouldn't notice any new configs that get boards added - I'd guess making the branch configurable would cover that though?
Your idea of using a tool like this as part of how we schedule does sound like a good one. We've talked about doing something similar before: having some checks in what's now called the baseline tests that do some smoke testing to see which test suites will usefully run on a given board, so we don't add pointless load on the boards or noise in the output.
Good point - we could keep full builds for the main branches, such as mainline and linux-next, and use the test data they produce to determine what to build in other branches.
When specifying which configurations are built for a given tree, it would be really good if there were a shorthand for saying that a tree should be built with the configurations which are useful for runtime testing. For a lot of trees the boot/test coverage matters more than the build coverage, but right now you have to explicitly enumerate the configs on a per-tree basis for this scenario. This would cut down on the number of builds we need to do.