It would be interesting if it was possible to run cross-validation in parallel. This was also requested by @zhangJianfeng in https://github.com/paris-saclay-cds/ramp-workflow/issues/250#issuecomment-723917761

There are two use cases here,
 - local training, e.g. for scikit-learn models this would most often be faster
 - submissions on the server, where currently resources are not optimally used. For instance, to avoid CPU oversubscription we reserve some number of CPUs for each worker (via CPU affinity). For submissions that don't use multiprocessing or threading, this results in unused resources. Even for submissions that have some level of parallelism via BLAS for parts of the code, running cross-validation in parallel would likely be an improvement.

There are two potential issues,
In any case, having this as a CLI option (disabled by default) for `ramp-test` could be a start.
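To illustrate the idea, per-fold parallelism can be sketched with joblib, since each fold fits an independent model. This is only a minimal standalone sketch with a placeholder scikit-learn estimator and dataset, not ramp-workflow's actual training loop or API:

```python
from joblib import Parallel, delayed
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

# Placeholder data and estimator, standing in for a RAMP submission.
X, y = make_classification(n_samples=200, random_state=0)
estimator = LogisticRegression(max_iter=1000)
cv = KFold(n_splits=5)


def fit_and_score(estimator, X, y, train_idx, test_idx):
    # Each fold trains on an independent clone, so folds can run in
    # parallel without sharing mutable state.
    est = clone(estimator).fit(X[train_idx], y[train_idx])
    return est.score(X[test_idx], y[test_idx])


# n_jobs would map to the CLI option; n_jobs=1 keeps today's behavior.
scores = Parallel(n_jobs=2)(
    delayed(fit_and_score)(estimator, X, y, train_idx, test_idx)
    for train_idx, test_idx in cv.split(X, y)
)
print(scores)
```

One caveat this sketch glosses over: if the submission itself uses BLAS-level threading, running folds in parallel on top of it could reintroduce the CPU oversubscription mentioned above, so the per-fold thread count would likely need to be capped.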