A bugfix to save the production results correctly (previously, the comprehension results were also being saved as the production results).
Adding an option to compare the current OLS implementation to himalaya with a very small alpha. The two methods give approximately the same results, and himalaya runs faster (likely thanks to GPU acceleration), though using a near-zero alpha to approximate OLS feels hacky.
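As a sanity check on the tiny-alpha trick, here is a minimal numpy sketch (not the himalaya code path; the data and alpha value are made up for illustration) showing that closed-form ridge with a near-zero alpha recovers the OLS solution:

```python
import numpy as np

# Toy regression problem (illustrative only)
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
w_true = rng.standard_normal(5)
y = X @ w_true + 0.1 * rng.standard_normal(200)

# OLS via least squares
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Ridge with a very small alpha (closed form); with alpha ~ 0 the
# penalty term is negligible and the solution matches OLS
alpha = 1e-10
w_ridge = np.linalg.solve(X.T @ X + alpha * np.eye(5), X.T @ y)

print(np.allclose(w_ols, w_ridge, atol=1e-6))
```

In himalaya the same idea would just mean passing a tiny alpha to its ridge solver; the closed form above is only to show why the two answers agree.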
Using the himalaya `torch_cuda` backend if and only if a GPU is available.
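A minimal sketch of that backend selection, assuming himalaya's `set_backend` and an optional torch install (falling back to the default `numpy` backend when either import fails is an assumption for illustration, not necessarily what this PR does):

```python
# Sketch: pick the himalaya backend based on GPU availability.
backend_name = "numpy"
try:
    import torch
    from himalaya.backend import set_backend

    if torch.cuda.is_available():
        backend_name = "torch_cuda"
    # on_error="warn" keeps the current backend if the requested one fails
    set_backend(backend_name, on_error="warn")
except ImportError:
    pass  # himalaya/torch not installed; stick with plain numpy

print(backend_name)
```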
Sorry for stuffing a bunch of things into this PR; I'll keep future PRs more focused.