LLNL / MuyGPyS

A fast, pure python implementation of the MuyGPs Gaussian process realization and training algorithm.

Need torch MuyGPyS layer to support 64 bit optimization #96

Closed bwpriest closed 1 year ago

bwpriest commented 1 year ago

We currently need to set `export MUYGPYS_FTYPE=32` for `MuyGPyS.torch.muygps_layer` to perform correctly during optimization. This is because `.float()` is hardcoded therein. We need to modify this behavior so that it depends on `mm.ftype`.
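A minimal sketch of the intended behavior, using numpy in place of torch for portability: the float type is selected from the `MUYGPYS_FTYPE` environment flag rather than being hardcoded to 32-bit. The variable names here are illustrative, not MuyGPyS internals.

```python
import os
import numpy as np

# Hypothetical sketch: derive the float type from the MUYGPYS_FTYPE flag
# (defaulting to 64-bit, as MuyGPyS does) instead of an unconditional
# 32-bit cast like torch's .float().
ftype = np.float32 if os.environ.get("MUYGPYS_FTYPE", "64") == "32" else np.float64

# All tensors in the layer would then be built with this dtype.
x = np.array([1.0, 2.0, 3.0], dtype=ftype)
```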

alecmdunton commented 1 year ago

This was an easy fix - I just removed the `.float()` calls from the code, and with the new refactoring there shouldn't be any hardcoding of this anywhere in the library.
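A hedged before/after sketch of what removing the hardcoded cast accomplishes, again using numpy for portability; the function names are hypothetical, not MuyGPyS code:

```python
import numpy as np

def forward_hardcoded(x):
    # Old behavior: unconditional cast to 32-bit (torch's .float()
    # equivalent), silently losing 64-bit precision during optimization.
    return x.astype(np.float32) * 2.0

def forward_fixed(x):
    # Fixed behavior: no cast, so the caller's dtype (32- or 64-bit,
    # as chosen by the library's ftype setting) is preserved.
    return x * 2.0

x64 = np.ones(3, dtype=np.float64)
```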

bwpriest commented 1 year ago

Ah it's always nice when the fix is easy.

alecmdunton commented 1 year ago

Should I remove the flag in `MuyGPyS.examples.muygps_torch` that forces an ftype of 32?

bwpriest commented 1 year ago

I guess we can? On the other hand, it seems like we should stick with torch's default of 32-bit, whereas the default for MuyGPyS is 64-bit. I'm torn.

alecmdunton commented 1 year ago

Let's leave the flag for now.

alecmdunton commented 1 year ago

This issue was addressed in PR #146