Are we sure we want to hard-code double precision here? Most ML uses single precision at most, and sometimes half precision, so it might make sense to define the API so the user can specify the desired precision, especially if we want to support GPU solvers in the future. Not all GPUs support double-precision arithmetic in hardware.
https://github.com/rckirby/torchfire/blob/25c94565f8f9edb02e753d4e26e72052fcfef2f5/src/torchfire/torchfire.py#L15
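A minimal sketch of what a configurable precision could look like, assuming a module-level default plus an optional `dtype` keyword (the names `DEFAULT_DTYPE` and `to_torch` are hypothetical, not the current torchfire API):

```python
import torch

# Hypothetical sketch: instead of hard-coding torch.float64, let callers choose
# the working precision, defaulting to double to preserve current behaviour.
DEFAULT_DTYPE = torch.float64

def to_torch(array, dtype=None, device=None):
    """Convert an array to a torch tensor in the requested precision.

    dtype defaults to the module-wide DEFAULT_DTYPE, so existing callers are
    unaffected; passing torch.float32 (or torch.float16 on GPUs without fast
    float64) opts into lower precision.
    """
    dtype = DEFAULT_DTYPE if dtype is None else dtype
    return torch.tensor(array, dtype=dtype, device=device)
```

Defaulting to `torch.float64` keeps current behaviour and matches Firedrake's double-precision solves, while leaving room to relax precision on the PyTorch side later.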