rckirby / torchfire

Embeds Firedrake functionality into PyTorch.
MIT License

Hardcoded precision #4

Closed: jonwittmer closed this issue 2 years ago

jonwittmer commented 2 years ago

Are we sure that we want to hard-code double precision here? Most ML uses single precision at most, and sometimes half precision, so it might make sense to define the API such that the user can specify the desired precision, especially if we want to support GPU solvers in the future. Not all GPUs support double-precision arithmetic in hardware.

https://github.com/rckirby/torchfire/blob/25c94565f8f9edb02e753d4e26e72052fcfef2f5/src/torchfire/torchfire.py#L15
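A minimal sketch of what a precision-configurable conversion could look like. The function name `to_torch` and the `dtype` parameter are hypothetical, not torchfire's actual API; the idea is simply to thread a user-chosen dtype through to the tensor instead of baking in `torch.float64`.

```python
import numpy as np
import torch


def to_torch(array, dtype=torch.float64):
    """Convert array data (e.g. from a Firedrake function) to a torch tensor.

    `dtype` is a hypothetical parameter illustrating the proposal: the caller
    picks the precision, with double as the default to match Firedrake/PETSc.
    """
    return torch.from_numpy(np.asarray(array)).to(dtype)


# Single precision for GPU-friendly ML workflows:
x = to_torch(np.ones(3), dtype=torch.float32)

# Default keeps double precision for compatibility with the PDE solver:
y = to_torch(np.ones(3))
```

Defaulting to double keeps existing behavior, while `dtype=torch.float32` (or `torch.float16`) lets users trade accuracy for throughput on GPUs without double-precision hardware.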

jonwittmer commented 2 years ago

I see that this issue was already mentioned in another thread.

rckirby commented 2 years ago

It was originally a limitation in fecr; it has been fixed there and is a to-do for us.