MPoL-dev / MPoL

A flexible Python platform for Regularized Maximum Likelihood imaging
https://mpol-dev.github.io/MPoL/
MIT License

Allow single/float32 precision tensor types #254

Open iancze opened 6 months ago

iancze commented 6 months ago

**Is your feature request related to a problem or opportunity? Please describe.**

In its current form (v0.2), MPoL uses float64 (or complex128) tensor types everywhere. This is because, very early in MPoL development, I decided that core modules like BaseCube would use tensors of this type, and all of the downstream code then built on tensors of this type. If I recall correctly, I chose float64 because I had some divergent optimisation loops with float32, and I suspected loss of precision was at fault given the large dynamic range of astronomical images. With a few more years of understanding, it seems more likely that the optimisation simply went awry because of a bad learning rate and a finicky network architecture (e.g., no softplus or ln pixel mapping), but I never got to the bottom of the issue.
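To illustrate the dynamic-range concern (this is a generic numerical sketch, not MPoL code; the flux values are hypothetical): float32 carries roughly 7 decimal digits of precision, so a faint feature sitting ~8 orders of magnitude below a bright pixel can vanish entirely in arithmetic, while float64 (~16 digits) retains it.

```python
import numpy as np

# Hypothetical fluxes spanning 8 orders of magnitude.
bright = 1.0e8
faint = 1.0

# In float32, the spacing between representable values near 1e8 is 8,
# so adding 1.0 rounds back to the original value.
lost = np.float32(bright) + np.float32(faint) - np.float32(bright)

# In float64, the faint contribution survives exactly.
kept = np.float64(bright) + np.float64(faint) - np.float64(bright)

print(lost)  # 0.0 -- faint signal absorbed by rounding
print(kept)  # 1.0 -- faint signal retained
```

Whether this matters in practice depends on how intermediate quantities (losses, gradients, Fourier transforms) accumulate error, which is why the divergence could equally have been an optimisation pathology rather than a precision one.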

**Describe the solution you'd like**