In GitLab by @markcheung on Sep 1, 2020, 11:07
Usually level 1.5+ data should be stored in floating point. Casting to `int16` after interpolation would lead to a loss of precision. `float64` may be overkill; `float32` ought to suffice.
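To make the precision point concrete, a minimal sketch (assuming numpy; the values are illustrative): interpolation produces fractional counts that a cast back to `int16` silently truncates.

```python
import numpy as np

# Level 1 counts are stored as int16; interpolation promotes them to float64.
counts = np.array([10, 11], dtype=np.int16)
interpolated = np.interp([0.0, 0.5, 1.0], [0, 1], counts)
print(interpolated)                   # [10.  10.5 11. ]
print(interpolated.astype(np.int16))  # [10 10 11] -- the fractional DN is lost
```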
In GitLab by @markcheung on Sep 1, 2020, 11:08
The user who calls `aia_register` (and, optionally, `deconv`) can still store the processed images as int to save space, but int shouldn't be the output of the routines.
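As a sketch of that user-side option (assuming numpy and sunpy; `m_registered` is a stand-in for the Map returned by the register step), rounding before the cast avoids the truncation shown above:

```python
import numpy as np
import sunpy.map

# Round the float output of the register step, then store as int16 to save space.
# m_registered is assumed to be the Map returned by the register routine.
m_int16 = sunpy.map.Map(np.rint(m_registered.data).astype(np.int16), m_registered.meta)
m_int16.save('registered-int16.fits')
```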
In GitLab by @cbard on Sep 1, 2020, 12:16
Gotcha. This is relevant to GPU computations in general, since current-gen GPUs are about 1.5-2x faster in single precision (32-bit) than in double.
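A rough way to measure that gap on a given card (a sketch assuming CuPy and a CUDA-capable GPU; neither is part of the pipeline, and the matrix multiply is just a stand-in for a heavy array operation):

```python
import time
import cupy as cp

def bench(dtype, n=4096, reps=10):
    # Square matrix multiply at the requested precision.
    a = cp.random.random_sample((n, n), dtype=dtype)
    b = cp.random.random_sample((n, n), dtype=dtype)
    cp.cuda.Stream.null.synchronize()  # finish setup before timing
    t0 = time.perf_counter()
    for _ in range(reps):
        a @ b
    cp.cuda.Stream.null.synchronize()  # wait for the GPU to drain
    return (time.perf_counter() - t0) / reps

print('float32:', bench(cp.float32))
print('float64:', bench(cp.float64))
```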
In GitLab by @wtbarnes on Sep 3, 2020, 07:42
In GitLab by @wtbarnes on Sep 6, 2020, 14:04
I'm going to go ahead and close this for now. As far as saving to disk goes, if the user wants to, it is easy enough to cast the data array to 32- or 16-bit before saving, e.g.

```python
import numpy as np
import sunpy.map

# m is an existing sunpy Map; cast its data to single precision before saving.
sunpy.map.Map(m.data.astype(np.float32), m.meta).save('my-map-in-single-precision.fits')
```

As far as calculations go, I'd prefer not to do any explicit type conversions. The 32-versus-64-bit discussion in the context of the affine transform is discussed specifically in the sunpy issue I've linked above.
In GitLab by @cbard on Sep 1, 2020, 08:22
A question that came up during the Helio Hackweek AIA project was using 64-bit vs. 16-bit calculations. For a level 1 -> level 1.6 (level 2?) psf+prep pipeline, the `psf` and `register` routines automatically cast the input map data (usually in `int16`) to `float64`. My impression is that the final map data does not need to be saved in 64-bit; it should be cast back to the original header `BITPIX` (usually 16) before saving.

Since the level 1 data is `int16` and the final, processed data is also `int16`, should the intermediate calculations be in `int16`? Doing the calculations in `int16`, or even `float32`, rather than `float64` would be a source of speedup.
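To gauge the proposed speedup on the CPU side, a rough sketch (assuming scipy; the rotation is just a stand-in for the affine transform that `register` applies):

```python
import time
import numpy as np
from scipy import ndimage

# Time an affine-like operation (rotation, as in register) at both precisions.
data = np.random.random((4096, 4096))
for dtype in (np.float32, np.float64):
    arr = data.astype(dtype)
    t0 = time.perf_counter()
    ndimage.rotate(arr, 0.1, reshape=False, order=3)
    print(dtype.__name__, time.perf_counter() - t0, 's')
```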