I referenced the spec documentation directly in the code where needed. Specifically, this fully conforms to the spec for n-dimensional images and for the allowed sample formats. Note that `UInt64`, `Complex32` and `Complex64` won't work yet.
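As a rough sketch of what the sample-format handling could look like (the type and variant names below follow the XISF spec's sample formats, but they are illustrative, not necessarily this crate's actual identifiers):

```rust
// Hypothetical sketch of an XISF sample-format enum; the spec also allows
// UInt64, Complex32 and Complex64, which are not handled yet.
#[derive(Debug, Clone, Copy, PartialEq)]
enum SampleFormat {
    UInt8,
    UInt16,
    UInt32,
    Float32,
    Float64,
}

impl SampleFormat {
    /// Bytes per sample; knowing this up front is what lets the
    /// conversion pre-allocate its buffers.
    fn bytes_per_sample(self) -> usize {
        match self {
            SampleFormat::UInt8 => 1,
            SampleFormat::UInt16 => 2,
            SampleFormat::UInt32 | SampleFormat::Float32 => 4,
            SampleFormat::Float64 => 8,
        }
    }
}

fn main() {
    assert_eq!(SampleFormat::UInt16.bytes_per_sample(), 2);
    assert_eq!(SampleFormat::Float64.bytes_per_sample(), 8);
}
```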
I also added an extra optimization in the `u8_to_t!()` conversion macro: it now pre-allocates the whole buffer up front instead of growing it on each new value. This is possible because the output size is known before execution.
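A minimal sketch of the pre-allocation idea, assuming a macro shaped roughly like the crate's `u8_to_t!()` (the real macro may differ in names and details):

```rust
use std::convert::TryInto;

// Hypothetical sketch of a pre-allocating byte-to-sample conversion macro.
macro_rules! u8_to_t {
    ($bytes:expr, $t:ty) => {{
        let size = std::mem::size_of::<$t>();
        // Pre-allocate the full output buffer: the number of samples is
        // known up front, so the Vec never reallocates while filling.
        let mut out: Vec<$t> = Vec::with_capacity($bytes.len() / size);
        for chunk in $bytes.chunks_exact(size) {
            out.push(<$t>::from_le_bytes(chunk.try_into().unwrap()));
        }
        out
    }};
}

fn main() {
    // Eight little-endian bytes decode into four u16 samples.
    let raw: [u8; 8] = [1, 0, 2, 0, 3, 0, 4, 0];
    let values = u8_to_t!(&raw, u16);
    assert_eq!(values, vec![1u16, 2, 3, 4]);
}
```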
When generating the FITS data, we have some extra optimizations:
- We no longer re-compute the `bitpix` value on each iteration (this was probably optimized away in release mode anyway, but the debug build should now be faster too).
- We now use iterators to avoid bounds checking on each iteration step.
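The two points above can be sketched as follows; the function names are illustrative, not the actual crate code:

```rust
// The per-image bitpix value is an invariant, so it is computed once,
// outside the per-sample loop. FITS BITPIX is positive for integer
// sample types and negative for floating-point ones.
fn bitpix_for(bytes_per_sample: usize, is_float: bool) -> i32 {
    let bits = (bytes_per_sample * 8) as i32;
    if is_float { -bits } else { bits }
}

// Iterator-based traversal: unlike `samples[i]` in a counted loop, the
// iterator carries no per-element bounds check, even in debug builds.
fn sum_samples(samples: &[u16]) -> u64 {
    samples.iter().map(|&s| u64::from(s)).sum()
}

fn main() {
    assert_eq!(bitpix_for(2, false), 16);  // UInt16 -> BITPIX 16
    assert_eq!(bitpix_for(4, true), -32);  // Float32 -> BITPIX -32
    assert_eq!(sum_samples(&[1, 2, 3]), 6);
}
```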
The data generation has also been optimized by using `chunks_exact()`, which yields chunks of equal size. However, there is currently no check that the data length is an exact multiple of the chunk size; if it isn't, the input image is not spec-conformant, and the behaviour is for now undefined.
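A small sketch of the pattern (illustrative function, not the actual crate code), showing where the missing conformance check would go:

```rust
// chunks_exact(2) yields only complete 2-byte chunks; any trailing bytes
// end up in .remainder() and are silently dropped here. That silent drop
// is the currently-undefined behaviour for non-conformant input.
fn to_be_words(data: &[u8]) -> Vec<u16> {
    let chunks = data.chunks_exact(2);
    // A future conformance check could reject trailing bytes, e.g.:
    // assert!(chunks.remainder().is_empty());
    chunks.map(|c| u16::from_be_bytes([c[0], c[1]])).collect()
}

fn main() {
    // Five input bytes: the dangling last byte is silently ignored.
    assert_eq!(to_be_words(&[0, 1, 0, 2, 9]), vec![1u16, 2]);
}
```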
There is still a ton of `unwrap()` in the code; this can be addressed in the future with error propagation.
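For illustration, a minimal before/after of that future change, using a hypothetical parsing helper rather than any actual function from this crate:

```rust
use std::num::ParseIntError;

// Today's style: panics on malformed input.
fn parse_dim_panicking(s: &str) -> u32 {
    s.parse::<u32>().unwrap()
}

// With error propagation: the ? operator returns the Err to the caller
// instead of panicking, so the caller decides how to handle it.
fn parse_dim(s: &str) -> Result<u32, ParseIntError> {
    let dim = s.parse::<u32>()?;
    Ok(dim)
}

fn main() {
    assert_eq!(parse_dim_panicking("42"), 42);
    assert_eq!(parse_dim("42"), Ok(42));
    assert!(parse_dim("not a number").is_err());
}
```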
This is a follow-up from #18 that goes even further with the optimizations. It makes the `XISFData` type much smaller, and it won't grow when adding more types. This also conforms better to the XISF specification.