Closed ErikOsinga closed 1 month ago
Actually, after trying it, it seems that initialising as NaN values does not work nicely with the function `renormalize_StokesI_model` (I think only in the case of `fit == "log"`, but I'm not sure), because any renormalisation will then propagate the NaN values (instead of ignoring the empty planes by virtue of multiplying by 0.0), causing all rescaled coefficient maps to become NaN.
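To illustrate why NaN-initialised planes break a multiplicative rescaling, here is a minimal sketch (made-up shapes and scale factor, not the actual `renormalize_StokesI_model` code):

```python
import numpy as np

# Hypothetical stack of 3 coefficient planes on a 2x2 map:
# orders 0 and 1 were fitted, order 2 was never filled in.
coeffs_zero = np.zeros((3, 2, 2))
coeffs_zero[:2] = 1.0
coeffs_nan = np.full((3, 2, 2), np.nan)
coeffs_nan[:2] = 1.0

scale = 2.0  # stand-in for a renormalisation factor

# Zero-initialised unused planes stay zero after rescaling...
print(((coeffs_zero * scale)[2] == 0.0).all())  # True
# ...but NaN-initialised planes stay NaN, since NaN * x is NaN.
print(np.isnan((coeffs_nan * scale)[2]).all())  # True
```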
I guess the only change needed is the calculation of `max_order`, checking that a plane is not all zeros or NaNs:

```python
max_order = np.sum(np.any((new_coeffs != 0.0) & (~np.isnan(new_coeffs)), axis=(1, 2))) - 1
```
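With zero-initialised planes plus stray NaN pixels, the combined check counts only the planes that hold real coefficients. A quick sketch (array shapes are illustrative):

```python
import numpy as np

new_coeffs = np.zeros((6, 4, 4))
new_coeffs[:3] = 1.0           # only orders 0..2 were actually fitted
new_coeffs[:, 0, 0] = np.nan   # a NaN pixel, e.g. from a masked image edge

# A plane counts only if some pixel is both non-zero and finite.
max_order = np.sum(np.any((new_coeffs != 0.0) & (~np.isnan(new_coeffs)),
                          axis=(1, 2))) - 1
print(max_order)  # 2
```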
In `rescale_I_model_3D.py` the coefficient maps are initialised as a zero array, which is filled up to the order that was fitted (see `read_data`):

```python
coeffs = np.zeros((6, *old_reffreq_map.shape))
```

This produces some empty (all-0.0) planes and some planes that hold actual coefficients.
Presumably, the user then calls `rescale_I_model_3D` to rescale the coefficients. However, if the input Stokes I cube contains NaN pixels (e.g. if part of the image is missing because it lies at the edge of the POSSUM boundary), the new coefficient maps will contain both zeros and NaNs. Then later, in `write_new_parameters`, the max order is determined as follows:

```python
max_order = np.sum(np.any(new_coeffs != 0.0, axis=(1, 2))) - 1
```
Since `NaN != 0.0` evaluates to `True`, this always returns the highest possible order, 5 (+1 = 6 planes).
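The failure mode can be reproduced in a few lines (a sketch with made-up shapes):

```python
import numpy as np

new_coeffs = np.zeros((6, 4, 4))
new_coeffs[:3] = 1.0           # only orders 0..2 were fitted
new_coeffs[:, 0, 0] = np.nan   # one NaN pixel from the masked image edge

# NaN != 0.0 is True, so every plane looks non-empty:
max_order = np.sum(np.any(new_coeffs != 0.0, axis=(1, 2))) - 1
print(max_order)  # 5, instead of the expected 2
```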
As a result, many useless files ending in `*_newcoeff*.fits` are produced.
I think initialising the maps as NaNs and then checking whether any pixels are finite, instead of whether any pixels are non-zero, is probably enough to fix it. (But the code is a bit hard to read.)