Open cdebom opened 4 years ago
I have found a similar issue: when varying parameters that describe the training smoothly (see figure), the FoM values jump around a fair bit...
Are you using the jax-cosmo option to calculate the FoM? It should have better numerical stability.
Er, probably not, sorry; does tc.compute_scores just need jax-cosmo=True or something?
Yes, essentially :-) @cdebom, were you using the jax-cosmo version of the metrics? They should be more stable.
Yes, it seems that this was the issue. We are using jax-cosmo now. Thanks
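For anyone landing here later, a toy sketch of why the choice of numerics matters. This is not the tomo_challenge or jax-cosmo API; it is a minimal numpy illustration, with invented matrix values, of how a FoM computed as sqrt(det) of a nearly singular Fisher block can collapse in single precision while double precision keeps the answer.

```python
import numpy as np

# Hypothetical DETF-style figure of merit: sqrt of the determinant of a
# 2x2 Fisher block.  The matrix below is made up for illustration only.
def fom(fisher):
    return float(np.sqrt(np.linalg.det(fisher)))

# A strongly correlated Fisher block: the off-diagonal is only 0.1 below
# the diagonal, so the determinant is tiny compared to the entries.
fisher64 = np.array([[1e7, 1e7 - 0.1],
                     [1e7 - 0.1, 1e7]])

print(fom(fisher64))                     # float64 keeps the small determinant (~1.4e3)
print(fom(fisher64.astype(np.float32)))  # float32 rounds 1e7 - 0.1 up to 1e7, det -> 0
```

The float32 cast cannot even represent the off-diagonal entry distinctly (spacing between float32 values near 1e7 is 1.0), so the matrix becomes exactly singular and the FoM collapses to zero. This is the kind of instability that computing the metrics in double precision (as the jax-cosmo path does) avoids.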
Dear @joezuntz and @EiffL. We found that the resulting FOM_3x2 can be very different across multiple trainings.
For instance, if we use the RF example with 5% for training (100 runs, 5 bins), we have found the following in the validation.
Thus, one can find FOM_3x2 values of 10^4 as well as 6*10^4 using the same algorithm, while the SNR_3x2 variations are lower than 0.6:
If, instead of using RF, we assign the redshifts to the correct bin for the validation sample, we find (with 5 bins) FOM_3x2 lower than 3x10^4, depending upon some selection. Thus we are getting higher values with RF than using the truth table.
Does this make sense to you?
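To make the comparison concrete, here is a hedged sketch of the "truth table" binning described above: assigning galaxies to equal-occupancy tomographic bins directly from their true redshifts. The n(z) shape, sample size, and variable names are all illustrative, not the challenge's actual data.

```python
import numpy as np

# Toy true-redshift sample (a gamma distribution standing in for n(z)).
rng = np.random.default_rng(42)
z_true = rng.gamma(shape=2.0, scale=0.3, size=100_000)

# "Truth table" binning: 5 equal-number tomographic bins from true z.
n_bins = 5
edges = np.quantile(z_true, np.linspace(0, 1, n_bins + 1))
tomo_bin = np.clip(np.searchsorted(edges, z_true, side="right") - 1,
                   0, n_bins - 1)

counts = np.bincount(tomo_bin, minlength=n_bins)
print(counts)  # roughly 20000 galaxies per bin
```

A classifier like the RF example produces a different (noisy) `tomo_bin` assignment, so any FoM computed from it can differ run to run; the surprising part reported above is that the classifier's FoM comes out *higher* than this perfect-assignment baseline.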