Metric logging has been behaving oddly. To fix this, we now use our own subclass of `torchmetrics.MetricCollection` (which extends the parent class by normalising the images with the correct normaliser before the metrics see them) and call `.update()` as well as `.reset()` on the corresponding `torchmetrics.Metric` instances. The latter can be seen in `model.py` in the functions `_on_step()` and `_on_epoch_end()`.
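A minimal sketch of the pattern described above: a `MetricCollection` subclass whose `update()` normalises its inputs before delegating to the parent. A stand-in base class and a toy metric are used here so the sketch runs without torchmetrics installed; all class and function names are illustrative, not the actual implementation in `model.py`.

```python
class MetricCollection:
    """Stand-in for torchmetrics.MetricCollection (illustrative only)."""
    def __init__(self, metrics):
        self.metrics = metrics

    def update(self, preds, target):
        # Forward the update to every contained metric.
        for metric in self.metrics:
            metric.update(preds, target)

    def reset(self):
        for metric in self.metrics:
            metric.reset()


class SumMetric:
    """Toy metric: accumulates the sum of all prediction values."""
    def __init__(self):
        self.total = 0.0

    def update(self, preds, target):
        self.total += sum(preds)

    def reset(self):
        self.total = 0.0


class NormalisedMetricCollection(MetricCollection):
    """Applies the given normaliser to both inputs before every update."""
    def __init__(self, metrics, normaliser):
        super().__init__(metrics)
        self.normaliser = normaliser

    def update(self, preds, target):
        super().update(self.normaliser(preds), self.normaliser(target))


# Usage: divide every value by 10 before the metrics see it.
normalise = lambda xs: [x / 10.0 for x in xs]
metric = SumMetric()
collection = NormalisedMetricCollection([metric], normalise)
collection.update([10.0, 20.0], [10.0, 20.0])  # metric sees [1.0, 2.0]
```

In the real code the base class would be `torchmetrics.MetricCollection` and the metrics proper `torchmetrics.Metric` objects; only the override of `update()` carries the normalisation logic.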
After creating the split, we save the indices as an `np.array` so that we can multiply them. This ensures that every split contains the correct number of indices relative to the number of backgrounds and AGNs used in the simulated dataset.
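A sketch of one reading of "multiplying" the indices, under assumed counts and names: the split indices are kept as an `np.array` so they can be tiled once per background, giving each split the right number of entries for the simulated dataset.

```python
import numpy as np

# Assumed, illustrative counts -- not taken from the actual dataset.
n_backgrounds = 4

# Indices produced by the initial split, stored as an np.array so that
# array operations like np.tile apply directly.
base_indices = np.array([0, 1, 2])

# Repeat the split indices once per background, so the split size scales
# with the number of backgrounds used in the simulated dataset.
split_indices = np.tile(base_indices, n_backgrounds)
# split_indices now has len(base_indices) * n_backgrounds entries.
```

Keeping the indices as an `np.array` (rather than a Python list) is what makes this kind of vectorised repetition a one-liner.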