This PR implements several metrics to give insight into model training and performance.
See here for an example.
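The PR doesn't name the metrics inline, but for illustration, a pixel-accuracy and per-class IoU computation of the kind a segmentation-metrics PR might add could look like this (a pure-Python sketch; the function names and the two-class labeling are hypothetical, not taken from this PR):

```python
def pixel_accuracy(preds, targets):
    """Fraction of pixels where the predicted class matches the target."""
    correct = sum(p == t for p, t in zip(preds, targets))
    return correct / len(targets)

def iou(preds, targets, cls):
    """Intersection-over-union for a single class."""
    inter = sum(1 for p, t in zip(preds, targets) if p == cls and t == cls)
    union = sum(1 for p, t in zip(preds, targets) if p == cls or t == cls)
    return inter / union if union else 0.0

# Example on flattened label maps (0 = background, 1 = foreground).
preds   = [0, 1, 1, 0, 1]
targets = [0, 1, 0, 0, 1]
acc = pixel_accuracy(preds, targets)   # 4 of 5 pixels correct -> 0.8
fg_iou = iou(preds, targets, cls=1)    # intersection 2, union 3 -> 2/3
```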
One slightly annoying thing about Weights & Biases is that you cannot configure the plots in code; you have to do it in their dashboard. It does remember the layout for subsequent runs, but what I'm not sure about is whether it will still remember my layout when someone else in the project starts a run.
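For context, the metric values themselves are logged from code with `wandb.log`; only the plot layout lives in the dashboard. A minimal sketch, assuming the `wandb` package (the project name and metric keys are hypothetical):

```python
def make_metrics(loss, accuracy):
    """Assemble a metrics dict; W&B groups "prefix/name" keys into panel sections."""
    return {"train/loss": loss, "train/accuracy": accuracy}

def run_with_logging(num_steps=3):
    """Start a W&B run and log metrics each step (requires the wandb package)."""
    import wandb
    run = wandb.init(project="my-project")  # hypothetical project name
    for step in range(num_steps):
        run.log(make_metrics(loss=1.0 / (step + 1), accuracy=step / num_steps),
                step=step)
    run.finish()

# Calling run_with_logging() would create a run; the plots built from these
# keys still have to be arranged by hand in the W&B dashboard UI.
```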
I think this is done; awaiting discussion with Floriana and Hongyang on other metrics they would like to see.
The next step will be to add a U-Net and try to get good performance on the supervised task.