Once the other issues are addressed, I propose that we decide how we split the data into train/val/test, and then run some hyperparameter searches for the supervised learning of:
- local model:
  - number of layers
  - units per layer
  - learning rate
- 2D U-Net:
  - number of blocks
  - start channels
  - batch norm
  - learning rate
  - kernel size?
  - block depth?
- 3D U-Net:
  - as above
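As a rough sketch, the searches above could be driven by a simple random-search loop over configuration dictionaries. The space definitions and value ranges below are placeholder assumptions, not agreed choices, and `evaluate` stands in for a full train-and-validate run:

```python
import random

# Hypothetical search spaces mirroring the lists above; the value
# ranges are illustrative placeholders, not agreed choices.
SEARCH_SPACES = {
    "local_model": {
        "num_layers": [2, 3, 4],
        "units_per_layer": [64, 128, 256],
        "learning_rate": [1e-2, 1e-3, 1e-4],
    },
    "unet_2d": {
        "num_blocks": [3, 4, 5],
        "start_channels": [16, 32, 64],
        "batch_norm": [True, False],
        "learning_rate": [1e-2, 1e-3, 1e-4],
        "kernel_size": [3, 5],
        "block_depth": [1, 2],
    },
}
SEARCH_SPACES["unet_3d"] = SEARCH_SPACES["unet_2d"]  # "as above"


def sample_config(space, rng):
    """Draw one random configuration from a search space."""
    return {name: rng.choice(values) for name, values in space.items()}


def random_search(space, n_trials, evaluate, seed=0):
    """Return (best_score, best_config) over n_trials random draws.

    `evaluate` would train a model with the given config and return a
    validation metric (higher is better).
    """
    rng = random.Random(seed)
    best_score, best_cfg = float("-inf"), None
    for _ in range(n_trials):
        cfg = sample_config(space, rng)
        score = evaluate(cfg)
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_score, best_cfg
```

Random search is usually a better use of a fixed trial budget than a full grid here, since the grid over the 2D U-Net space alone is already 3 × 3 × 2 × 3 × 2 × 2 = 216 runs.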
Perhaps all runs should use learning rate decay as well.
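For the decay, a minimal framework-agnostic sketch of step decay (in PyTorch this role is typically played by `torch.optim.lr_scheduler.StepLR`); the decay rate and step size here are illustrative only:

```python
def decayed_lr(base_lr, epoch, decay_rate=0.5, decay_every=10):
    """Step decay: scale the learning rate by `decay_rate`
    every `decay_every` epochs. Rate and interval are
    placeholder values, not agreed settings."""
    return base_lr * decay_rate ** (epoch // decay_every)
```

The decay rate and interval would themselves be candidates for the hyperparameter search.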
This will be an interesting comparison in itself, and it should hopefully give us a model that performs well, at least on samples similar to the training data. It should close out the supervised part and give us a good starting point for unsupervised fine-tuning.