Closed: ciphercharly closed this issue 4 years ago.
Hi @ciphercharly, to configure whether or not the data is labeled, you can change the value of `labeled` to `false` under `dataset`, e.g. https://github.com/DeepRegNet/DeepReg/blob/main/config/test/paired_h5.yaml#L8.

If you make that modification in `config/test/paired_h5.yaml`, then running the following command will give you a working test run (assuming you've run `pip install -e .` to install DeepReg):

```bash
deepreg_train -g "" -c config/test/paired_h5.yaml config/test/ddf.yaml
```
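For reference, after that change the `dataset` section would look roughly like the sketch below. This is only a sketch: the directory paths and image shapes are placeholders, so copy the actual values from the shipped test config rather than from here.

```yaml
dataset:
  dir:
    train: "data/test/h5/paired/train"   # placeholder paths, use the ones in the shipped test config
    valid: ""
    test: "data/test/h5/paired/test"
  format: "h5"
  type: "paired"
  labeled: false                         # the change discussed above: no label data expected
  moving_image_shape: [16, 16, 16]       # placeholder shapes
  fixed_image_shape: [16, 16, 16]
```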
Please give it a try and see if this works for you. We will consider how to improve the documentation to make it clearer ;)
thanks! yeah, I set `labeled: false` in my custom config, and will try the test run too asap
I have been trying different combinations of `lncc` and `gmi` with `ddf`, and both `local` and `unet` as network backbones, with no particular success so far. The only consistent observations are that the learning rate matters a lot and that the regularization weight needs to be larger than the data-term loss weight for the deformations not to be totally off, but I am struggling to find a promising setup.
what would be the best setting a priori? any advice?
I'm afraid there are no effective guidelines for choosing between these options just yet; these are all open research questions.
We also found that the learning rate and the regularisation weight affect registration network training substantially, perhaps more than they should compared with other, "more supervised" non-registration tasks such as classification. Some of this impact will diminish if you train for longer; we regularly train registration networks for days (if not weeks) on high-performance GPUs.
The deformation regularisation weight being numerically larger than the other weights should not be a problem at all; I use weights 100x or even 1e4x larger for some applications. It depends on the particular implementation, e.g. whether sum or mean reduction is used, how the finite differences are sampled, etc. I would not worry too much about this.
In summary, though not very helpful, one just needs to find a set of these hyperparameters that works for their application, unless other underlying problems are indicated.
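To make the weighting concrete, a hypothetical `train` section along these lines illustrates the setup being discussed. The exact schema depends on your DeepReg version, so check `config/test/ddf.yaml` for the keys it actually uses; the names and values below are placeholders, not recommendations.

```yaml
train:
  method: "ddf"
  backbone:
    name: "local"            # or "unet", as tried above
  loss:
    image:
      name: "lncc"           # or "gmi" for the multi-modal T2/VIBE pairs
      weight: 1.0
    regularization:
      name: "bending"        # smoothness penalty on the dense displacement field
      weight: 100.0          # deliberately much larger than the image loss weight
  optimizer:
    name: "Adam"
    learning_rate: 1.0e-4    # worth sweeping, training is sensitive to this
```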
OH... I took this as a bug report lol... It seems to be more about performance/tuning.
yeah, finding a good set of hyperparameters seems to be hard... I will try some more. They want me to try other methods, but I think it's worth sticking around. Do you think it's worth running some classical registration trials to see how it works in that case and compare? The demos for the 'classical' scenarios are written for labeled data in h5 format, but perhaps I could customize one of them to load NIfTI files for an unlabeled case?
@YipengHu Do you have any suggestions here ;)
Regarding the hyperparameters, this is a common problem in deep learning. We are considering a potential new feature for auto-tuning in #476; however, it will probably not happen this year.
Sorry, I overlooked this. Yes, the classical algorithms are definitely worth trying; they also take a while to run. Let me know if you run into any problems.
To wrap up, @ciphercharly, would you like to use the demo with classical methods for labeled data to build another demo for unlabeled data? I suggest closing this issue and opening a new ticket dedicated to that demo ;) The contribution is definitely welcome.
thanks for the answers! yes this can be closed. I will look into trying the classical scenario for unlabeled data
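In case it helps whoever picks up that demo, here is a minimal, hypothetical sketch of the data-loading side of such a customization, using nibabel for NIfTI and h5py for the paired layout. The file names, the [0, 1] rescaling, and the h5 key names are assumptions, so check the loader in the demo you start from for the keys it actually expects.

```python
# Hypothetical helper for adapting a classical-registration demo to unlabeled NIfTI pairs.
# File names, rescaling, and h5 key names are assumptions, not DeepReg's actual conventions.
import h5py
import nibabel as nib
import numpy as np


def load_nifti_pair(moving_path, fixed_path):
    """Load a moving/fixed NIfTI pair as float32 arrays rescaled to [0, 1]."""
    def _load(path):
        vol = nib.load(path).get_fdata().astype(np.float32)
        vol -= vol.min()
        if vol.max() > 0:
            vol /= vol.max()
        return vol

    return _load(moving_path), _load(fixed_path)


def write_unlabeled_h5(moving, fixed, out_path="paired_unlabeled.h5"):
    """Write one image pair to an h5 file with no label datasets (the unlabeled case)."""
    with h5py.File(out_path, "w") as f:
        f.create_dataset("moving_image", data=moving)
        f.create_dataset("fixed_image", data=fixed)


if __name__ == "__main__":
    # Hypothetical file names for one T2/VIBE pair.
    moving, fixed = load_nifti_pair("t2.nii.gz", "vibe.nii.gz")
    write_unlabeled_h5(moving, fixed)
```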
setup for paired unlabeled data
I noticed that the demos for the paired scenarios do not cover the case where the data is unlabeled, and I was wondering if there's still hope to use DeepReg effectively in that case, and which network architecture / loss function config would be best:
I am trying to register T2 channel images to the corresponding VIBE channel images for each patient, so I picked the paired scenario. Intensity-based loss, and especially `ssd`, is not of great help in this case. I have been trying different combinations of `lncc` and `gmi` with `ddf`, and both `local` and `unet` as network backbones, with no particular success so far. The only consistent observations are that the learning rate matters a lot and that the regularization weight needs to be larger than the data-term loss weight for the deformations not to be totally off, but I am struggling to find a promising setup.
what would be the best setting a priori? any advice?