Closed RonanENX closed 4 years ago
Hello @RonanENX
I believe it would be an exciting project to adapt our self-supervised learning framework into a generic model for the PET modality (Genesis PET). Although we have not worked closely with PET images, we conducted several experiments on cross-modality transfer learning and observed some useful phenomena. More investigation is needed before I can offer conclusive advice, so please shoot me an email if you would like to build a closer collaboration.
Here are some quick answers for the subset of your questions:
For how many epochs did you train your models? We adopted an early-stop mechanism with patience = 50. The validation loss usually stops decreasing after about 100 epochs on four NVIDIA V100 GPUs.
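As a sketch (this is not the repository's actual training loop, and `train_step`/`val_loss_fn` are hypothetical callables), patience-based early stopping simply tracks how many epochs have passed without a new best validation loss:

```python
def train_with_early_stopping(train_step, val_loss_fn, max_epochs=10000, patience=50):
    """Stop training once validation loss has not improved for `patience` epochs.

    Returns the number of epochs actually run.
    """
    best_loss = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_step(epoch)              # one pass over the training set
        loss = val_loss_fn(epoch)      # evaluate on the validation set
        if loss < best_loss:
            best_loss = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return epoch + 1       # patience exhausted: stop early
    return max_epochs
```

In Keras the same behavior is available out of the box via `tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=50)`.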
Do you have any additional advice regarding the fine-tuning? In our paper, we initialized models with the pre-trained Genesis Chest CT and fine-tuned every layer on the target tasks. You could also freeze some layers and use the pre-trained model as a feature extractor, depending on your target dataset size and the domain gap in transfer learning. You can find more tips at https://cs231n.github.io/transfer-learning/
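The rules of thumb from those notes can be paraphrased as a rough (hypothetical) decision helper; this is a summary of common practice, not the repository's code:

```python
def transfer_strategy(target_dataset_small: bool, domain_gap_large: bool) -> str:
    """Rough paraphrase of the cs231n transfer-learning rules of thumb."""
    if target_dataset_small and not domain_gap_large:
        # Too little data to fine-tune safely; use frozen features.
        return "freeze pre-trained layers; train only a new task head"
    if target_dataset_small and domain_gap_large:
        # Later layers are source-specific; rely on earlier, more generic features.
        return "use features from earlier layers; train a new task head"
    if not target_dataset_small and not domain_gap_large:
        return "fine-tune the whole network from the pre-trained weights"
    # Large dataset, large gap: training is feasible either way,
    # but pre-trained weights still make a good initialization.
    return "fine-tune the whole network; pre-training helps initialization"
```

For example, a small PET dataset with a large gap from chest CT would fall into the second branch.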
Thank you, Zongwei
Hello Zongwei, first of all, thank you for your work. It seems like a great idea and it is very clearly explained. I was wondering whether a version of your work will be developed for the PET modality. Do you have any advice for training a model on PET images with your approach (especially regarding the patch size)? I am working on a dataset of 128x128x(N slices) PET images. Do you think that patches of size 64x64x64 would help a network (VNet, for example) learn from 128x128x(N slices) PET images? For how many epochs did you train your models? (I think I saw 10,000 in one of your answers, before any early stopping?) I also want to use your VNet Chest-CT implementation. Do you have any additional advice regarding the fine-tuning (fine-tuning only the last layers, cubes of size 64)?
Thank you very much for your answers!
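For anyone with the same question: randomly cropping a 64x64x64 cube from a 128x128x(N slices) volume can be sketched with NumPy as below (a hypothetical helper, assuming N >= 64; this is not the repository's data loader):

```python
import numpy as np

def random_patch(volume, size=(64, 64, 64), rng=None):
    """Randomly crop a sub-volume of shape `size` from a 3D array.

    Assumes every axis of `volume` is at least as large as `size`.
    """
    rng = np.random.default_rng() if rng is None else rng
    starts = [int(rng.integers(0, d - s + 1)) for d, s in zip(volume.shape, size)]
    return volume[starts[0]:starts[0] + size[0],
                  starts[1]:starts[1] + size[1],
                  starts[2]:starts[2] + size[2]]

vol = np.zeros((128, 128, 200), dtype=np.float32)  # stand-in for a PET volume
patch = random_patch(vol)
```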