JiaxinZhuang closed this issue 9 months ago.
In the end, we used 96,96,96 for training. The log was on an NVIDIA server; I'm contacting them to try to retrieve it.
Sure. I find it hard to reach comparable performance on Task10 with 5-fold cross-validation when I tried UNet, Swin-UNETR, and UNETR, even when using the same settings (augmentation, hyper-parameters) as the CLIP-Driven code. Is there anything I missed? The code from MONAI works fine on the other tasks.
Since the training files and logs were too large (roughly 30 TB), they have been deleted; we only kept one of the best model weights. Task10 was indeed hard to fine-tune: we increased the batch size (beyond the default parameter) to obtain those results. You could give that a try. Feel free to ask any further questions.
For MSD datasets such as Task10, it seems the crop size would be [96,96,96]. However, I also noticed that the annotation at https://github.com/ljwztc/CLIP-Driven-Universal-Model/blob/c8e829eee7769fbc3120b9fe7687bb73402dfc87/dataset/dataloader.py#L260C79-L260C81 says 192,192,64. Which is the correct size to feed the network? Also, could you provide logs containing the training hyper-parameters, or more details about them, especially for MSD Task10, Task06, and Task07? Thank you.
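For anyone comparing the two candidate sizes: the ROI chosen at training time also shapes sliding-window inference, since it determines how many patches cover a volume. Below is a minimal, self-contained sketch (pure Python, not code from this repo) of the usual patch-count arithmetic, as used by schemes like MONAI's `sliding_window_inference`; the example volume shape is illustrative, not taken from any log.

```python
import math

def sliding_window_patches(volume_shape, roi_size, overlap=0.5):
    """Count sliding-window patches per axis for a 3D volume.

    stride = roi * (1 - overlap); patches are placed so that the
    whole volume is covered along each axis.
    """
    counts = []
    for dim, roi in zip(volume_shape, roi_size):
        stride = max(1, int(roi * (1 - overlap)))
        # Number of windows needed to span this axis (at least 1).
        counts.append(max(1, math.ceil((dim - roi) / stride) + 1))
    return counts

# Illustrative CT volume shape (512 x 512 in-plane, 128 slices)
vol = (512, 512, 128)
print(sliding_window_patches(vol, (96, 96, 96)))    # -> [10, 10, 2]
print(sliding_window_patches(vol, (192, 192, 64)))  # -> [5, 5, 3]
```

So at 50% overlap, a [96,96,96] ROI yields 10*10*2 = 200 patches for this volume versus 5*5*3 = 75 for [192,192,64], which is one reason the two settings are not interchangeable for runtime or memory.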