Closed — kare305 closed this issue 4 months ago.
Hi, many thanks for your attention to our work! No problem sharing the pre-trained checkpoints; I will release them after double-checking. By the way, have you downloaded the CC-CCII dataset? Do I need to share it as well?
Thank you very much; I have already downloaded the CC-CCII dataset. Looking forward to your release of the pre-trained weights.
Dear kare, we recently found that the simplest 3D-UNet outperforms Swin-UNETR on the CC-CCII dataset. If you are only interested in this dataset rather than in pre-training, I suggest that 3D-UNet is the better choice for CC-CCII. I have provided the pre-trained checkpoints and logs here (https://www.dropbox.com/scl/fo/okgg7unv4iazhi13wj1lp/AG8FVkQmnN9svvIaZUVvA9I?rlkey=angfy2wh3q8wtscgbf8pmmhp9&st=aq94torh&dl=0). If you prefer Swin-UNETR, I will release it soon as well.
Wow, your response speed is impressive! Thank you for the valuable advice. I plan to pre-train a model on a large-scale public pneumonia dataset such as CC-CCII, and then fine-tune it on my own pneumonia-classification dataset. I will first try the weights and models you provided. Thank you again for the prompt reply.
Hi, the training logs and Swin-UNETR checkpoints for CC-CCII (with pre-training) have been uploaded to https://www.dropbox.com/scl/fo/svqaxxdpb75fo297ulljt/AMoWwRFKzTtxd4kVS2ZK03Q?rlkey=f1juzrxenx1qmq1sqq7nwaywg&st=39oqhkls&dl=0, and I have revised the CC-CCII training code at https://github.com/Luffy03/VoCo/tree/main/Finetune.
Okay, thank you very much for your reply. I am actually confused about the method used in this paper. For downstream classification tasks, is it sufficient to directly resize each image and feed it into the network (with backbone weights loaded), or do we need to perform the base-crop operation from the VoCo framework and then train on those crops?
The former is correct. The base-crop operation is not needed for any fine-tuning task; it is designed for pre-training only.
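For readers setting this up, a minimal sketch of the "load backbone weights, then fine-tune a classifier" step, using a plain dictionary to stand in for a PyTorch state dict. All key names here (`module.`, `swinViT.`, `head.`) are illustrative assumptions, not taken from the repository; check the actual checkpoint keys before loading:

```python
def extract_backbone(state_dict, backbone_prefix="swinViT.", wrapper_prefix="module."):
    """Strip an optional DataParallel-style wrapper prefix and keep only
    the backbone entries, so they can be loaded into a downstream
    classification model (e.g. via load_state_dict(..., strict=False))."""
    backbone = {}
    for name, value in state_dict.items():
        if name.startswith(wrapper_prefix):
            name = name[len(wrapper_prefix):]
        if name.startswith(backbone_prefix):
            backbone[name] = value
    return backbone

# toy checkpoint with placeholder values instead of tensors
ckpt = {
    "module.swinViT.layer1.weight": 1,
    "module.head.weight": 2,  # pre-training head, discarded downstream
}
print(extract_backbone(ckpt))  # only the swinViT entry survives
```

The pre-training head is dropped on purpose: the downstream classification head is trained from scratch, so only the backbone weights are transferred.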
Thank you for your answer. But if we follow the former approach, wouldn't the network focus only on local image features? Can those represent the entire CT image? In particular, how does this work for downstream segmentation tasks?
Hi, please read our code first: base crops are used for pre-training, while random crops are used for fine-tuning.
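To make the distinction concrete, here is a sketch of the random spatial crop used at fine-tuning time, written with NumPy. The ROI size and array shapes are placeholder assumptions; the actual transforms in the repository (e.g. MONAI-based pipelines) may differ:

```python
import numpy as np

def random_crop(volume, roi=(64, 64, 64)):
    """Randomly crop a 3D sub-volume of shape `roi` from `volume`.
    At fine-tuning time the model sees such random crops; the fixed
    grid of base crops is only used during VoCo pre-training."""
    starts = [np.random.randint(0, dim - r + 1) for dim, r in zip(volume.shape, roi)]
    slices = tuple(slice(s, s + r) for s, r in zip(starts, roi))
    return volume[slices]

vol = np.zeros((96, 96, 96), dtype=np.float32)
crop = random_crop(vol)
print(crop.shape)  # (64, 64, 64)
```

For segmentation, inference on a full CT volume is then typically done with sliding-window prediction over such ROIs, so local crops still cover the whole image.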
Dear researchers, our work is now available at Large-Scale-Medical, if you are still interested in this topic. Thank you very much for your attention to our work; it truly encourages me!
Hello, congratulations on completing this exciting work! Regarding the pre-trained weights, I would like to know whether it would be possible to provide weights for the downstream classification task on the CC-CCII dataset?