Younger330 closed this issue 1 year ago.
Hi,
In our approach, we utilize a deep supervision strategy to generate multi-scale segmentation maps. The channel number for these maps is set to match the maximum number of classes across all tasks. During each iteration, we crop the output segmentation map along the channel dimension to ensure that its channel count aligns with that of the corresponding label. This operation is implemented in line 201 of the 'UniSeg_Trainer.py' file.
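For illustration, here is a minimal sketch of that cropping step (not the exact repository code; the class counts and tensor shape are placeholders):

```python
import torch

max_num_classes = 8   # assumed maximum number of classes over all tasks
task_num_classes = 3  # assumed class count of the current task's labels

# Logits from one deep-supervision head: (batch, channels, D, H, W).
output = torch.randn(2, max_num_classes, 16, 32, 32)

# Crop along the channel dimension so the prediction matches the label.
output = output[:, :task_num_classes]
print(output.shape)  # torch.Size([2, 3, 16, 32, 32])
```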
Best regards,
Yiwen Ye
Thank you again. This configuration is very intriguing as it allows for rapid adaptation to downstream tasks. Just to clarify, am I correct in understanding that the data within a batch should originate from the same dataset?
Yes. In this work, we enforce a design constraint whereby all data within a given batch come from the same dataset. This avoids potential issues such as inconsistent channel counts across different datasets within the same batch.
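One way to enforce this constraint is a batch sampler that only groups indices belonging to the same dataset. Below is a simplified sketch, not our exact sampling code; the class name and the dataset_ids argument are illustrative:

```python
import random
from collections import defaultdict
from torch.utils.data import Sampler

class SingleDatasetBatchSampler(Sampler):
    """Yields batches whose samples all come from the same dataset.

    dataset_ids[i] names the dataset that sample index i belongs to.
    """

    def __init__(self, dataset_ids, batch_size):
        self.batch_size = batch_size
        self.by_dataset = defaultdict(list)
        for idx, ds_name in enumerate(dataset_ids):
            self.by_dataset[ds_name].append(idx)

    def __iter__(self):
        batches = []
        for indices in self.by_dataset.values():
            indices = indices[:]  # copy so each epoch reshuffles independently
            random.shuffle(indices)
            for i in range(0, len(indices) - self.batch_size + 1, self.batch_size):
                batches.append(indices[i:i + self.batch_size])
        random.shuffle(batches)  # interleave datasets across iterations
        yield from batches

    def __len__(self):
        return sum(len(v) // self.batch_size for v in self.by_dataset.values())
```

Such a sampler can be passed to a PyTorch DataLoader through its batch_sampler argument.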
That's truly helpful. Have you conducted an ablation study for this implementation?
We haven't conducted experiments to test the impact of mixing datasets within a single batch. However, since our model has no components through which inputs within a batch interact, I believe such a modification is unlikely to add explicit value.
Your paper has been very helpful for our work. While reading the code in UniSeg_trainer.py, I wanted to understand how to set self.num_classes. Since many datasets have different numbers of classes, I noticed that you only set num_classes for the output module seg_outputs:
self.seg_outputs.append(conv_op(self.conv_blocks_localization[ds][-1].output_channels, num_classes, 1, 1, 0, 1, 1, seg_output_use_bias))
Could you please explain how it works when dealing with datasets with varying numbers of classes?
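For reference, my current understanding is that num_classes here is the maximum class count over all tasks, and that conv_op resolves to nn.Conv3d, so the positional arguments are in_channels, num_classes, kernel_size, stride, padding, dilation, groups, and the bias flag. A minimal sketch of that reading, with made-up task names and class counts:

```python
import torch.nn as nn

# Hypothetical per-task class counts (background included); not from the repo.
num_classes_per_task = {"task_a": 3, "task_b": 5, "task_c": 8}
num_classes = max(num_classes_per_task.values())  # one shared head size

in_channels = 32  # assumed output channels of the last decoder block
# Mirrors the quoted call: kernel_size=1, stride=1, padding=0,
# dilation=1, groups=1, bias flag last.
seg_head = nn.Conv3d(in_channels, num_classes, 1, 1, 0, 1, 1, bias=False)
```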