[Open] Eldo-rado opened this issue 1 year ago
Hi 👋, thanks for your great work! I have some questions about the Text-based MLC that I'd like to confirm.

In `class pretrain_dataset`, I found that all of the MIMIC-CXR data is used for the multi-label classification pre-training. Won't this cause information leakage into the subsequent downstream tasks? Thank you in advance, I look forward to hearing from you!

Hi, thanks for your interest in our work and sorry for the late reply!

Thanks for your reply! I would also like to confirm: is the Text-based MLC trained with the whole network in Fig. 8? (The multi-label classification performance based on the updated feature $f_g^{kv}$ can also be effectively enhanced.) But then why is it called a pre-training task?
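Regarding the leakage concern raised above: a common safeguard is to filter the pretraining set against the study IDs reserved for downstream validation/test before building the dataset. The sketch below is hypothetical, not the repository's actual code; the record format and the `make_pretrain_split` helper are illustrative assumptions.

```python
import random

def make_pretrain_split(records, held_out_ids):
    """Drop every record whose study ID appears in a downstream
    val/test split, so MLC pretraining never sees held-out data.
    (Hypothetical helper; record schema is an assumption.)"""
    held_out_ids = set(held_out_ids)
    return [r for r in records if r["study_id"] not in held_out_ids]

# Toy records standing in for MIMIC-CXR entries (14 CheXpert-style labels).
records = [
    {"study_id": i, "labels": [random.randint(0, 1) for _ in range(14)]}
    for i in range(10)
]
held_out = {7, 8, 9}  # IDs reserved for downstream evaluation

pretrain = make_pretrain_split(records, held_out)
assert all(r["study_id"] not in held_out for r in pretrain)
```

With such a filter, any gain measured on the downstream test split cannot come from the pretraining stage having memorized those exact studies.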