Jielin-Qiu / Transfer_Knowledge_from_Language_to_ECG

[EACL 2023] Transfer Knowledge from Natural Language to Electrocardiography: Can We Detect Cardiovascular Disease Through Language Models?

Is this actually supervised training for disease classification? #1

Closed: cherise215 closed this 1 year ago

cherise215 commented 1 year ago

First of all, great work! But I am a bit confused when reading your code. It seems that, during training, you compute two losses: the OT loss and a supervised loss for the disease classification task. Code: https://github.com/Jason-Qiu/Transfer_Knowledge_from_Language_to_Electrocardiography/blob/525266e5f35a016c309fe07e3fe9da17c66144ae/utils.py#L26
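
For concreteness, the pattern I am referring to looks roughly like this (a minimal sketch with hypothetical model and variable names, using geomloss for the OT term; this is an illustration, not the code from utils.py):

```python
import torch
import torch.nn.functional as F
from geomloss import SamplesLoss  # one common OT (Sinkhorn) loss implementation

# Hypothetical setup, for illustration only.
ot_loss_fn = SamplesLoss(loss="sinkhorn", p=2, blur=0.05)

def training_step(ecg_encoder, text_encoder, classifier, ecg, report, labels):
    ecg_emb = ecg_encoder(ecg)       # (batch, dim) ECG signal embeddings
    text_emb = text_encoder(report)  # (batch, dim) report embeddings

    # OT loss: aligns the ECG and text embedding distributions.
    loss_ot = ot_loss_fn(ecg_emb, text_emb)

    # Supervised cross-entropy on disease labels -- this is the part
    # that looks like supervised classification training to me.
    logits = classifier(ecg_emb)
    loss_ce = F.cross_entropy(logits, labels)

    return loss_ot + loss_ce
```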

If this is the case, why do you report the result as zero-shot cardiovascular disease detection in the paper? Can you explain it in more detail?

Jielin-Qiu commented 1 year ago

Thanks for your interest in our work! We finetune the LLM with a non-classification objective: the finetuning only aims at producing better textual descriptions of the input ECG signals. We call the setting zero-shot because the model is never trained on a supervised classification task, so the embeddings are not finetuned for disease detection. After finetuning, we can use the transformed embeddings directly in the classification task, but the model did not learn any disease-detection-specific knowledge during the finetuning step, which is why we call it zero-shot. Please let us know if you have any other questions :)
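
To illustrate (a minimal sketch with hypothetical names, not our actual code), one way finetuned embeddings can be used zero-shot is by comparing them to text embeddings of the class descriptions, so that no classifier is ever trained on disease labels:

```python
import torch
import torch.nn.functional as F

# Hypothetical illustration of zero-shot use of the finetuned embeddings;
# names and details here are assumptions, not the actual implementation.
@torch.no_grad()
def zero_shot_classify(ecg_encoder, text_encoder, ecg, class_descriptions):
    ecg_emb = F.normalize(ecg_encoder(ecg), dim=-1)                    # (batch, dim)
    class_emb = F.normalize(text_encoder(class_descriptions), dim=-1)  # (num_classes, dim)

    # Cosine similarity of each ECG embedding to each class description;
    # no weights here were ever trained on disease labels.
    sims = ecg_emb @ class_emb.T                                       # (batch, num_classes)
    return sims.argmax(dim=-1)
```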

cherise215 commented 1 year ago

Thanks for the reply. It is still not clear to me; maybe I missed something in the paper? I did not find much information about the finetuning stage. Do you mean that you use the classification labels for the pre-training stage of the ECG encoder and then finetune the LLM without classification labels? I am not sure that counts as a strict zero-shot setting. I am also confused about the finetuning stage itself: how do you stabilize training when the ground-truth report embedding is also changing? Forgive me if I am asking stupid questions.

Jielin-Qiu commented 1 year ago

The classification labels are not used in the training phase. For more details, please see our updated version here: https://aclanthology.org/2023.findings-eacl.33.pdf (Section 3: Problem Formulation, Model Architecture, Downstream Applications, etc.). Thanks.

cherise215 commented 1 year ago

In your paper, you say the model is trained "in addition to the traditional cross-entropy loss." Doesn't computing that loss require the classification labels during training? I don't really get it.