EMNLP'22 | MedCLIP: Contrastive Learning from Unpaired Medical Images and Texts
The model weights do not seem to load correctly: the pre-trained model's classification accuracy on CheXpert varies greatly, with an average F1 of 0.42 #43
Using the 5-category demo from the examples, with a logit threshold of 0.50, and testing on 500 frontal X-ray images from the CheXpert public test set, I obtained the results below:
In addition, the classification results for the sample image do not match the displayed results:
The expected results should be: `{'logits': tensor([[0.5154, 0.4119, 0.2831, 0.2441, 0.4588]])}`
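For reference, a minimal sketch of the thresholding step described above: binarizing the demo's 5-class logits at 0.50 to get per-class predictions. The logit values are the ones quoted in this issue; the class names and their order are assumed to be the five CheXpert competition tasks used in the MedCLIP demo, and are not taken from the repo's code.

```python
# Hedged sketch (not the repo's code): apply a 0.50 threshold to the
# demo's per-class logits to obtain binary predictions.
CLASSES = ["Atelectasis", "Cardiomegaly", "Consolidation",
           "Edema", "Pleural Effusion"]  # assumed class order
THRESHOLD = 0.50

# Expected logits quoted in this issue for the sample image.
logits = [0.5154, 0.4119, 0.2831, 0.2441, 0.4588]

# A class is predicted positive when its logit reaches the threshold.
preds = {cls: int(score >= THRESHOLD) for cls, score in zip(CLASSES, logits)}
print(preds)  # only the first class crosses 0.50 with these values
```

With these expected logits only one class is predicted positive, which is why a shift in the loaded weights (and hence in the logit distribution) changes the per-class F1 so strongly at a fixed 0.50 threshold.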