RyanWangZf / MedCLIP

EMNLP'22 | MedCLIP: Contrastive Learning from Unpaired Medical Images and Texts

The model weights do not seem to be loaded correctly, and the classification accuracy of the pre-trained model on CheXpert varies greatly, with an average F1 of 0.42 #43

Closed XNLHZ closed 5 months ago

XNLHZ commented 5 months ago

Using the 5-category demo from the example, a logit threshold of 0.50, and 500 frontal X-ray images from the CheXpert public test set, I get the following results:

[image: test results]

In addition, the classification result for the sample image does not match the displayed result:

[image: sample image prediction]

The results should be: {'logits': tensor([[0.5154, 0.4119, 0.2831, 0.2441, 0.4588]]
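For reference, here is a minimal sketch of the evaluation I describe, following the zero-shot `PromptClassifier` demo from the README and applying the 0.50 threshold to the per-class logits. The `image_paths` and `labels` variables are placeholders for my 500 frontal CheXpert test images and their 5-class binary ground truth; everything else mirrors the README demo as I understand it:

```python
import torch
from PIL import Image
from sklearn.metrics import f1_score

from medclip import MedCLIPModel, MedCLIPVisionModelViT, MedCLIPProcessor, PromptClassifier
from medclip.prompts import generate_chexpert_class_prompts, process_class_prompts

# Load the pre-trained ViT variant and wrap it in the zero-shot prompt classifier,
# as in the README demo.
processor = MedCLIPProcessor()
model = MedCLIPModel(vision_cls=MedCLIPVisionModelViT)
model.from_pretrained()
clf = PromptClassifier(model, ensemble=True)
clf.cuda()

# Prompt ensemble for the 5 CheXpert competition classes.
cls_prompts = process_class_prompts(generate_chexpert_class_prompts(n=10))

def predict(image_path):
    """Return 0/1 predictions for the 5 classes using a 0.50 logit threshold."""
    image = Image.open(image_path).convert('RGB')
    inputs = processor(images=image, return_tensors="pt")
    inputs['prompt_inputs'] = cls_prompts
    with torch.no_grad():
        logits = clf(**inputs)['logits'].cpu()   # shape [1, 5]
    return (logits >= 0.50).int().squeeze(0)

# image_paths: list of 500 frontal X-ray paths (placeholder)
# labels: (500, 5) binary ground-truth array (placeholder)
preds = torch.stack([predict(p) for p in image_paths]).numpy()
print('average F1:', f1_score(labels, preds, average='macro'))
```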

XNLHZ commented 5 months ago

Has anyone had the same issue? Thanks. This is very good work; I'd like to use it for some follow-up work on radiology report generation.