-
## Problem Description
After running the code, it was observed that the same `stay_id` could correspond to multiple `subject_id`s due to a lack of timely updates to the `stay_id`. For instance, if at…
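A quick consistency check for this issue could look like the sketch below. The IDs and the row layout are made up for illustration; only the column meanings (`stay_id`, `subject_id`) follow the dataset's conventions.

```python
from collections import defaultdict

# Hypothetical (stay_id, subject_id) pairs; the values are made up,
# only the column meanings follow the dataset's conventions.
rows = [
    (30001, 101),
    (30001, 101),
    (30002, 102),
    (30002, 103),  # same stay_id, different subject_id -> inconsistent
]

def find_inconsistent_stays(rows):
    """Return stay_ids that map to more than one subject_id."""
    subjects_per_stay = defaultdict(set)
    for stay_id, subject_id in rows:
        subjects_per_stay[stay_id].add(subject_id)
    return {s: ids for s, ids in subjects_per_stay.items() if len(ids) > 1}

print(find_inconsistent_stays(rows))  # → {30002: {102, 103}}
```

Running a check like this over the full table would list every stay affected by the stale `stay_id` problem.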
-
Your preprocessed MIMIC-CXR annotations contain both the impression and findings sections. However, other settings usually use only the findings section of a report, as in the R2Gen model. And I ran your model on annotations …
-
99% of papers use 512x512, or at most 1024x1024, images for chest X-ray datasets.
After resizing to 512x512, the zipped dataset would hardly be 6-7 GB.
It is a criminal waste of resources to have people downl…
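A back-of-the-envelope check of that 6-7 GB figure, with assumed numbers: the per-image size (~65 KB for a 512x512 JPEG) and the image count (100,000, roughly ChestX-ray8 scale) are illustrative guesses, not measurements.

```python
# Rough storage estimate for a resized chest X-ray dataset.
# Both numbers below are assumptions for illustration.
n_images = 100_000      # assumed dataset size (ChestX-ray8 scale)
kb_per_image = 65       # assumed average size of a 512x512 JPEG

total_gb = n_images * kb_per_image / 1024 ** 2  # KB -> GB
print(f"{total_gb:.1f} GB")  # → 6.2 GB
```

Even with generous assumptions, the resized dataset stays in the single-digit-GB range, which is the point of the request.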
-
Reviewed Medical Image Datasets:
1. NIH Clinical Center:
Chest X-Ray Dataset (ChestX-ray8): contains over 100,000 frontal-view X-ray images of 30,805 unique patients with 14 disease labels…
-
Hi,
I am trying to load the pre-trained models for inference:
CT_CLIP_zeroshot.pt,
CT_LiPro.pt
I run `run_zero_shot.py` with these parameters:
clip.load(r"C:\Users\pretrained_models\CT_CLIP_zeroshot…
-
Hi, I have been trying to download the dataset, but I am not a credentialed user. Is there any way you could help me with it?
Thank You
-
Sorry to bother you.
I followed the instruction of build mimic-cxr-2.0.0-jpeg-txt.csv mentioned in [#issue-1800109179](https://github.com/Wang-Yuanlong/MultimodalPred/issues/1#issue-18001…
-
Hi,
Thank you very much for releasing the source code of your work. I noticed that you use CheXpert for multimodal pre-training of your model. However, as far as I'm aware, the CheXpert dataset doe…
-
Hello,
thank you for the great work and the great repo.
Training your model on the MIMIC-CXR dataset takes us about 50 hours per epoch.
We used the same batch_size and num_worke…
-
Hello, I have some questions about the dataset. I observed that the training, test, and validation sets in this dataset are not split according to the four disease labels, so I separa…
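A minimal sketch of such a per-label separation, assuming hypothetical annotation records with an `image` field and a `label` field; the disease names here are placeholders, since the actual four labels are not given in the post.

```python
from collections import defaultdict

# Hypothetical annotation records; field names and label values are
# made up for illustration.
records = [
    {"image": "img_001.png", "label": "Atelectasis"},
    {"image": "img_002.png", "label": "Effusion"},
    {"image": "img_003.png", "label": "Atelectasis"},
]

def split_by_label(records):
    """Group annotation records into one list per disease label."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec["label"]].append(rec)
    return dict(groups)

groups = split_by_label(records)
print(sorted(groups))              # → ['Atelectasis', 'Effusion']
print(len(groups["Atelectasis"]))  # → 2
```

Each resulting group can then be written out as its own per-disease subset file.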