ibrahimethemhamamci / CT-CLIP

Developing Generalist Foundation Models from a Multimodal Dataset for 3D Computed Tomography

Rad-Chest External Validation Specifics #24

Open surajpaib opened 4 months ago

surajpaib commented 4 months ago

Hi Team,

Thank you for releasing the CT-RATE dataset and open-sourcing your code and models. The community greatly appreciates it.

We would like to compare one of our models trained on the public Rad-Chest train split against CT-CLIP's performance. Since we can only compare on the public test split of Rad-Chest, would it be possible to share a per-volume breakdown of prediction results on the Rad-Chest dataset so we can match IDs and compare?
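For concreteness, here is a minimal sketch of how the ID matching and per-label comparison could look once the per-volume predictions are available. The file names (`ct_clip_radchest_preds.csv`, `our_model_preds.csv`, `radchest_test_labels.csv`), the `VolumeName` ID column, and the assumption that both prediction files use the same label column names as the ground-truth file are all hypothetical; nothing here comes from the CT-CLIP codebase.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical files: one row per volume, a "VolumeName" ID column,
# plus one probability (or 0/1 label) column per abnormality.
ctclip = pd.read_csv("ct_clip_radchest_preds.csv")
ours = pd.read_csv("our_model_preds.csv")
labels = pd.read_csv("radchest_test_labels.csv")  # ground-truth 0/1 per label

# Inner joins keep only volumes present in all three tables, so both
# models are scored on exactly the same subset of the public test split.
merged = labels.merge(ctclip, on="VolumeName", suffixes=("", "_ctclip"))
merged = merged.merge(ours, on="VolumeName", suffixes=("", "_ours"))

label_cols = [c for c in labels.columns if c != "VolumeName"]
for col in label_cols:
    y_true = merged[col]
    if y_true.nunique() < 2:  # AUROC is undefined for a single class
        continue
    auc_ctclip = roc_auc_score(y_true, merged[f"{col}_ctclip"])
    auc_ours = roc_auc_score(y_true, merged[f"{col}_ours"])
    print(f"{col}: CT-CLIP AUROC={auc_ctclip:.3f}, ours={auc_ours:.3f}")
```

Restricting to the intersection of volume IDs is what makes the comparison apples-to-apples, which is the motivation for requesting the per-volume breakdown rather than aggregate scores.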

Thanks in advance

sezginerr commented 3 months ago

Hi @surajpaib, thank you very much for your interest. I believe you mean the mapping between the RadChest volume names and the corresponding CT-CLIP predictions. We can definitely provide it, but it would be great if you could wait until the updated training version is available (we plan to release the update shortly).

Regarding the comparison: we did not use any RadChest-CT data for training and instead used all of it as an external evaluation set. There is a severe distribution shift between our data and theirs, which makes it hard to fairly compare against a model trained directly on RadChest-CT, since a model evaluated on its own training distribution will generally score much higher. I am not sure about your research question, but I thought you might want to know this.

surajpaib commented 3 months ago

Hi @sezginerr, fully agree that comparing an OOD evaluation with an ID evaluation is not fair. Our intention was to have a performance "baseline" on RadChest-CT apart from our own models (CT-CLIP seems to be the only model I could easily find that has been evaluated on the public split of RadChest-CT).

I am happy to wait for the updated trained version. Will the updated training include in-distribution (ID) samples from RadChest?