Hi! Thanks for your great work facilitating reproducible research! In the MIMIC-CXR-JPG paper you mention using 687 manually annotated reports to evaluate the performance of the two automatic labelers. The ChexBert paper (https://arxiv.org/pdf/2004.09167.pdf) also mentions using the 687 expert-labeled reports as an evaluation set.
Is it possible to release the study IDs and manual annotations for these 687 reports, so that new labeling methods can be fairly compared against existing techniques? Thank you!
These are not publicly available as of now. Basically, I don't think the radiologist labels are good enough to benchmark ML methods against. We only had one annotator, and there would probably need to be 3+ to have a consistent set.
EDIT: Regarding ChexBert, the intention in sharing the labels with them was to obtain more radiologist annotations for a more robust ground truth, but I guess they ran out of time.