
FSE

Factual Serialization Enhancement: A Key Innovation for Chest X-ray Report Generation. Paper: https://arxiv.org/abs/2405.09586

Citations

If you use or extend our work, please cite our paper:

@misc{liu2024factual,
      title={Factual Serialization Enhancement: A Key Innovation for Chest X-ray Report Generation}, 
      author={Kang Liu and Zhuoqi Ma and Mengmeng Liu and Zhicheng Jiao and Xiaolu Kang and Qiguang Miao and Kun Xie},
      year={2024},
      eprint={2405.09586},
      archivePrefix={arXiv},
      primaryClass={eess.IV}
}

Requirements

Checkpoints

You can download checkpoints of FSE as follows:

Datasets

We use two datasets (IU X-Ray and MIMIC-CXR) in our paper.

NOTE: The IU X-Ray dataset is small, so the variance of the results on it is large.

Reproducibility on MIMIC-CXR

Extracting factual serialization using structural entities approach

  1. Configure the RadGraph environment following the "environmental setting" section in knowledge_encoder/factual_serialization.py:

    Basic Setup (One-time activity)

    a. Clone the DyGIE++ repository from here. This repository is managed by Wadden et al., authors of the paper Entity, Relation, and Event Extraction with Contextualized Span Representations.

    git clone https://github.com/dwadden/dygiepp.git

    b. Navigate to the root of the repo and use the following commands to set up the conda environment:

    conda create --name dygiepp python=3.7
    conda activate dygiepp
    cd dygiepp
    pip install -r requirements.txt
    conda develop .   # Adds DyGIE to your PYTHONPATH

    c. Activate the conda environment:

    conda activate dygiepp
  2. Configure radgraph_model_path and ann_path in knowledge_encoder/factual_serialization.py. The former can be downloaded from here, and the latter, annotation.json, can be obtained from here. Note that you need to apply for access with your PhysioNet license.
  3. Set the local paths for images and checkpoints in config/finetune_config.yaml, such as mimic_cxr_image_dir and chexbert_model_checkpoint.
  4. Run knowledge_encoder/factual_serialization.py to extract the factual serialization for each sample (a minimal sketch follows this list).
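
A minimal sketch of steps 2-4, assuming the DyGIE++/RadGraph environment from step 1 is already set up; the paths mentioned in the comments are placeholders you must fill in yourself.

    # Steps 2-3 are file edits: set radgraph_model_path and ann_path in
    # knowledge_encoder/factual_serialization.py, and mimic_cxr_image_dir /
    # chexbert_model_checkpoint in config/finetune_config.yaml.
    # Step 4: extract the factual serialization inside the RadGraph environment.
    conda activate dygiepp
    python knowledge_encoder/factual_serialization.py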

Notably, scibert_scivocab_uncased can be downloaded from here. To calculate the NLG and CE metrics, you should also download the following checkpoints: chexbert.pth can be downloaded from here, distilbert-base-uncased from here, bert-base-uncased from here, and radgraph from here.
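
For the checkpoints hosted on the Hugging Face Hub, a sketch like the following should work; the Hub IDs are assumptions based on the standard model names, and chexbert.pth plus the RadGraph model must still be obtained from their original sources (e.g., PhysioNet).

    # Assumed Hugging Face Hub IDs; verify them against the links above.
    git lfs install
    git clone https://huggingface.co/allenai/scibert_scivocab_uncased
    git clone https://huggingface.co/distilbert-base-uncased
    git clone https://huggingface.co/bert-base-uncased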

Conducting the first stage (i.e., training the cross-modal alignment module)

Run bash pretrain_mimic_cxr.sh to pretrain a model on the MIMIC-CXR data.
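
For example:

    # Stage 1: pretrain the cross-modal alignment module on MIMIC-CXR
    bash pretrain_mimic_cxr.sh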

Similar historical cases for each sample

  1. Configure the --load argument in pretrain_inference_mimic_cxr.sh (see the sketch after this list).
  2. Run bash pretrain_inference_mimic_cxr.sh to retrieve similar historical cases for each sample, forming mimic_cxr_annotation_sen_best_reports_keywords_20.json.
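
A sketch of the two steps, assuming --load should point to the checkpoint produced by the first stage (the path is a placeholder):

    # In pretrain_inference_mimic_cxr.sh, set something like:
    #   --load /path/to/stage1_checkpoint.pth    # placeholder; use your pretrained checkpoint
    bash pretrain_inference_mimic_cxr.sh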

Conducting the second stage (i.e., training the report generation module)

  1. Configure the --load argument in finetune_mimic_cxr.sh (see the sketch after this list).
  2. Run bash finetune_mimic_cxr.sh to generate reports based on similar historical cases.
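
A sketch, again assuming --load points to the stage-1 checkpoint (placeholder path):

    # In finetune_mimic_cxr.sh, set something like:
    #   --load /path/to/stage1_checkpoint.pth    # placeholder
    bash finetune_mimic_cxr.sh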

Test

  1. You must download the medical images, their corresponding reports (i.e., mimic_cxr_annotation_sen_best_reports_keywords_20.json), and the checkpoint (i.e., finetune_model_best.pth) from the Datasets and Checkpoints sections, respectively.

  2. Configure the --load and --mimic_cxr_ann_path arguments in test_mimic_cxr.sh.

  3. Run bash test_mimic_cxr.sh to generate reports based on similar historical cases.

  4. Results (i.e., FSE-5, $M_{gt}=100$) on MIMIC-CXR are presented as follows:

Reproducibility on IU X-ray

Results (i.e., FSE-20, $M_{gt}=60$) on IU X-Ray are presented as follows:

Acknowledgement
