Event-AHU / Medical_Image_Analysis

Foundation model based medical image analysis

Update Log:

Surveys/Reviews

Projects Maintained in This GitHub:

:dart: [Context Sample Retrieval for LLM based X-ray Report Generation]()

Inspired by the tremendous success of Large Language Models (LLMs), existing X-ray medical report generation methods attempt to leverage large models to achieve better performance. They usually adopt a Transformer to extract the visual features of a given X-ray image and then feed them into the LLM for text generation. How to extract more effective information for the LLM, so as to improve the final results, is an urgent problem that needs to be solved. Additionally, the use of vision Transformer models brings high computational complexity. To address these issues, this paper proposes a novel context-guided efficient X-ray medical report generation framework. Specifically, we introduce Mamba as the vision backbone with linear complexity, achieving performance comparable to that of a strong Transformer model. More importantly, during the training phase we retrieve context from the training set for the samples in each mini-batch, utilizing both positively and negatively related samples to enhance feature representation and discriminative learning. Subsequently, we feed the vision tokens, context information, and prompt statements into the LLM to generate high-quality medical reports. Extensive experiments on three X-ray report generation datasets (i.e., IU-Xray, MIMIC-CXR, and CheXpert Plus) fully validate the effectiveness of the proposed model.

R2GenCSR
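
For illustration, here is a minimal, hypothetical sketch of the mini-batch context retrieval described above: given pooled visual features, it ranks training-set samples by cosine similarity and returns the most and least related indices as positive and negative context. The function name, feature shapes, and the use of cosine similarity are assumptions made for this sketch, not the repository's actual API.

```python
import torch
import torch.nn.functional as F

def retrieve_context_samples(batch_feats, train_feats, k=1):
    """Hypothetical sketch: for each query feature in the mini-batch,
    retrieve the most similar (positive) and least similar (negative)
    training samples by cosine similarity.

    batch_feats: (B, D) pooled visual features of the current mini-batch.
    train_feats: (N, D) pre-computed pooled features of the training set.
    Returns two (B, k) index tensors: positives and negatives.
    """
    q = F.normalize(batch_feats, dim=-1)                 # (B, D) unit vectors
    g = F.normalize(train_feats, dim=-1)                 # (N, D) unit vectors
    sim = q @ g.t()                                      # (B, N) cosine similarity
    pos_idx = sim.topk(k, dim=-1, largest=True).indices  # most related samples
    neg_idx = sim.topk(k, dim=-1, largest=False).indices # least related samples
    return pos_idx, neg_idx

# Toy usage with random features (D=512, 1000 training samples, batch of 4)
batch_feats = torch.randn(4, 512)
train_feats = torch.randn(1000, 512)
pos, neg = retrieve_context_samples(batch_feats, train_feats, k=2)
print(pos.shape, neg.shape)  # torch.Size([4, 2]) torch.Size([4, 2])
```

The retrieved indices could then be used to look up the corresponding reports or features, which are concatenated with the vision tokens and prompt before invoking the LLM; that assembly step is paper-specific and is not reproduced here.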

:dart: [Pre-training MAE Model on HD X-ray Images]()

Existing X-ray pre-trained vision models are usually trained on relatively small-scale datasets (fewer than 500K samples) with limited resolution (e.g., 224 × 224). However, the key to the success of self-supervised pre-training of large models lies in massive training data, and maintaining high resolution in the X-ray domain is essential for effectively handling difficult, miscellaneous diseases. In this paper, we address these issues by proposing the first high-definition (1280 × 1280) X-ray pre-trained foundation vision model, built on our newly collected large-scale dataset of more than 1 million X-ray images. Our model follows the masked auto-encoder framework: the tokens that remain after masking (at a high masking ratio) are used as input, and the masked image patches are reconstructed by a Transformer encoder-decoder network. More importantly, we introduce a novel context-aware masking strategy that uses the chest contour as a boundary for adaptive masking operations. We validate the effectiveness of our model on two downstream tasks, X-ray report generation and disease recognition. Extensive experiments demonstrate that our pre-trained medical foundation vision model achieves comparable or even new state-of-the-art performance on downstream benchmark datasets.

HDXrayPretrain
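
As a rough illustration of the context-aware masking idea, the sketch below masks patches inside and outside a chest-contour region at different ratios, so the visible tokens concentrate on the anatomy. The function name, the specific ratios, and how the per-patch contour mask is obtained are all assumptions; the paper's exact strategy is not reproduced here.

```python
import torch

def context_aware_masking(chest_patch_mask, ratio_inside=0.75, ratio_outside=0.95):
    """Hypothetical sketch of adaptive masking guided by the chest contour.

    chest_patch_mask: (L,) binary tensor, 1 = patch lies inside the chest
        contour, 0 = background. For a 1280 x 1280 image with 16 x 16
        patches, L = 80 * 80 = 6400.
    Background patches are masked more aggressively than chest patches
    (the ratios here are illustrative, not the paper's values).
    Returns a boolean (L,) tensor where True marks a masked patch.
    """
    masked = torch.zeros_like(chest_patch_mask, dtype=torch.bool)
    for region_value, ratio in ((1, ratio_inside), (0, ratio_outside)):
        idx = (chest_patch_mask == region_value).nonzero(as_tuple=True)[0]
        n_mask = int(len(idx) * ratio)
        chosen = idx[torch.randperm(len(idx))][:n_mask]  # random subset to mask
        masked[chosen] = True
    return masked

# Toy usage: pretend the central half of the patches lie inside the chest contour
L = 6400
chest = torch.zeros(L, dtype=torch.long)
chest[L // 4 : 3 * L // 4] = 1
mask = context_aware_masking(chest)
print(mask.float().mean())  # overall masking rate, ~0.85 with these ratios
```

The visible (unmasked) tokens would then be fed to the MAE encoder, with the decoder reconstructing the masked patches, as in the standard masked auto-encoder pipeline.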

Paper Lists:

Suggested Code:

:newspaper: Citation

If you find this work helpful for your research, please star this repository and cite the following papers:

@misc{wang2024R2GenCSR,
      title={R2GenCSR: Retrieving Context Samples for Large Language Model based X-ray Medical Report Generation}, 
      author={Xiao Wang and Yuehang Li and Fuling Wang and Shiao Wang and Chuanfu Li and Bo Jiang},
      year={2024},
      eprint={2408.09743},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2408.09743}, 
}

@misc{wang2024pretraininghighdefinitionxray,
      title={Pre-training on High Definition X-ray Images: An Experimental Study}, 
      author={Xiao Wang and Yuehang Li and Wentao Wu and Jiandong Jin and Yao Rong and Bo Jiang and Chuanfu Li and Jin Tang},
      year={2024},
      eprint={2404.17926},
      archivePrefix={arXiv},
      primaryClass={eess.IV},
      url={https://arxiv.org/abs/2404.17926}, 
}

If you have any questions about these works, please feel free to open an issue.

Star History

[Star History Chart]