
GMAI-VL & GMAI-VL-5.5M: A General Medical Vision-Language Model and Multimodal Dataset

Welcome to the GMAI-VL code repository, which accompanies the paper "GMAI-VL & GMAI-VL-5.5M: A Large Vision-Language Model and A Comprehensive Multimodal Dataset Towards General Medical AI." This repository provides the resources needed to reproduce the paper's results and to support further research in medical AI through vision-language models.

🚧 Coming Soon: Code, Dataset, and Model Weights 🚧

We are currently organizing the following resources for public release: the code, the GMAI-VL-5.5M dataset, and the GMAI-VL model weights.

Stay tuned for updates!

📅 Release Timeline

🔗 Stay Connected

For inquiries, collaboration opportunities, or access requests, feel free to reach out via email or open a GitHub issue.

Thank you for your interest and support!

📄 Paper and Citation

Our paper is available on arXiv. If you use our work in your research, please consider citing it:

BibTeX Citation

@article{li2024gmai,
  title={GMAI-VL \& GMAI-VL-5.5M: A Large Vision-Language Model and A Comprehensive Multimodal Dataset Towards General Medical AI},
  author={Li, Tianbin and Su, Yanzhou and Li, Wei and Fu, Bin and Chen, Zhe and Huang, Ziyan and Wang, Guoan and Ma, Chenglong and Chen, Ying and Hu, Ming and Li, Yanjun and Chen, Pengcheng and Hu, Xiaowei and Deng, Zhongying and Ji, Yuanfeng and Ye, Jin and Qiao, Yu and He, Junjun},
  journal={arXiv preprint arXiv:2411.14522},
  year={2024}
}