Our paper uses recent vision-language models to generate diverse and realistic synthetic echocardiography images, preserving key features of the original data under guidance from text and semantic label maps. Specifically, we investigate three avenues: unconditional generation, text-guided generation, and a hybrid approach combining textual and semantic supervision. We show that the rich contextual information in the synthesized data can improve the accuracy and interpretability of downstream tasks such as echocardiography segmentation and classification, yielding better metrics and faster convergence.
Pooria Ashrafian, Milad Yazdani, Moein Heidari, Dena Shahriari, Ilker Hacihaliloglu
19.04.2024 | Code is released!
28.03.2024 | The paper is now available on arXiv! 🥳
@misc{ashrafian2024visionlanguage,
      title={Vision-Language Synthetic Data Enhances Echocardiography Downstream Tasks},
      author={Pooria Ashrafian and Milad Yazdani and Moein Heidari and Dena Shahriari and Ilker Hacihaliloglu},
      year={2024},
      eprint={2403.19880},
      archivePrefix={arXiv},
      primaryClass={eess.IV}
}