Welcome to the official implementation of "Exploiting Scale-Variant Attention for Segmenting Small Medical Objects". This repository provides a toolkit for semantic segmentation of small medical objects, with support for training-progress visualization, logging, and computation of standard segmentation metrics.
# Exploiting Scale-Variant Attention for Segmenting Small Medical Objects
Wei Dai, Rui Liu, Zixuan Wu, Tianyi Wu, Min Wang, Junxian Zhou, Yixuan Yuan, Jun Liu
Under review at a peer-reviewed journal, 2024. [[arXiv]](https://arxiv.org/abs/2407.07720)
## Installation

To install the SvANet implementation, please follow the detailed instructions in INSTALL.md.
## Dataset preparation

Please refer to DATA.md for guidelines on preparing the datasets for benchmarking and training.
## Training and evaluation

Training and evaluation use the configuration settings provided in the main.sh script. Before starting training, make sure you have downloaded the pretrained model from torchvision.
### Results on the dataset containing only ultra-small objects
### Ablation studies

For the detailed settings of the ablation studies and additional experiments, refer to the scripts [ablation.sh](shell/ablation.sh) and [ablation_extra.sh](shell/ablation_extra.sh).

## Inference

To perform inference evaluation on the various datasets, organize the data according to the guidelines in [DATA.md](readme/DATA.md). After training completes, do not modify the output folder; instead, use [tinyObjectsValidation.py](tools/tinyObjectsValidation.py) for evaluation. Remember to update the "RootPath" variable in the script to point to your output folder.

## Citation

If you use this implementation in your research, please consider citing our paper:

```
@misc{dai2024svanet,
      title={Exploiting Scale-Variant Attention for Segmenting Small Medical Objects},
      author={Dai, Wei and Liu, Rui and Wu, Zixuan and Wu, Tianyi and Wang, Min and Zhou, Junxian and Yuan, Yixuan and Liu, Jun},
      year={2024},
      eprint={2407.07720},
      archivePrefix={arXiv},
      primaryClass={eess.IV},
      url={https://arxiv.org/abs/2407.07720},
}
```