
Official implementation of "Exploiting Scale-Variant Attention for Segmenting Small Medical Objects"
MIT License

Exploiting Scale-Variant Attention for Segmenting Small Medical Objects

Welcome to the official implementation of "Exploiting Scale-Variant Attention for Segmenting Small Medical Objects". This repository provides a toolkit for semantic segmentation of small medical objects, with support for training-progress visualization, logging, and computation of standard evaluation metrics.

Exploiting Scale-Variant Attention for Segmenting Small Medical Objects
Wei Dai, Rui Liu, Zixuan Wu, Tianyi Wu, Min Wang, Junxian Zhou, Yixuan Yuan, Jun Liu
Under review at a peer-reviewed journal, 2024. [arXiv]

Installation

To install the SvANet implementation, please follow the detailed instructions in INSTALL.md.

Benchmark and Evaluation

Please refer to DATA.md for guidelines on preparing the datasets for benchmarking and training.

To start training and evaluation, use the configuration settings provided in the main.sh script. Before training, make sure you have downloaded the pretrained model from torchvision.

Results for Datasets with Diverse Object Sizes

Results for the Dataset with Only Ultra-small Objects

Ablation Studies

For the detailed settings of the ablation studies and additional experiments, refer to the scripts [ablation.sh](shell/ablation.sh) and [ablation_extra.sh](shell/ablation_extra.sh).

Inference

To run inference evaluation on the various datasets, organize the data according to the guidelines in [DATA.md](readme/DATA.md). After training completes, do not modify the output folder; instead, use [tinyObjectsValidation.py](tools/tinyObjectsValidation.py) for evaluation. Remember to update the "RootPath" variable in the script to point to your output folder.

Citation

If you use this implementation in your research, please cite our paper:

```bibtex
@misc{dai2024svanet,
  title={Exploiting Scale-Variant Attention for Segmenting Small Medical Objects},
  author={Dai, Wei and Liu, Rui and Wu, Zixuan and Wu, Tianyi and Wang, Min and Zhou, Junxian and Yuan, Yixuan and Liu, Jun},
  year={2024},
  eprint={2407.07720},
  archivePrefix={arXiv},
  primaryClass={eess.IV},
  url={https://arxiv.org/abs/2407.07720},
}
```