
Cross-level Feature Aggregation Network for Polyp Segmentation

Authors: Tao Zhou, Yi Zhou, Kelei He, Chen Gong, Jian Yang, Huazhu Fu, and Dinggang Shen.

1. Preface

1.1. :fire: NEWS :fire:

1.2. Table of Contents

Table of contents generated with markdown-toc

2. Overview

2.1. Introduction

Accurate segmentation of polyps from colonoscopy images plays a critical role in the diagnosis and treatment of colorectal cancer. Although much progress has been made in polyp segmentation, several challenges remain: polyps vary widely in size and shape, and there is often no sharp boundary between a polyp and its surroundings. To address these challenges, we propose a novel Cross-level Feature Aggregation Network (CFA-Net) for polyp segmentation. Specifically, we first propose a boundary prediction network to generate boundary-aware features, which are incorporated into the segmentation network using a layer-wise strategy. In particular, we design a segmentation network with a two-stream structure to exploit hierarchical semantic information from cross-level features. Furthermore, a Cross-level Feature Fusion (CFF) module is proposed to integrate adjacent features from different levels, characterizing cross-level and multi-scale information to handle scale variations of polyps. Finally, a Boundary Aggregated Module (BAM) is proposed to incorporate boundary information into the segmentation network, enhancing these hierarchical features to generate finer segmentation maps. Quantitative and qualitative experiments on five public datasets demonstrate the effectiveness of CFA-Net against other state-of-the-art polyp segmentation methods.
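
To make the fusion idea concrete, here is a minimal PyTorch sketch in the spirit of the CFF module: the deeper feature is reduced and upsampled to the shallower feature's resolution, the two are concatenated, and parallel dilated convolutions mix multi-scale context before a final fusion layer. The channel sizes, branch dilations, and layer choices below are illustrative assumptions, not the released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CFFSketch(nn.Module):
    # Illustrative cross-level feature fusion (NOT the official CFF module):
    # fuse an adjacent deeper feature with a shallower one and mix
    # multi-scale context via dilated convolutions.
    def __init__(self, low_ch, high_ch, out_ch):
        super().__init__()
        self.reduce_low = nn.Conv2d(low_ch, out_ch, kernel_size=1)
        self.reduce_high = nn.Conv2d(high_ch, out_ch, kernel_size=1)
        # Parallel dilated branches capture context at several scales.
        self.branches = nn.ModuleList([
            nn.Conv2d(2 * out_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4)
        ])
        self.fuse = nn.Sequential(
            nn.Conv2d(3 * out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, f_low, f_high):
        # Bring the deeper (coarser) feature to the shallower feature's resolution.
        f_high = F.interpolate(self.reduce_high(f_high), size=f_low.shape[2:],
                               mode='bilinear', align_corners=False)
        x = torch.cat([self.reduce_low(f_low), f_high], dim=1)
        x = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(x)

# e.g. fusing two adjacent Res2Net stages (channel counts assumed):
# cff = CFFSketch(low_ch=1024, high_ch=2048, out_ch=64)
# out = cff(torch.randn(1, 1024, 44, 44), torch.randn(1, 2048, 22, 22))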

2.2. Framework Overview


Figure 1: Overview of the proposed CFANet.

2.3. Qualitative Results


Figure 2: Qualitative Results.

3. Proposed Baseline

3.1. Training/Testing

The training and testing experiments are conducted using PyTorch on a single NVIDIA Tesla P40 GPU with 24 GB of memory.

Note that our model also runs on GPUs with less memory; simply lower the batch size accordingly.

  1. Configuring your environment (Prerequisites):

    Note that CFANet has only been tested on Ubuntu with the following environment. It may work on other operating systems as well, but we do not guarantee that it will.

    • Creating a virtual environment in the terminal: conda create -n CFANet python=3.6.

    • Installing necessary packages: PyTorch 1.1.

  2. Downloading necessary data (the expected directory layout is sketched after this list):

    • Downloading the testing datasets and moving them into ./data/TestDataset/; they can be found at this download link (Google Drive). The set contains five sub-datasets: CVC-300 (60 test samples), CVC-ClinicDB (62 test samples), CVC-ColonDB (380 test samples), ETIS-LaribPolypDB (196 test samples), and Kvasir (100 test samples).

    • Downloading the training dataset and moving it into ./data/TrainDataset/; it can be found at this download link (Google Drive). It contains two sub-datasets: Kvasir-SEG (900 train samples) and CVC-ClinicDB (550 train samples).

    • Downloading the pretrained weights and moving them to checkpoint/CFANet.pth; they can be found at this download link (Google Drive).

    • Downloading the Res2Net weights and moving them into ./lib/; they can be found at this download link (Google Drive).

  3. Training Configuration:

    • Assigning your customized paths, like --save_model and --train_path, in train.py (see the example command after this list).

    • Just enjoy it!

  4. Testing Configuration:

    • After downloading the pre-trained model and the testing datasets, point --pth_path at your trained model and run test.py to generate the final prediction maps (see the example command after this list).

    • Just enjoy it!
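
After completing the steps above, the working directory should look roughly like the sketch below, and training and testing each reduce to one command. Only the flags --save_model, --train_path, and --pth_path are named by the scripts above; the concrete paths are illustrative placeholders.

CFANet/
├── checkpoint/CFANet.pth
├── data/
│   ├── TrainDataset/   # Kvasir-SEG (900) + CVC-ClinicDB (550)
│   └── TestDataset/    # CVC-300, CVC-ClinicDB, CVC-ColonDB, ETIS-LaribPolypDB, Kvasir
├── lib/                # Res2Net weights
├── train.py
└── test.py

python train.py --save_model ./checkpoint/ --train_path ./data/TrainDataset/
python test.py --pth_path ./checkpoint/CFANet.pth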

3.2. Evaluating your trained model

MATLAB: One-key evaluation is written in MATLAB code (link). Please follow the instructions in ./eval/main.m and run it to generate the evaluation results in ./res/. The complete evaluation toolbox (including data, map, eval code, and res) is available at this new link.
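
For a quick sanity check without MATLAB, the Python sketch below computes Dice and IoU, two of the metrics commonly reported for polyp segmentation, for a single prediction map and ground-truth mask. The file paths and the binarization threshold are illustrative; the official numbers should come from the MATLAB toolbox.

import numpy as np
from PIL import Image

def dice_iou(pred_path, gt_path, thresh=0.5):
    # Load a grayscale prediction map and a binary ground-truth mask, scaled to [0, 1].
    pred = np.asarray(Image.open(pred_path).convert('L'), dtype=np.float64) / 255.0
    gt = np.asarray(Image.open(gt_path).convert('L'), dtype=np.float64) / 255.0
    pred_bin = pred > thresh
    gt_bin = gt > 0.5
    inter = np.logical_and(pred_bin, gt_bin).sum()
    dice = 2.0 * inter / (pred_bin.sum() + gt_bin.sum() + 1e-8)
    iou = inter / (np.logical_or(pred_bin, gt_bin).sum() + 1e-8)
    return dice, iou

# Hypothetical paths; adjust to your ./res/ layout:
# dice, iou = dice_iou('./res/Kvasir/1.png', './data/TestDataset/Kvasir/masks/1.png')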

3.3. Pre-computed maps

They can be found at this download link.

4. MindSpore

You need to run cd mindspore first.

  1. Environment Configuration:

    • MindSpore: 2.0.0-alpha

    • Python: 3.8.0

  2. Training Configuration:

    • Assigning your customized paths, like --save_model and --train_img_dir, in train.py (see the example after this list).

    • Just enjoy it!

  3. Testing Configuration:

    • After downloading the pre-trained model and the testing datasets, point --pth_path at your trained model and run test.py to generate the final prediction maps.

    • Just enjoy it!
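
As with the PyTorch version, a typical run might look like the commands below; --save_model, --train_img_dir, and --pth_path are the flags named above, while the concrete paths and the checkpoint file name are placeholders.

cd mindspore
python train.py --save_model ./checkpoint/ --train_img_dir ./data/TrainDataset/images/
python test.py --pth_path ./checkpoint/CFANet.ckpt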

5. Citation

Please cite our paper if you find the work useful:

@article{zhou2023cross,
  title={Cross-level Feature Aggregation Network for Polyp Segmentation},
  author={Zhou, Tao and Zhou, Yi and He, Kelei and Gong, Chen and Yang, Jian and Fu, Huazhu and Shen, Dinggang},
  journal={Pattern Recognition},
  volume={140},
  pages={109555},
  year={2023},
  publisher={Elsevier}
}

6. License

The source code is free for research and education use only. Any commercial use should obtain formal permission first.


⬆ back to top