DengPingFan / PraNet

PraNet: Parallel Reverse Attention Network for Polyp Segmentation, MICCAI 2020 (Oral). Code using Jittor Framework is available.
http://dpfan.net/PraNet

Training and test set split #55

Closed Huster-Hq closed 1 year ago

Huster-Hq commented 1 year ago

In the paper you say the training, test, and validation sets were split randomly by ratio. Isn't a purely random split problematic? In some datasets the same polyp appears in many images, so a random split can place images of the same polyp in both the training and test sets.

GewelsJI commented 1 year ago

Hi @Huster-Hq

A nice question. We also noticed this problem during our research.

However, we opted to follow the same split as the previous work, ResUNet++, because we have no medical expertise in identifying the same polyp across different locations, such as the ascending, transverse, and descending colons. If you are a medical specialist or student, you could examine in detail the existing dataset split provided at our link; the numerical statistics on the domain overlap between the train and test sets would then become clear.

By the way, we have created a new, largest-scale benchmark for video polyp segmentation, which addresses this issue with a clean split that has no overlap.

I look forward to your further feedback here.
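To make the leakage concern concrete: the usual remedy is a group-wise split, where all images of one polyp stay on the same side of the train/test boundary. Below is a minimal sketch in plain Python. The `image_to_polyp` mapping is hypothetical, since the public polyp datasets do not ship per-polyp IDs; this is not the split procedure used by PraNet or ResUNet++.

```python
import random

def group_split(image_to_polyp, test_ratio=0.2, seed=0):
    """Split images into train/test lists so that every image of a given
    polyp lands on the same side, preventing polyp-level leakage.

    image_to_polyp: dict mapping image filename -> polyp/group ID.
    """
    # Collect the images belonging to each polyp ID.
    groups = {}
    for image, polyp_id in image_to_polyp.items():
        groups.setdefault(polyp_id, []).append(image)

    # Shuffle polyp IDs (not individual images), then cut by ratio.
    polyp_ids = sorted(groups)
    random.Random(seed).shuffle(polyp_ids)
    n_test = max(1, int(len(polyp_ids) * test_ratio))
    test_ids, train_ids = polyp_ids[:n_test], polyp_ids[n_test:]

    train = [img for pid in train_ids for img in groups[pid]]
    test = [img for pid in test_ids for img in groups[pid]]
    return train, test
```

Because the shuffle operates on polyp IDs rather than images, the two sides are disjoint at the polyp level by construction.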

Best, Ge-Peng.

Huster-Hq commented 1 year ago

Thanks for your reply. I have a few more questions:

  1. Did you load pretrained weights during training?
  2. When testing generalization performance, what was your training set? Was it the 550 training images from CVC-612 plus the 900 images from Kvasir?
Huster-Hq commented 1 year ago

When you ran this experiment, was the training set always CVC + Kvasir? Or, when testing on CVC, did you train only on the 550 CVC images? [image]

GewelsJI commented 1 year ago

> Thanks for your reply. I have a few more questions:
>
>   1. Did you load pretrained weights during training?
>   2. When testing generalization performance, what was your training set? Was it the 550 training images from CVC-612 plus the 900 images from Kvasir?

Hi, @Huster-Hq

As for Q1, we used the ImageNet-pretrained weights provided by Res2Net. As for Q2 and Q3, we trained a unified model on all training samples and then tested it in all numerical experiments. You can find more details in our paper.
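The "unified model" answer above amounts to pooling both training subsets into one list before training. A minimal sketch, assuming a hypothetical directory layout and file extensions (the repo's actual data paths may differ):

```python
from pathlib import Path

def build_unified_training_list(cvc_dir, kvasir_dir):
    """Combine the CVC-612 and Kvasir training images (550 + 900 = 1450
    in the PraNet setting) into a single sorted list of image paths.

    Directory names and extensions here are illustrative assumptions,
    not the repository's actual layout.
    """
    cvc_images = sorted(Path(cvc_dir).glob("*.png"))
    kvasir_images = sorted(Path(kvasir_dir).glob("*.jpg"))
    return cvc_images + kvasir_images
```

A single model trained on this pooled list is then evaluated on every test set, including the unseen ones used to measure generalization.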

Thank you again.

Best, Ge-Peng.

Huster-Hq commented 1 year ago

Thank you very much for your reply; that resolves my questions. Is there a Python version of the performance evaluation script?

GewelsJI commented 1 year ago

I think this toolbox may help you a lot: https://github.com/GewelsJI/VPS/tree/main/eval
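For a quick start while porting the evaluation, the two core region metrics used in polyp segmentation, Dice and IoU, are straightforward in NumPy. This is a minimal sketch, not the repository's official evaluation code or the linked toolbox's implementation:

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-8):
    """Dice similarity between two binary masks (0/1 numpy arrays)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def iou(pred, gt, eps=1e-8):
    """Intersection-over-Union between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)
```

In practice the prediction map is first thresholded (e.g. at 0.5) to a binary mask, and the scores are averaged over the test set; the structure- and boundary-aware measures reported in the paper (S-measure, E-measure, weighted F-measure) need the fuller toolbox.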