open-mmlab / mmsegmentation

OpenMMLab Semantic Segmentation Toolbox and Benchmark.
https://mmsegmentation.readthedocs.io/en/main/
Apache License 2.0

How to prepare the DRIVE data using the model in the unet folder #570

Closed jonguo111 closed 3 years ago

jonguo111 commented 3 years ago

If I am correct, the DRIVE data should be organized as follows in the mmsegmentation:

DRIVE
├── images
│   ├── training
│   └── validation
└── annotations
    ├── training
    └── validation
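As a minimal sketch, the layout above can be created (or sanity-checked) with a few lines of Python. The root path `data/DRIVE` is an assumption here; adjust it to wherever you keep the dataset.

```python
import os

# Hypothetical root path; adjust to your local setup.
root = "data/DRIVE"

# Layout described above: images/ and annotations/,
# each with training/ and validation/ subfolders.
for top in ("images", "annotations"):
    for split in ("training", "validation"):
        os.makedirs(os.path.join(root, top, split), exist_ok=True)

# Quick check that all four leaf folders exist.
expected = [
    os.path.join(root, top, split)
    for top in ("images", "annotations")
    for split in ("training", "validation")
]
print(all(os.path.isdir(p) for p in expected))  # True once the layout is in place
```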

  1. May I ask what the image format in the images folder is? RGB or grayscale?
  2. In the annotations folder, is the following ground truth label used as the annotation for training?

vesell041_manual1

I cannot reproduce the results with either the RGB images or the grayscale images. Even if I set the learning rate to 0, the loss is still NaN. Please advise. Thank you very much.

Junjun2016 commented 3 years ago

Hi @jonguo111, maybe you can try one of the other three retinal vessel segmentation datasets, or provide more log messages.

MengzhangLI commented 3 years ago

Hi, sorry for the late reply.

A1: Yes, your folder organization is correct.

1. May I ask what the image format in the images folder is? RGB or grayscale?

A2: It is a png file, an RGB image just as the dataset provides it. You can follow data_prepare_md for detailed information.
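As a hedged sketch of what that preparation does (the details are in the data-prepare doc referenced above): the original DRIVE images ship as .tif and the manual annotations as .gif, and they need to be saved as .png, with vessel pixels mapped to label 1 rather than 255. The function names and the 127 threshold below are illustrative assumptions, not the toolbox's actual conversion script.

```python
import numpy as np
from PIL import Image

def convert_image_to_png(src_path, dst_path):
    """Save a DRIVE source image (e.g. a .tif file) as an RGB .png."""
    Image.open(src_path).convert("RGB").save(dst_path)

def convert_annotation_to_png(src_path, dst_path):
    """Binarize a manual annotation (vessel pixels -> label 1) and save as .png."""
    arr = np.array(Image.open(src_path).convert("L"))
    labels = (arr > 127).astype(np.uint8)  # background = 0, vessel = 1
    Image.fromarray(labels).save(dst_path)
```

If the annotations are left with raw 0/255 values instead of 0/1 class indices, a two-class model will read "class 255" and training breaks.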

2. In the annotations folder, is the following ground truth label used as the annotation for training?

A3: Right now, for all the medical datasets we support (i.e., DRIVE, CHASE_DB1, STARE and HRF), the default setting is that the validation images and their ground truth are used for validation. This means that during training, the model evaluates on the validation set in the validation step. That is why they are defined as a validation set rather than a test set.

I guess your problem is that no validation images/ground truth are provided by the official sources, so something went wrong when you processed your own DRIVE dataset.
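One quick way to diagnose a NaN loss that persists even at learning rate 0 is to check the processed annotation files directly: every pixel value should be a valid class index. This checker is a hypothetical sketch (the function name and `num_classes=2` default are assumptions for a binary vessel task), not part of MMSegmentation.

```python
import numpy as np
from PIL import Image

def check_annotation(path, num_classes=2):
    """Return (ok, values): whether every label value in the annotation
    file is a valid class index, plus the set of values found.
    Raw 255 vessel masks, for example, fail this check."""
    values = set(np.unique(np.array(Image.open(path))))
    ok = values <= set(range(num_classes))
    return ok, values
```

Running this over the annotations/training folder should show only {0, 1}; anything else (e.g. 255) points at the preprocessing step rather than the model.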

If it still does not work, please give me your e-mail and I will send you a DRIVE dataset folder that works correctly with MMSegmentation.

Best,

MrWcy commented 1 year ago


@MengzhangLI Excuse me, can you send the DRIVE dataset to my email (735746916@qq.com)? Thank you.