This code illustrates how to segment the ROI in cervical images using U-Net.
The ROI here is meant to include the: Os + transformation zone + nearby tissue.
The localized ROI is intended to improve the classification of cervical types, which is the challenge in the Kaggle competition: Intel and MobileODT Cervical Cancer Screening.
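As background, U-Net is an encoder-decoder with skip connections: each encoder stage halves the spatial resolution with pooling, and the decoder doubles it back while concatenating same-resolution encoder features. One practical consequence is that the input size should be divisible by 2**depth for the skip concatenations to line up. A quick sketch of that size arithmetic (illustrative only, not code from this repo; `unet_sizes`, `size`, and `depth` are made-up names):

```python
def unet_sizes(size, depth):
    """Spatial sizes at each encoder stage of a U-Net with 2x2 max pooling.

    With 'same'-padded convolutions, each pooling step halves the size,
    so the input should be divisible by 2**depth to avoid cropping when
    decoder features are concatenated with encoder skips.
    (Illustrative helper, not part of this repository.)
    """
    if size % (2 ** depth) != 0:
        raise ValueError(f"{size} is not divisible by 2**{depth}")
    return [size // (2 ** d) for d in range(depth + 1)]

# e.g. a 256-pixel input through a 4-level encoder
print(unet_sizes(256, 4))  # [256, 128, 64, 32, 16]
```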
Compared to other U-Net examples, this one offers:
Dependencies:
Other references:
Data preparation:
Unzip train.7z and test.7z into the input folder. You may unzip additional_Type_*_v2.7z as well if you want to segment them; it's optional.
Run prepare_data.py and then split_data.py to generate the input/*.json split files.
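For reference, a train/validation split like the one written to input/*.json could be produced along these lines (a hedged sketch only; the actual prepare_data.py and split_data.py may work differently, and `split_filenames` is a hypothetical helper):

```python
import json
import random

def split_filenames(filenames, val_fraction=0.2, seed=42):
    """Deterministically shuffle and split filenames into train/val lists."""
    rng = random.Random(seed)
    names = sorted(filenames)   # sort first so the split is reproducible
    rng.shuffle(names)
    n_val = int(len(names) * val_fraction)
    return {"train": names[n_val:], "val": names[:n_val]}

split = split_filenames([f"img_{i}.jpg" for i in range(10)])
# split_data.py presumably serializes something like this to a json file
payload = json.dumps(split, indent=2)
print(len(split["train"]), len(split["val"]))  # 8 2
```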
Training:
Run train.py. Weights are saved to src/unet_xxxxxx/weights.h5. Note that when train.py
starts, it looks for a previous weight file (if any) and resumes from there if the file exists.
Segmentation:
Run predict.py.
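predict.py presumably thresholds the network's per-pixel probability output into a binary ROI mask. That post-processing step can be sketched as follows (`binarize_mask` and `mask_to_bbox` are hypothetical helpers for illustration, not functions from this repo):

```python
import numpy as np

def binarize_mask(prob_map, threshold=0.5):
    """Turn a U-Net probability map (H, W) into a binary ROI mask."""
    return (prob_map >= threshold).astype(np.uint8)

def mask_to_bbox(mask):
    """Bounding box (x0, y0, x1, y1) of the nonzero region, or None if empty."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

prob = np.zeros((8, 8))
prob[2:5, 3:6] = 0.9          # fake network output with one confident blob
mask = binarize_mask(prob)
print(mask_to_bbox(mask))     # (3, 2, 5, 4)
```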
Configurations:
See configurations.py.
On a GTX 1070, training for 400 epochs took ~2 hours. The best DICE coefficient is ~0.78.
Applied to the 512 unseen test images, the results look satisfactory in about 96% of them.
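The DICE coefficient quoted above measures the overlap between predicted and ground-truth masks, Dice = 2|A∩B| / (|A| + |B|). A minimal NumPy version, with the small smoothing term commonly added when Dice is used as a U-Net loss (this is a generic sketch, not necessarily the exact metric code in this repo):

```python
import numpy as np

def dice_coefficient(y_true, y_pred, smooth=1.0):
    """Dice = 2*|A intersect B| / (|A| + |B|); `smooth` avoids division by zero."""
    y_true = y_true.astype(np.float64).ravel()
    y_pred = y_pred.astype(np.float64).ravel()
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

a = np.array([[1, 1], [0, 0]])
print(dice_coefficient(a, a))  # 1.0 for a perfect match
```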
Sample outputs:
Training loss: