nabsabraham / focal-tversky-unet

This repo contains the code for our paper "A novel focal Tversky loss function and improved Attention U-Net for lesion segmentation" accepted at IEEE ISBI 2019.

Proper way to run inference using models? #9

Closed JakobKHAndersen closed 5 years ago

JakobKHAndersen commented 5 years ago

Hello

First of all, thanks for creating this repository. I have a question regarding the use of the multi-scale models for segmentation of test set images. I have successfully trained the model and want to use it on my test data. If I understand the model correctly, the output is a list containing 4 arrays of different resolutions (the original input size and 3 down-scaled versions, yes?). Do I only use the sigmoid/softmax probability map of the original input-sized output, or do I use all 4 and perform some sort of resizing with interpolation and averaging in order to get the full benefit of the model?

BR.

nabsabraham commented 5 years ago

Hello! I just use the last preds, which are the same size as the input [see here], but you could upsample the smaller-scale predictions and do some sort of weighted average or fusion to get a better result. The line concerning preds_up afterwards is just there to resize the predictions to the original input image size (because the original images were really large).
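To make the two options concrete, here is a minimal NumPy-only sketch (not the repo's actual code): it assumes the model returns a list of four probability maps ordered coarse-to-fine, with the last one at full input resolution. The `upsample_nn` helper and the fusion weights are hypothetical illustrations, not part of the repository.

```python
import numpy as np

def upsample_nn(pred, target_hw):
    """Nearest-neighbour upsampling of a (H, W) probability map.
    A stand-in for bilinear interpolation (e.g. cv2.resize) to keep
    this sketch dependency-free."""
    h, w = pred.shape
    th, tw = target_hw
    rows = np.arange(th) * h // th
    cols = np.arange(tw) * w // tw
    return pred[np.ix_(rows, cols)]

def fuse_multiscale(preds, weights=None):
    """Upsample every scale output to the finest resolution and take a
    weighted average. `weights` are illustrative, not tuned values."""
    target = preds[-1].shape  # last output is the full-resolution one
    if weights is None:
        weights = [1.0] * len(preds)
    ups = [upsample_nn(p, target) for p in preds]
    return sum(w * u for w, u in zip(weights, ups)) / sum(weights)

# Simulated model outputs: 3 down-scaled maps + the full-resolution map
rng = np.random.default_rng(0)
preds = [rng.random((s, s)) for s in (32, 64, 128, 256)]

# Option 1 (what the author does): threshold only the last output
mask = (preds[-1] > 0.5).astype(np.uint8)

# Option 2: fuse all scales, weighting the full-resolution output most
fused = fuse_multiscale(preds, weights=[0.1, 0.2, 0.3, 0.4])
fused_mask = (fused > 0.5).astype(np.uint8)
```

In practice you would replace `upsample_nn` with a proper interpolating resize and may need to experiment with the fusion weights; the simple last-output route is usually a reasonable baseline.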