I found a bug in 'val.py' that causes an abnormal segmentation result (a poor Dice score) in some cases (e.g. volume-54.nii) where the number of slices is divisible by 48 (the 3D patch's depth). When such a volume is split into the test dataset, the variable 'count' equals 0, but the code
pred_seg = np.concatenate([pred_seg, outputs_list[-1][-count:]], axis=0)
then concatenates the entire last block (a full 256×256×48 patch) onto the segmentation result, when it should use only the last 'count' slices. With count == 0, the slice [-count:] selects the whole array instead of nothing.
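A minimal sketch of the slicing pitfall described above (the array shapes mirror the 256×256×48 patches mentioned in the report, but the variable names here are illustrative, not the actual names in 'val.py'):

```python
import numpy as np

# One hypothetical "last" 3D patch of depth 48, as in the report.
last_patch = np.zeros((48, 256, 256))

# Normal case: the volume depth is NOT divisible by the patch depth,
# so only the trailing `count` slices are taken.
count = 3
assert last_patch[-count:].shape[0] == 3

# Buggy case: the depth divides evenly, count becomes 0, and since
# -0 == 0, the slice [-count:] is [0:], i.e. the WHOLE 48-slice patch.
count = 0
assert last_patch[-count:].shape[0] == 48  # 48 extra slices appended

# Guarding with `if count != 0:` (the proposed revision) skips the
# erroneous concatenation entirely when the depth divides evenly.
```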
--------------------original code -----------------------------
# When the slice count is not evenly divisible, take the last block from the end
if end_slice is not ct_array.shape[0] - 1:
    flag = True
    count = ct_array.shape[0] - start_slice
    ct_array_list.append(ct_array[-size:, :, :])
--------------------revised code--------------------------------
# When the slice count is not evenly divisible, take the last block from the end
if end_slice is not ct_array.shape[0] - 1:
    count = ct_array.shape[0] - start_slice
    if count != 0:
        flag = True
        ct_array_list.append(ct_array[-size:, :, :])