Zyun-Y / DconnNet

Codes for CVPR2023 paper "Directional Connectivity-based Segmentation of Medical Images"

Help! About the input for the training mask #20

Closed GeroRoman closed 7 months ago

GeroRoman commented 8 months ago

Thank you for your code and work. I copied your model into my training code, but an error occurred during training, at around line 200 of connect_loss.py:

class_pred = c_map.view([c_map.shape[0], 1, 8, c_map.shape[2], c_map.shape[3]])
RuntimeError: shape '[1, 1, 8, 256, 256]' is invalid for input of size 65536

My dataset's annotation masks have shape (1, 1, 256, 256), but it seems they need to be reshaped to (1, 8, 256, 256). Could you explain what shape the input annotation should have and how to prepare it?

By the way, DconnNet.py returns (cls_pred, mapped_c5). I know cls_pred is the model's prediction, but what is the meaning of mapped_c5, and is it useful in the subsequent training and testing?

Zyun-Y commented 8 months ago

For your first question: yes, the model predicts a connectivity map rather than a segmentation map, so its output has 8 channels per class. There is no extra work needed on your data or annotations. Just use the (1, 1, 256, 256) mask as your training label; the loss function will automatically generate a connectivity mask from your annotation mask, as follows:

con_target = connectivity_matrix(onehotmask, self.args.num_class)  # (B, 8*C, H, W)
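For reference, here is a minimal sketch of how such an 8-channel directional connectivity mask can be derived from a one-hot segmentation mask. This is an illustration only, not the exact implementation in connect_loss.py (the direction ordering and border handling there may differ), and the function name simple_connectivity_mask is made up for the example.

```python
import torch

def simple_connectivity_mask(onehot_mask):
    # onehot_mask: (B, C, H, W) one-hot segmentation mask.
    # For each of the 8 neighbor directions, a pixel is marked connected (1)
    # if both it and its neighbor in that direction belong to the class.
    # Returns a tensor of shape (B, 8*C, H, W).
    B, C, H, W = onehot_mask.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               ( 0, -1),          ( 0, 1),
               ( 1, -1), ( 1, 0), ( 1, 1)]   # (dy, dx) neighbor offsets
    channels = []
    for dy, dx in offsets:
        # Align each pixel with its neighbor in direction (dy, dx).
        # Note: torch.roll wraps around at the borders; a real
        # implementation would handle borders explicitly.
        shifted = torch.roll(onehot_mask, shifts=(dy, dx), dims=(2, 3))
        channels.append(onehot_mask * shifted)  # 1 only where both pixels are 1
    # (B, C, 8, H, W) -> (B, 8*C, H, W)
    return torch.stack(channels, dim=2).reshape(B, 8 * C, H, W)

# Example: a (1, 1, 256, 256) binary mask yields a (1, 8, 256, 256) target.
mask = (torch.rand(1, 1, 256, 256) > 0.5).float()
print(simple_connectivity_mask(mask).shape)  # torch.Size([1, 8, 256, 256])
```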

For your second question, mapped_c5 is used to train the directional prior, as described in the paper; it is not used in the testing process.
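As a rough illustration (not the repository's exact training/testing code, and the criterion signature below is an assumption), the two outputs are typically consumed like this:

```python
import torch
import torch.nn as nn

def training_step(model: nn.Module, criterion, image, mask):
    # During training, both outputs go into the loss: cls_pred is supervised
    # against the connectivity target derived from `mask`, and mapped_c5
    # supervises the directional prior. The real connect_loss in this repo
    # may take its arguments differently.
    cls_pred, mapped_c5 = model(image)
    return criterion(cls_pred, mapped_c5, mask)

@torch.no_grad()
def inference_step(model: nn.Module, image):
    # At test time only cls_pred matters; it is decoded from the 8-channel
    # connectivity space back into a segmentation map. mapped_c5 is discarded.
    cls_pred, _ = model(image)
    return cls_pred
```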

I hope this helps.