xwjBupt opened this issue 5 years ago
(1) We compute the cross-entropy loss for CLS0~CLS2 and then sum them up (see the first sketch below).
(2) The inputs of the L1 loss are DIV2 and the ground-truth count map. Each element of the count map is the count of the corresponding 64x64 local region, obtained by summing up that region of the density map (see the second sketch below).
(3) The batch size is fixed at 1. In each iteration we fetch just one crop for training.
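A minimal sketch of (1), assuming PyTorch; `cls_outputs` and `cls_targets` are hypothetical names for the three predicted logit maps and their ground-truth class maps, not the repo's exact code:

```python
import torch.nn.functional as F

def total_cls_loss(cls_outputs, cls_targets):
    """Sum the cross-entropy losses of CLS0~CLS2.

    cls_outputs: list of logits, each of shape (B, C, H_i, W_i)
    cls_targets: list of class maps (long), each of shape (B, H_i, W_i)
    """
    loss = 0.0
    for logits, target in zip(cls_outputs, cls_targets):
        # Each level has its own spatial size, so each pair is scored
        # separately and only the scalar losses are summed.
        loss = loss + F.cross_entropy(logits, target)
    return loss
```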
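And a sketch of (2): if the ground-truth count map is built by summing the density map over non-overlapping 64x64 windows, sum pooling gives it directly (`div2` and `density` are assumed names and shapes):

```python
import torch.nn.functional as F

def div2_l1_loss(div2, density, region=64):
    """L1 loss between the predicted DIV2 count map and the
    ground-truth count map derived from the density map.

    div2:    (B, 1, M/region, N/region) predicted counts
    density: (B, 1, M, N) ground-truth density map
    """
    # avg_pool2d over a region x region window, times region^2,
    # equals summing the density inside each window.
    gt_count_map = F.avg_pool2d(density, kernel_size=region) * region * region
    return F.l1_loss(div2, gt_count_map)
```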
Hi, thanks for your reply, but I am still confused. Cross entropy takes two inputs, so we need to calculate the loss between CLS0 and what? CLS1 and what? CLS2 and what?
The cross-entropy loss is computed between each predicted CLS_i and its ground-truth CLS_i.
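For reference, here is one plausible way the ground-truth CLS_i could be produced, a hedged sketch only: it assumes counts are quantized into the intervals given by `label_indice`, and `region` (the local region size at level i) is an assumption here, not necessarily the repo's exact implementation:

```python
import torch
import torch.nn.functional as F

def gt_cls_map(density, label_indice, region):
    """Quantize local counts into class indices.

    density:      (B, 1, M, N) ground-truth density map
    label_indice: 1-D tensor of increasing interval boundaries
    region:       side length of the local region at this level
    """
    # Sum-pool the density map into per-region counts.
    count_map = F.avg_pool2d(density, kernel_size=region) * region * region
    # For each count, bucketize returns how many boundaries it exceeds,
    # which serves as the class index target for cross-entropy.
    return torch.bucketize(count_map, label_indice).squeeze(1)
```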
@xhp-hust-2018-2011 why is DIV2 of size M/64 x N/64 instead of M/16 x N/16?
Hi, thanks for sharing the code, but I have a few questions.
1) What are the inputs of the cross-entropy loss? Is one the CLS0~CLS2 in div_res and the other the label_indice? But CLS0, CLS1, and CLS2 have different shapes, so how can the loss be computed between them?
2) What are the inputs of the L1 loss? When the division time is 2, one is the DIV2 in merge_res; what is the other one?
3) In data augmentation, you said you crop the 4 corners and randomly crop 5 patches. Did you concatenate these 9 patches into a 9×3×h×w tensor and feed it into the network?
Looking forward to your reply, thanks a lot!!