Closed: hwei-hw closed this issue 6 years ago
Hello,
The previous version of the code required passing the binarized (one-hot encoded) ground truth to the Dice loss, whereas the new version takes the ground truth as integer class labels in [1, .., NumClass-1] and one-hot encodes them within the function. This saves RAM when dealing with a large number of classes. Functionally, both should perform the same.
Best, Abhijit
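For readers following along, the change described above can be sketched roughly as below. The function name and shapes are illustrative, not the repository's actual API; the only assumption taken from the thread is that the network emits log-softmax scores and the loss one-hot encodes integer labels internally with `scatter_`:

```python
import torch

def dice_loss_from_labels(output, target, eps=1e-6):
    """Hypothetical sketch: output is assumed to be log-softmax scores of
    shape [N, C, H, W]; target holds integer class labels of shape [N, H, W]."""
    probs = output.exp()                         # log-probabilities -> probabilities
    encoded = torch.zeros_like(probs)            # [N, C, H, W] zeros
    encoded.scatter_(1, target.unsqueeze(1), 1)  # one-hot along the channel dim
    inter = (probs * encoded).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + encoded.sum(dim=(2, 3))
    dice = (2 * inter + eps) / (union + eps)     # per-sample, per-class Dice
    return 1 - dice.mean()

scores = torch.log_softmax(torch.randn(2, 4, 8, 8), dim=1)
labels = torch.randint(0, 4, (2, 8, 8))
loss = dice_loss_from_labels(scores, labels)
```

The caller only ever holds the integer label map in memory rather than a full [N, C, H, W] one-hot tensor, which is where the RAM saving comes from.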
Thanks for your reply! I have met some errors when running the new version of the code. Is there an error in the input data or the corresponding labels?
I made some changes to the DataLoader file. Not sure if you are using the latest version. I do not think the error is in the input data.
Thanks a lot! I will check whether these errors result from the changes in the new version of the code.
Hi @Atomwh and @abhi4ssj, I'm getting the same error: `RuntimeError: invalid argument 3: Index tensor must have same dimensions as output tensor at /pytorch/aten/src/THC/generic/THCTensorScatterGather.cu:289`
The shape of my data is:

```python
Tr_Dat = np.zeros((16, 1, 512, 512), dtype="float32")
Tr_Label = np.zeros((16, 1, 512, 512), dtype="uint8")
Tr_weights = np.zeros((16, 1, 512, 512), dtype="float32")
train_data = ImdbData(Tr_Dat, Tr_Label, Tr_weights)
```
Which pytorch version should I use? And what is the solution to the error above?
Thanks
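For what it's worth, the error message itself hints at the cause: `scatter_` requires the index tensor to have the same number of dimensions as the tensor being written into. With a label array shaped [N, 1, H, W] as above, the `target.unsqueeze(1)` inside the loss produces a 5-D index against a 4-D output. A minimal sketch (smaller spatial size for illustration; names are not the repo's):

```python
import torch

output = torch.zeros(2, 10, 8, 8)        # [N, C, H, W] tensor to scatter into
target = torch.zeros(2, 1, 8, 8).long()  # label kept its channel axis, like Tr_Label

# unsqueeze adds a second singleton axis: [2, 1, 1, 8, 8] is 5-D vs the 4-D output
try:
    output.scatter_(1, target.unsqueeze(1), 1)
except RuntimeError as err:
    print("scatter_ fails:", err)

# the label already carries the channel axis, so it can be scattered directly
output.scatter_(1, target, 1)            # marks class 0 at every pixel
```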
Thanks for sharing your work. When I run this version (40ae1aa) of the code, I get the following error:

```
/userfolder/relaynet_pytorch/relaynet_pytorch/net_api/losses.py in forward(self, output, target, weights, ignore_index)
     60             encoded_target[mask] = 0
     61         else:
---> 62             encoded_target.scatter_(1, target.unsqueeze(1), 1)
     63
     64         if weights is None:

RuntimeError: invalid argument 3: Index tensor must have same dimensions as output tensor at /pytorch/aten/src/THC/generic/THCTensorScatterGather.cu:289
```
The input data's size is [NumData, 1, rows, cols] and the corresponding label's size is [NumData, 1, rows, cols]. Our goal is to segment 10 classes.
I read the code and do not understand these lines:

```
relaynet_pytorch/relaynet_pytorch/net_api/losses.py

line 51: output = output.exp()
line 52: encoded_target = output.detach() * 0
line 61: encoded_target.scatter_(1, target.unsqueeze(1), 1)
```
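A hedged reading of those three lines, under the assumption (consistent with line 51) that the network's final layer outputs log-softmax scores; the shapes below are illustrative:

```python
import torch

output = torch.log_softmax(torch.randn(2, 10, 4, 4, requires_grad=True), dim=1)
target = torch.randint(0, 10, (2, 4, 4))

probs = output.exp()                  # line 51: log-probabilities back to probabilities
encoded_target = output.detach() * 0  # line 52: zeros with output's shape, cut off from autograd
encoded_target.scatter_(1, target.unsqueeze(1), 1)  # line 61: one-hot encode labels in-place
```

Here `detach() * 0` is simply a way to build a zero tensor with the output's shape and dtype that gradients will not flow through; `scatter_` then writes a 1 at each pixel's labeled channel. Note that `target` must be [N, H, W] for `unsqueeze(1)` to produce a valid 4-D index.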
In the older versions, the implementation of DiceLoss is different from this version of the code. The older version is as follows:

```python
class DiceLoss(_Loss):
```
So which is better? Or which is the right version for the "ReLayNet" paper? Thanks a lot!