Closed KevinLi43 closed 1 year ago
Hello, I just encountered the same issue you have. My error was "nll_loss_forward_reduce_cuda_kernel_2d_index" not implemented for 'Int', slightly different from yours. I fixed it by changing the datatype of the labels with a single added line. You can look at the example below.
236 inputs, labels = data
237 inputs, labels = inputs.to(device), labels.to(device)
238 optimizer.zero_grad()
239 outputs = model(inputs)
240 labels = labels.to(torch.int64)
241 loss = criterion(outputs, labels)
Line 240 casts 'labels' to int64, and with that my training finally works. You can add line 240 and cast to whichever datatype your loss function expects (e.g. torch.float, torch.float64, torch.short). I hope this method solves your problem.
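For anyone who wants to try this outside a full training loop, here is a minimal, self-contained sketch of the fix (the shapes and class count are made up, not from the poster's model): CrossEntropyLoss uses nll_loss under the hood, which expects class-index targets of dtype torch.int64 (Long).

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the poster's model output and labels.
criterion = nn.CrossEntropyLoss()
outputs = torch.randn(4, 10)                            # logits: batch of 4, 10 classes
labels = torch.tensor([1, 0, 3, 2], dtype=torch.int32)  # Int labels trigger the error

# The fix from line 240 above: cast the labels to int64 before the loss call.
labels = labels.to(torch.int64)
loss = criterion(outputs, labels)  # now computes without the dtype error
```

Casting once per batch in the training loop (as on line 240) and casting inside the dataset both work; the only requirement is that the targets reach the loss function as Long tensors.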
Thanks for your reply; with your method my training finally runs. I'm truly grateful for your help!
Hi, can you tell me where I have to add your suggested fix, line 240: labels = labels.to(torch.int64)?
Excuse me, I met the same issue, but my model output and my labels are already the same dtype, float, and it still doesn't work. Can you give me some hints?
The label should be converted to torch.long if you are using CrossEntropyLoss (CELoss). Add this to your __getitem__(self, index):
label = torch.as_tensor(label, dtype=torch.long)
and it should work.
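As a concrete illustration of the advice above, here is a minimal hypothetical Dataset (the class name and fields are made up for the sketch) that does the cast inside __getitem__, so every label leaves the dataset as torch.long:

```python
import torch
from torch.utils.data import Dataset

class LabeledDataset(Dataset):
    """Hypothetical dataset sketch: casts each label to torch.long on access."""

    def __init__(self, samples, labels):
        self.samples = samples  # e.g. a tensor or list of inputs
        self.labels = labels    # labels in any integer-like dtype

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        x = self.samples[index]
        # The suggested one-line fix: ensure the label dtype is torch.long.
        label = torch.as_tensor(self.labels[index], dtype=torch.long)
        return x, label

ds = LabeledDataset(torch.randn(3, 5), [0, 2, 1])
x, y = ds[1]
print(y.dtype)  # torch.int64
```

Doing the cast here, rather than in the training loop, keeps the loop unchanged and guarantees every DataLoader batch already has the right target dtype.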
Hello, thanks for your survey and for open-sourcing this toolbox; it has helped me a lot in understanding watermarking schemes.
However, when I ran embed.py to embed a content watermark into the cifar10 model, it threw the following error:
"nll_loss_forward_reduce_cuda_kernel_2d_index" not implemented for 'Float'
It seems to occur inside the nll_loss loss function, but I have no idea how to solve it. Have you ever encountered this error when embedding watermarks? Any reply will be appreciated!
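For the 'Float' variant of this error reported here, the cause is the same as in the replies above: nll_loss needs Long class indices, so float labels (e.g. loaded from a numpy array) trigger it. A minimal sketch of the failure mode and fix, with made-up shapes (not from embed.py):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(2, 10)              # hypothetical model output: 2 samples, 10 classes
float_labels = torch.tensor([3.0, 7.0])  # Float class indices: the wrong dtype for nll_loss

# Cast to Long before the loss call, just as suggested for the 'Int' case.
long_labels = float_labels.to(torch.long)
loss = F.nll_loss(F.log_softmax(logits, dim=1), long_labels)
```

If your task is genuinely regression-like (truly float targets), the fix is different: switch to a loss such as nn.MSELoss rather than casting, since nll_loss is only defined for class indices.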