dnn-security / Watermark-Robustness-Toolbox

The official implementation of the IEEE S&P '22 paper "SoK: How Robust is Deep Neural Network Image Classification Watermarking".
GNU General Public License v3.0

"nll_loss_forward_reduce_cuda_kernel_2d_index" not implemented for 'float' #2

Closed KevinLi43 closed 1 year ago

KevinLi43 commented 2 years ago

Hello, thanks for your survey and for open-sourcing this toolbox; it has helped me a lot in understanding watermarking schemes.

However, when I ran embed.py to embed a content watermark into the CIFAR-10 model, it threw the following error: `"nll_loss_forward_reduce_cuda_kernel_2d_index" not implemented for 'float'`. It seems to occur when the nll_loss loss function is computed, but I have no idea how to solve it. Have you ever encountered this error when embedding watermarks? Any reply would be appreciated!

hfg987654321 commented 2 years ago

Hello, I just ran into the same issue. My error was `"nll_loss_forward_reduce_cuda_kernel_2d_index" not implemented for 'Int'`, slightly different from yours. I fixed it by changing the datatype of the labels with a single added line. See the example below:

```python
236 inputs, labels = data
237 inputs, labels = inputs.to(device), labels.to(device)
238 optimizer.zero_grad()
239 outputs = model(inputs)
240 labels = labels.to(torch.int64)
241 loss = criterion(outputs, labels)
```

Line 240 casts 'labels' to int64, and with that my training finally works. Add line 240 and cast to the datatype your loss expects (for nll_loss, class-index targets must be torch.int64, i.e. torch.long). Hope this solves your problem.
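For anyone hitting this later, here is a minimal, self-contained sketch of why the cast works (the batch size, class count, and label values below are made up for illustration): nll_loss / cross_entropy expect class-index targets of dtype torch.int64, and on the GPU an Int or Float target fails inside the CUDA kernel named in the error message.

```python
import torch
import torch.nn.functional as F

# Hypothetical toy batch: 4 samples, 10 classes (CIFAR-10 has 10 classes).
logits = torch.randn(4, 10)

# Labels that arrive as int32 (or float) trigger the
# "not implemented for 'Int'/'Float'" error when the loss is computed.
labels = torch.tensor([1, 0, 3, 7], dtype=torch.int32)

# The fix: cast the class indices to int64 (torch.long) before the loss.
labels = labels.to(torch.int64)
loss = F.cross_entropy(logits, labels)
print(labels.dtype)  # torch.int64
```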

KevinLi43 commented 2 years ago


Thanks for your reply! Using your method, my training finally runs. I'm truly grateful for your help!

XxrxX-eth commented 1 year ago


Hi, can you tell me where exactly I have to add your suggested line `labels = labels.to(torch.int64)` (line 240)?

DM0815 commented 1 year ago


Excuse me, I ran into the same issue, but in my case the model output and the labels already have the same dtype (float) and it still does not work. Can you give me some hints?

HaoyuCui commented 11 months ago


The label should be cast to torch.long if you are using CrossEntropyLoss. Add this to your `__getitem__(self, index)`: `label = torch.as_tensor(label, dtype=torch.long)`, and it should work.
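To make the placement concrete, here is a minimal sketch of a custom Dataset with the cast inside `__getitem__` (the class name, fields, and tensor shapes below are made up for illustration; adapt them to your own dataset code):

```python
import torch
from torch.utils.data import Dataset

class MyDataset(Dataset):  # hypothetical name: substitute your own Dataset class
    def __init__(self, images, labels):
        self.images = images
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, index):
        image = self.images[index]
        # Cast the class index to torch.long (int64) so that
        # CrossEntropyLoss / nll_loss accept it as a target.
        label = torch.as_tensor(self.labels[index], dtype=torch.long)
        return image, label

ds = MyDataset(torch.randn(2, 3, 32, 32), [0, 1])
_, y = ds[0]
print(y.dtype)  # torch.int64
```

With the cast done in the Dataset, every DataLoader batch already carries int64 labels, so no per-step conversion (like line 240 above) is needed in the training loop.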