Closed seanytak closed 3 years ago
Hi! Thanks for your contribution, great first issue!
seems like the data are not synced correctly...
I think you should move the metric itself to CUDA as well if you want to feed it data on the GPU.
As @maximsch2 suggested, you need to move the module to the same device as the input/target data. Internally, the confusion matrix will by default be on the CPU - that's why torch asserts a mismatch between devices. This is the above code with the proper usage:
```python
from torchmetrics import IoU
import torch

target = torch.randint(0, 2, (10, 25, 25))
# copy the target and flip one block so pred and target partially disagree
pred = target.clone()
pred[2:5, 7:13, 9:15] = 1 - pred[2:5, 7:13, 9:15]

# move the metric to the same device as the data
iou = IoU(num_classes=2).to("cuda")
iou(pred.to(torch.device("cuda")), target.to(torch.device("cuda")))
```
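For context, torchmetrics registers its metric states much like module buffers, so `.to()` carries them along with the module; a minimal pure-torch sketch of that mechanism (the `ToyMetric` class is hypothetical, not the torchmetrics API):

```python
import torch
from torch import nn

class ToyMetric(nn.Module):
    """Hypothetical stand-in for a torchmetrics metric, for illustration only."""

    def __init__(self, num_classes: int):
        super().__init__()
        # State registered as a buffer follows .to()/.cuda() just like parameters.
        # This is why IoU(...).to("cuda") also moves the internal confusion matrix.
        self.register_buffer(
            "confmat", torch.zeros(num_classes, num_classes, dtype=torch.long)
        )

metric = ToyMetric(num_classes=2)
assert metric.confmat.device.type == "cpu"  # default state lives on the CPU

# use CUDA when available so the sketch also runs on CPU-only machines
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
metric = metric.to(device)
assert metric.confmat.device.type == device.type  # state moved with the module
```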
Closing the issue since it's not a bug, but an intended behavior.
Another possible solution could be to move the internal confusion matrix to the same device as the inputs. In that case, this should be discussed in a separate issue.
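A rough pure-torch sketch of what that alternative could look like (the class name and its `update` logic are illustrative assumptions, not the torchmetrics implementation):

```python
import torch
from torch import nn

class DeviceFollowingConfusionMatrix(nn.Module):
    """Toy confusion matrix that lazily follows the inputs' device (illustrative only)."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.num_classes = num_classes
        self.register_buffer(
            "confmat", torch.zeros(num_classes, num_classes, dtype=torch.long)
        )

    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
        # the alternative discussed above: move the internal state to wherever
        # the inputs live, instead of requiring an explicit .to() on the metric
        if self.confmat.device != preds.device:
            self.confmat = self.confmat.to(preds.device)
        # accumulate counts: row = target class, column = predicted class
        idx = target.flatten() * self.num_classes + preds.flatten()
        self.confmat += torch.bincount(idx, minlength=self.num_classes ** 2).reshape(
            self.num_classes, self.num_classes
        )

cm = DeviceFollowingConfusionMatrix(num_classes=2)
cm.update(torch.tensor([0, 1, 1, 0]), torch.tensor([0, 1, 0, 0]))
# confmat now holds [[2, 1], [0, 1]]: two true 0s, one 0 predicted as 1, one true 1
```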
🐛 Bug
Hello,
When trying to utilize torchmetrics.IoU with preds and targets tensors on the GPU, I receive the following error
To Reproduce
Modified from the example to have pred and target on the GPU
Code Sample
Expected behavior
Metric should compute as normal
Environment
How you installed (conda, pip, source): pip
Additional context
Problem appears to be due to the default state of confmat in class ConfusionMatrix always creating its default tensor on the CPU
Happy to submit a PR regarding the change but am not sure what would be best in line with the API signature
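For reference, tensor factories such as torch.zeros place their output on the CPU unless a device is passed, which is why a default confmat state ends up on the CPU; a small sketch of the resulting mismatch:

```python
import torch

# factory calls without an explicit device argument land on the CPU
state = torch.zeros(2, 2, dtype=torch.long)
assert state.device.type == "cpu"

# an update computed from CPU inputs accumulates fine
update = torch.bincount(torch.tensor([0, 3]), minlength=4).reshape(2, 2)
state += update
assert state.sum().item() == 2

# if `update` instead lived on a CUDA device while `state` stayed on the CPU,
# the in-place add would raise the device-mismatch RuntimeError from the report
```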