I get an error when CUDA is available on my machine:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
Nothing changes if I replace device="cuda" with device="cpu"; I still get the same error.
The full stack trace of the error:
Traceback (most recent call last):
File "/home/mikhail/source/presto_features/main.py", line 188, in <module>
process_tile(
File "/home/mikhail/source/presto_features/main.py", line 156, in process_tile
pretrained_model.encoder(
File "/home/mikhail/.cache/pypoetry/virtualenvs/presto-features-bmBP-FwO-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/mikhail/.cache/pypoetry/virtualenvs/presto-features-bmBP-FwO-py3.10/lib/python3.10/site-packages/presto/presto.py", line 415, in forward
tokens = self.eo_patch_embed[channel_group](x[:, :, channel_idxs])
File "/home/mikhail/.cache/pypoetry/virtualenvs/presto-features-bmBP-FwO-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/mikhail/.cache/pypoetry/virtualenvs/presto-features-bmBP-FwO-py3.10/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
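For context, this error is what PyTorch raises whenever a layer's weights and its input tensor sit on different devices. A minimal standalone sketch (using generic names, not the actual Presto code) that reproduces and then fixes the same RuntimeError:

```python
import torch
import torch.nn as nn

# Pick the GPU when available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

layer = nn.Linear(4, 2).to(device)   # weights live on `device`
x_cpu = torch.randn(3, 4)            # input stays on the CPU

try:
    layer(x_cpu)                     # fails when device == "cuda":
                                     # F.linear sees cpu and cuda:0 tensors
except RuntimeError as e:
    print(e)

# Fix: move the input to the same device as the layer's parameters
# before the forward pass.
x = x_cpu.to(device)
out = layer(x)
print(out.device)                    # same device as the weights
```

In my case the equivalent fix would be making sure every tensor passed into pretrained_model.encoder(...) is moved with .to(device) using the same device the model was moved to.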
All values that I pass to the encoder are on the same device as you can see from the code. Here's the output of the printed debug messages: