tjf801 opened 4 years ago
Full stack traceback:
GAN>py MNIST_GAN.py
True cuda:0
Traceback (most recent call last):
File "MNIST_GAN.py", line 195, in <module>
fake_data = generator(noise(real_data.size(0))).detach()
File "C:\Program Files\Python37\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "MNIST_GAN.py", line 92, in forward
x = self.hidden0(x)
File "C:\Program Files\Python37\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "C:\Program Files\Python37\lib\site-packages\torch\nn\modules\container.py", line 92, in forward
input = module(input)
File "C:\Program Files\Python37\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "C:\Program Files\Python37\lib\site-packages\torch\nn\modules\linear.py", line 87, in forward
return F.linear(input, self.weight, self.bias)
File "C:\Program Files\Python37\lib\site-packages\torch\nn\functional.py", line 1370, in linear
ret = torch.addmm(bias, input, weight.t())
RuntimeError: Expected object of device type cuda but got device type cpu for argument #2 'mat1' in call to _th_addmm
I am running my code with Anaconda3 in PyCharm or Jupyter Notebook under Windows 10 64-bit. Sometimes it works in Jupyter but not in PyCharm, in spite of the same code and the same Anaconda environment.
My current problem is described below:
I encountered the same problem when I run my code with
model.cuda()
but running without it works fine.
This means I cannot run my code on the GPU, even though
torch.cuda.is_available()
returns True.
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
import pandas as pd
wine = load_wine()
wine
pd.DataFrame(wine.data,columns = wine.feature_names)
wine.target
wine_data = wine.data[0:130]
wine_target = wine.target[0:130]
train_X,test_X,train_Y,test_Y = train_test_split(wine_data,wine_target, test_size = 0.2)
print(len(train_X))
print(len(test_X))
train_X = torch.from_numpy(train_X).float()
train_Y = torch.from_numpy(train_Y).long()
test_X = torch.from_numpy(test_X).float()
test_Y = torch.from_numpy(test_Y).long()
print(train_X.shape)
print(train_Y.shape)
train=TensorDataset(train_X,train_Y)
print(train[0])
train_loader = DataLoader(train,batch_size=16,shuffle=True)
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(13, 96)
        self.fc2 = nn.Linear(96, 2)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return F.log_softmax(x)
model = Net()
model.cuda() #★★★★★
#print(torch.cuda.is_available())
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(),lr = 0.01)
for epoch in range(300):
    total_loss = 0
    for train_x, train_y in train_loader:
        train_x, train_y = Variable(train_x), Variable(train_y)
        optimizer.zero_grad()
        output = model(train_x)
        loss = criterion(output, train_y)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    if (epoch + 1) % 50 == 0:
        print(epoch + 1, total_loss)
test_x,test_y = Variable(test_X),Variable(test_Y)
result = torch.max(model(test_x).data,1)[1]
accuracy = sum(test_y.data.numpy()==result.numpy())/len(test_y.data.numpy())
print(accuracy)
Traceback (most recent call last):
File "C:\Users\Anaconda3\envs\myconda\lib\site-packages\IPython\core\interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "
File "C:\Users\Anaconda3\envs\myconda\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "C:/Users/Desktop/python_test/test.py", line 49, in forward
x=F.relu(self.fc1(x))
File "C:\Users\Anaconda3\envs\myconda\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\Anaconda3\envs\myconda\lib\site-packages\torch\nn\modules\linear.py", line 87, in forward
return F.linear(input, self.weight, self.bias)
File "C:\Users\Anaconda3\envs\myconda\lib\site-packages\torch\nn\functional.py", line 1370, in linear
ret = torch.addmm(bias, input, weight.t())
RuntimeError: Expected object of device type cuda but got device type cpu for argument #2 'mat1' in call to _th_addmm
I need a solution! I hope somebody has one.
Check list:
Move the model and the input data to the GPU:
train_x = train_x.cuda()  # or train_x.to(device)
Move the result back to the CPU before converting it to NumPy:
result = result.cpu()
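The checklist above can be sketched as a minimal, device-agnostic snippet (the `nn.Linear(13, 2)` model and batch shapes are illustrative, not the original poster's full code; it falls back to CPU when no GPU is available):

```python
import torch
import torch.nn as nn

# Pick the device once; works with or without a GPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(13, 2).to(device)   # move the parameters to the device

x = torch.randn(16, 13)               # a CPU batch, as a DataLoader yields it
x = x.to(device)                      # move the batch to the SAME device
out = model(x)                        # no addmm device-mismatch error now

result = out.argmax(dim=1).cpu()      # back to CPU before calling .numpy()
print(result.numpy().shape)
```

The key point is that `model(...)` and its input must live on the same device, and anything passed to `.numpy()` must be on the CPU.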
I am also facing this same issue, does anyone have a solution to it??
new_layer=new_layer.cuda()
Did someone get a solution for this?
@liangjiubujiu Can you specify what you mean?
It used to work in previous versions of PyTorch. Can you send a fix or explain to me what you did to help the rest?
@codeprb @Pratikrocks I am checking for a solution; in the meantime, you can try running on the CPU by removing all the if torch.cuda.is_available(): <variable>.cuda()
statements.
Each tensor (input/custom intermediate ones created for project specific purpose) should be moved to the device in use - as mentioned in a previous reply by @liangjiubujiu
new_layer=new_layer.cuda()
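As a sketch of the point above: any tensor you create yourself inside `forward` (the `offset` tensor here is a hypothetical example, not from the original code) must be allocated on the input's device, or the subsequent matmul/add will mix CPU and CUDA tensors:

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(13, 2)

    def forward(self, x):
        # A custom intermediate tensor: create it on x's device,
        # otherwise it stays on the CPU even when x is on the GPU.
        offset = torch.zeros(x.size(0), 13, device=x.device)
        return self.fc(x + offset)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
net = Net().to(device)
y = net(torch.randn(4, 13, device=device))
print(y.shape)
```

Using `device=x.device` (or `x.new_zeros(...)`) keeps the module device-agnostic, so the same code runs on CPU and GPU without edits.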
Hello, did you fix your error? I have the same problem when I use nn.Linear(); I checked that all my data is on my GPU, yet it raised
Tensor for argument #2 'mat1' is on CPU, but expected it to be on GPU (while checking arguments for addmm)
The reason might be that you are combining tensors from different devices. I got this problem when CrossEntropyLoss received a CPU tensor and a CUDA tensor :)
Use tensor1.cuda()
to convert a CPU tensor to a CUDA tensor (you have to check which one came from the CPU).
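A minimal sketch of that check before the loss call (the shapes and the random logits/targets here are made up for illustration; on a CPU-only machine both tensors already match, so the branch is a no-op):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

logits = torch.randn(8, 3)            # model output, possibly on CUDA
targets = torch.randint(0, 3, (8,))   # labels, typically loaded on the CPU

# Find out which tensor is on the wrong device before the loss call.
if logits.device != targets.device:
    targets = targets.to(logits.device)

loss = criterion(logits, targets)
print(loss.item())
```

Comparing `.device` attributes directly is an easy way to locate which operand of a failing op (addmm, the loss, etc.) is still on the CPU.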
I am trying to run this program, but it is returning
RuntimeError: Expected object of device type cuda but got device type cpu for argument #2 'mat1' in call to _th_addmm
Note: I am running the notebook file as a plain Python file.