Open lingjiajie opened 8 months ago
```python
loss = loss_func(y, output)
loss_np = np.array(loss.data)
np_output = np.array(output.data, copy=False)
```
`loss` is a QTensor. Use `loss.to_numpy()` to convert it to numpy. @lingjiajie
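Since the model is moved to the GPU with `Net().toGPU()`, the loss tensor lives in device memory, and numpy can only wrap host memory. The pattern is to transfer to the CPU before converting. A minimal sketch of that pattern, where `MockQTensor` is a hypothetical stand-in used only to illustrate the behavior, not the real pyVQNet QTensor:

```python
import numpy as np

# MockQTensor is an illustrative stand-in for a device-resident tensor.
# Its toCPU()/to_numpy() methods mirror the calls discussed in this thread,
# but this is an assumption for demonstration, not pyVQNet's implementation.
class MockQTensor:
    def __init__(self, data, device="gpu"):
        self.data = data
        self.device = device

    def toCPU(self):
        # Copy the tensor back into host memory.
        return MockQTensor(self.data, device="cpu")

    def to_numpy(self):
        # numpy can only view host memory, so a GPU tensor must be moved first.
        if self.device != "cpu":
            raise ValueError(
                "tensor --> numpy device not supported, toCPU() first.")
        return np.array(self.data)

loss = MockQTensor([0.123], device="gpu")
loss_np = loss.toCPU().to_numpy()  # transfer to CPU, then convert
```

Calling `to_numpy()` directly on the GPU-resident tensor raises the same kind of error reported later in this thread.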
The problem isn't in the place you pointed out; I think there is still an issue somewhere else.
The model code is as follows:

```python
class Net(Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = Conv2D(input_channels=1, output_channels=6, kernel_size=(5, 5), stride=(1, 1), padding="valid")
        self.maxpool1 = MaxPool2D([2, 2], [2, 2], padding="valid")
        self.conv2 = Conv2D(input_channels=6, output_channels=16, kernel_size=(5, 5), stride=(1, 1), padding="valid")
        self.maxpool2 = MaxPool2D([2, 2], [2, 2], padding="valid")
        self.fc1 = Linear(input_channels=256, output_channels=64)
        self.fc2 = Linear(input_channels=64, output_channels=1)
        self.hybrid = Hybrid(np.pi / 2)
        self.fc3 = Linear(input_channels=1, output_channels=2)

    def forward(self, x):
        x = F.ReLu()(self.conv1(x))  # 1 6 24 24
        x = self.maxpool1(x)
        x = F.ReLu()(self.conv2(x))  # 1 16 8 8
        x = self.maxpool2(x)
        x = tensor.flatten(x, 1)     # 1 256
        x = F.ReLu()(self.fc1(x))    # 1 64
        x = self.fc2(x)              # 1 1
        x = self.hybrid(x)
        x = self.fc3(x)
        return x

model = Net().toGPU()
```
@lingjiajie Could you attach your complete code? I'll take a look.
@lingjiajie You're right, that example can't be trivially changed into a GPU version. Here is GPU code for that example; it only supports batch=1: hybird_gpu_b1.txt. You can also use the QuantumLayerV2 interface, which is better encapsulated and supports multiple devices and batch sizes: hybird_gpu_qlayer.txt
Thank you very much.
Your reply solves my problem.
Thanks again for your help!
@lingjiajie If this solved your problem, please close this issue.
Hello!
My code is modified from the example "Hybrid quantum-classical neural network model". The modified code is as follows:

```python
x_train, y_train, x_test, y_test = data_select(1000, 100)
model = Net().toGPU()

optimizer = Adam(model.parameters(), lr=0.005)
# The classification task uses the cross-entropy loss
loss_func = CategoricalCrossEntropy()
# Number of training epochs
epochs = 10
train_loss_list = []
val_loss_list = []
train_acc_list = []
val_acc_list = []

for epoch in range(1, epochs):
```
But the following error occurs:

```
terminate called after throwing an instance of 'std::invalid_argument'
  what():  tensor --> numpy device not supported, toCPU() first.
Aborted (core dumped)
```

What is the problem here? Thank you!