TreB1eN / InsightFace_Pytorch

Pytorch0.4.1 codes for InsightFace

RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same #94

Open · lusihua opened this issue 4 years ago

lusihua commented 4 years ago

An error occurred while running get_aligned_face_from_mtcnn.ipynb:

lusihua commented 4 years ago

RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
----> 1 bounding_boxes, landmarks = detect_faces(img)

~/lusihua/facial_detect/insightface_pytorch/mtcnn_pytorch/src/detector.py in detect_faces(image, min_face_size, thresholds, nms_thresholds)
     61     # run P-Net on different scales
     62     for s in scales:
---> 63         boxes = run_first_stage(image, pnet, scale=s, threshold=thresholds[0])
     64         bounding_boxes.append(boxes)
     65

~/lusihua/facial_detect/insightface_pytorch/mtcnn_pytorch/src/first_stage.py in run_first_stage(image, net, scale, threshold)
     33     img = torch.FloatTensor(_preprocess(img)).to(device)
     34     with torch.no_grad():
---> 35         output = net(img)
     36     probs = output[1].cpu().data.numpy()[0, 1, :, :]
     37     offsets = output[0].cpu().data.numpy()

~/anaconda3/envs/pytorch1/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    475             result = self._slow_forward(*input, **kwargs)
    476         else:
--> 477             result = self.forward(*input, **kwargs)
    478         for hook in self._forward_hooks.values():
    479             hook_result = hook(self, input, result)

~/lusihua/facial_detect/insightface_pytorch/mtcnn_pytorch/src/get_nets.py in forward(self, x)
     65         a: a float tensor with shape [batch_size, 2, h', w'].
     66         """
---> 67         x = self.features(x)
     68         a = self.conv4_1(x)
     69         b = self.conv4_2(x)

~/anaconda3/envs/pytorch1/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    475             result = self._slow_forward(*input, **kwargs)
    476         else:
--> 477             result = self.forward(*input, **kwargs)
    478         for hook in self._forward_hooks.values():
    479             hook_result = hook(self, input, result)

~/anaconda3/envs/pytorch1/lib/python3.7/site-packages/torch/nn/modules/container.py in forward(self, input)
     89     def forward(self, input):
     90         for module in self._modules.values():
---> 91             input = module(input)
     92         return input
     93

~/anaconda3/envs/pytorch1/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    475             result = self._slow_forward(*input, **kwargs)
    476         else:
--> 477             result = self.forward(*input, **kwargs)
    478         for hook in self._forward_hooks.values():
    479             hook_result = hook(self, input, result)

~/anaconda3/envs/pytorch1/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input)
    299     def forward(self, input):
    300         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 301                         self.padding, self.dilation, self.groups)
    302
    303

RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
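
For context, this is PyTorch's generic device-mismatch error: the input tensor was moved to the GPU while the convolution weights stayed on the CPU. A minimal illustration, independent of this repo (the layer and shapes here are made up):

```python
import torch
import torch.nn as nn

net = nn.Conv2d(3, 10, kernel_size=3)     # weights live on the CPU
x = torch.randn(1, 3, 12, 12).to('cuda')  # input moved to the GPU

# net(x) would raise:
#   RuntimeError: Input type (torch.cuda.FloatTensor) and weight type
#   (torch.FloatTensor) should be the same

net = net.to('cuda')  # fix: move the weights to the same device
out = net(x)          # both sides are now torch.cuda.FloatTensor
```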
chernbo commented 4 years ago

I also ran into this problem, in test_on_images.ipynb.

chocokassy commented 2 years ago

hello, did you solve it?

Morris88826 commented 2 months ago

I solved it by passing the device in as an argument to the run_first_stage function and calling .to(device) on the img variable before feeding it into the network. I think the original code was only meant to run on the CPU; to run inference on the GPU, you need to modify this part of the run_first_stage function (see the sketch below the screenshot).

[Screenshot: the modified run_first_stage function]
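
Since the screenshot itself isn't reproduced here, the following is a sketch of what the modified function plausibly looks like, based on the description above. The device parameter is the addition; _preprocess, _generate_bboxes, and nms are the module's existing helpers, and the exact import paths are assumptions:

```python
# mtcnn_pytorch/src/first_stage.py (sketch of the modified function)
import math

import numpy as np
import torch
from PIL import Image

from .box_utils import nms, _preprocess  # assumed location of the helpers


def run_first_stage(image, net, scale, threshold, device):
    """Run P-Net on a scaled copy of `image` and return candidate boxes.

    `device` is the new argument; it must be the device the network's
    weights were loaded onto, e.g. torch.device('cuda:0').
    """
    # scale the image and convert it to a float tensor
    width, height = image.size
    sw, sh = math.ceil(width * scale), math.ceil(height * scale)
    img = image.resize((sw, sh), Image.BILINEAR)
    img = np.asarray(img, 'float32')

    # move the input onto the same device as the network weights
    img = torch.FloatTensor(_preprocess(img)).to(device)
    with torch.no_grad():
        output = net(img)

    # bring the outputs back to the CPU for the NumPy post-processing
    probs = output[1].cpu().data.numpy()[0, 1, :, :]
    offsets = output[0].cpu().data.numpy()

    boxes = _generate_bboxes(probs, offsets, scale, threshold)
    if len(boxes) == 0:
        return None
    keep = nms(boxes[:, 0:5], overlap_threshold=0.5)
    return boxes[keep]
```

Note that in the traceback above, first_stage.py already calls .to(device) on img (line 33), so the other half of the fix is making sure the networks themselves live on the same device when detect_faces builds them, along these lines (class name taken from the traceback):

```python
# detector.py (sketch): create the networks on the device and pass it through
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
pnet = PNet().to(device)
...
boxes = run_first_stage(image, pnet, scale=s, threshold=thresholds[0], device=device)
```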