Open hamster-with-human-hands opened 1 year ago
Hi @hamster-with-human-hands, unfortunately we're not able to reproduce this issue using `device='auto'`, `'cpu'`, or `'cuda'`.
The error is raised (at least on my machine) when using more than one GPU. I suppose @hamster-with-human-hands has a similar issue.
Thanks for the comment @andreaskuepfer, have you found a solution that works on your system?
We have one machine with 2 GPUs and will check to see if we can replicate this issue on that system. I believe we've been using other systems for most of our testing.
No, I haven't found a solution that works for me. I would be happy to see whether you can replicate the issue and/or find a solution because this should definitely speed up inference (especially on detect_video()).
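Until multi-GPU inference is fixed, a possible workaround (not from the py-feat docs, just standard PyTorch practice, and untested on this exact setup) is to expose only a single GPU to the process before anything initializes CUDA:

```python
import os

# Restrict the process to one GPU before torch initializes CUDA, so
# DataParallel-style scattering never splits work across devices.
# "0" is an example index; adjust for your system.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# After this, constructing the detector with device='cuda' (as in the
# snippets in this thread) should only ever see GPU 0:
#   from feat import Detector
#   detector = Detector(device='cuda')
```

Setting the environment variable must happen before the first `import torch` (direct or via `feat`), otherwise CUDA may already have enumerated both devices.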
I face the same issue! My PyTorch installation has GPU access, and I have two devices.
```python
from feat import Detector

detector = Detector(
    face_model="retinaface",
    landmark_model="mobilefacenet",
    au_model='xgb',
    emotion_model="resmasknet",
    facepose_model="img2pose",
    device='cuda')
```
With this configuration, running the demo code you provide gives me the same issue:

```python
single_face_prediction = detector.detect_image(single_face_img_path)
# Show results
single_face_prediction
```
```
in normalize
    def torch_choice(self, k: List[int]) -> int:
RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 0
```
When my code is:

```python
from feat import Detector

detector = Detector(device='auto')
detector

from feat.utils.io import get_test_data_path
import os

test_data_dir = get_test_data_path()
test_video_path = os.path.join(test_data_dir, "WolfgangLanger_Pexels.mp4")
video_prediction = detector.detect_video(test_video_path)
video_prediction.head()

from IPython.core.display import Video
Video(test_video_path, embed=True)
```
I encounter this error:

```
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
  File "/home/calvin/anaconda3/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 64, in _worker
    output = module(*input, **kwargs)
  File "/home/calvin/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/calvin/KangMin/py-feat-2/feat/facepose_detectors/img2pose/deps/generalized_rcnn.py", line 59, in forward
    images, targets = self.transform(images, targets)
  File "/home/calvin/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/calvin/anaconda3/lib/python3.8/site-packages/torchvision/models/detection/transform.py", line 129, in forward
    image = self.normalize(image)
  File "/home/calvin/anaconda3/lib/python3.8/site-packages/torchvision/models/detection/transform.py", line 157, in normalize
    return (image - mean[:, None, None]) / std[:, None, None]
RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 0
```
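The failing step from the traceback can be reproduced in isolation. A minimal sketch (NumPy standing in for torch; shapes taken from the error message, and the "2-channel replica" framing is my assumption about how the tensor gets split across the two GPUs):

```python
import numpy as np

# The traceback's normalize step computes:
#   (image - mean[:, None, None]) / std[:, None, None]
# mean/std have 3 entries, one per RGB channel.
mean = np.array([0.485, 0.456, 0.406])

# A normal 3-channel image broadcasts fine against mean[:, None, None]:
good = np.zeros((3, 4, 4)) - mean[:, None, None]

# If a replica instead receives a tensor of size 2 along dim 0 (the "a (2)"
# in the error), the same subtraction fails with a shape mismatch:
try:
    np.zeros((2, 4, 4)) - mean[:, None, None]
    mismatch_raised = False
except ValueError:
    mismatch_raised = True
```

This matches the reported `size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 0`: the normalization constants expect 3 channels, but the tensor arriving at `normalize` only has 2 entries along that dimension.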
Is line 2 of my code wrong? I'm a newbie, so apologies, and thanks in advance.