TuanBao0711 opened 7 months ago
Hi @TuanBao0711, thank you for opening this issue and sorry for the late reply. The issue might be related to PR #92. Can you try the PR and let us know if this fixes your problem?
Hi @Phil26AT and @TuanBao0711, thanks for opening this issue. I'm doing something similar and I tried PR #92, but it still isn't fixed. Here's my error message:
{ "name": "ValueError", "message": "not enough values to unpack (expected 3, got 2)", "stack": "--------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[1], line 50 48 start.record() 49 with torch.inference_mode(): ---> 50 matches01 = matcher({'image0': feats0, 'image1': feats1}) 51 torch.cuda.synchronize() 52 end.record()
File ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, *kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], []
File ~/LightGlue/lightglue/lightglue.py:465, in LightGlue.forward(self, data) 444 \"\"\" 445 Match keypoints and descriptors between two images 446 (...) 462 matches: List[[Si x 2]], scores: List[[Si]] 463 \"\"\" 464 with torch.autocast(enabled=self.conf.mp, device_type=\"cuda\"): --> 465 return self._forward(data)
File ~/LightGlue/lightglue/lightglue.py:473, in LightGlue.forward(self, data) 471 kpts0, kpts1 = data0[\"keypoints\"], data1[\"keypoints\"] 472 b, m, = kpts0.shape --> 473 b, n, _ = kpts1.shape 474 device = kpts0.device 475 size0, size1 = data0.get(\"image_size\"), data1.get(\"image_size\")
ValueError: not enough values to unpack (expected 3, got 2)" }
I'm trying to match a drone in one image against a video that contains the drone, but it doesn't work. Matching the image against a frame saved from the video works, but matching it against frames read directly from the video does not. Can someone help me?
Error: File "C:\Users\TuanBao\Desktop\My_Docs\CNTT\imgMatching\lightGlue\testlightglue.py", line 69, in
matches01 = matcher({'image0': feats0, 'image1': feats1})
File "C:\Users\TuanBao\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, *kwargs)
File "C:\Users\TuanBao\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(args, **kwargs)
File "C:\Users\TuanBao\Desktop\My_Docs\CNTT\imgMatching\lightGlue\LightGlue\lightglue\lightglue.py", line 463, in forward
return self._forward(data)
File "C:\Users\TuanBao\Desktop\My_Docs\CNTT\imgMatching\lightGlue\LightGlue\lightglue\lightglue.py", line 470, in forward
b, m, = kpts0.shape
ValueError: not enough values to unpack (expected 3, got 2)
This is my testlightglue.py script:
```python
from lightglue import LightGlue, SuperPoint, DISK, SIFT, ALIKED
from lightglue.utils import load_image, rbd, numpy_image_to_torch
from lightglue import viz2d
import cv2

extractor = SuperPoint(max_num_keypoints=2048).eval().cuda()  # load the extractor
matcher = LightGlue(features='superpoint').eval().cuda()

imgObject = cv2.imread('img/2.jpg')  # the drone img
image0 = numpy_image_to_torch(imgObject).cuda()
feats0 = extractor.extract(image0)

cap = cv2.VideoCapture('video/RGB.mp4')  # the drone video
while cap.isOpened():
    ret, frame = cap.read()
    image1 = numpy_image_to_torch(frame).cuda()

cap.release()
```
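Two things worth checking in the loop above (these are general OpenCV pitfalls, not confirmed causes of the unpack error): `cap.read()` returns `ret=False` with a `None` frame when the video ends, which would crash `numpy_image_to_torch`, so guard on `ret` before converting. Also, `cv2.imread` and `cv2.VideoCapture` return BGR arrays, while torch image pipelines conventionally expect RGB, so the channel axis should be reversed first. A small sketch of the channel fix (the helper name is mine):

```python
import numpy as np

def bgr_to_rgb(frame: np.ndarray) -> np.ndarray:
    """Reverse the channel axis: OpenCV returns BGR, torch pipelines expect RGB.
    Equivalent to cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)."""
    return frame[..., ::-1].copy()  # .copy() makes the array contiguous for torch

# tiny 1x1 "image": B=10, G=20, R=30
frame = np.array([[[10, 20, 30]]], dtype=np.uint8)
print(bgr_to_rgb(frame)[0, 0].tolist())  # [30, 20, 10]
```

In the video loop that would be `if not ret: break` followed by `image1 = numpy_image_to_torch(bgr_to_rgb(frame)).cuda()` and a per-frame `feats1 = extractor.extract(image1)` before calling the matcher.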