Open RyuseiiSama opened 3 months ago
You can add this line before getting grasps.
scene_img = cv2.resize(scene_img, (480, 132))
The demo scene image and the real-world experiment scenes were 640x480 pixels; I believe yours may be 1280x960. Could you try resizing it as @Sai-Yarlagadda suggested, but do it after converting the scene to RGB near the start of the cell, and make the output size 640x480, like so:
scene_img = cv2.imread("../samples/test2.jpeg")
scene_img = cv2.cvtColor(scene_img, cv2.COLOR_BGR2RGB)
# added resizing below
scene_img = cv2.resize(scene_img, (640, 480))
If that doesn't work, could you attach your test image?
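To make the suggestion above concrete, here is a minimal sketch of the load/convert/resize sequence. The file path and the 1280x960 input size are assumptions based on this thread; note that cv2.resize takes its target size as (width, height), while the NumPy array shape is (height, width, channels):

import cv2
import numpy as np

# Stand-in for cv2.imread("../samples/test2.jpeg"): a dummy 1280x960 image
# (numpy shape is (height, width, channels), so 960 rows by 1280 columns).
scene_img = np.zeros((960, 1280, 3), dtype=np.uint8)

# OpenCV loads images as BGR; convert to RGB as in the notebook cell.
scene_img = cv2.cvtColor(scene_img, cv2.COLOR_BGR2RGB)

# cv2.resize expects (width, height), so (640, 480) gives a 480-row,
# 640-column image matching the demo scenes.
scene_img = cv2.resize(scene_img, (640, 480))

print(scene_img.shape)  # -> (480, 640, 3)

Since 1280x960 and 640x480 share the same 4:3 aspect ratio, this resize does not distort the scene.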
Hello! I chanced upon this study and was just fiddling around trying to apply it to my own objects.
Steps taken:
1) Followed your instructions to set up.
2) Ran os_tog.ipynb with your sample images; this succeeded.
3) Reran using my own sample images, each as .jpeg and .png (not sure if it's relevant), particularly this cell:
4) Got this output:
And this error:
Images that appeared:
I was wondering whether it was meant to run with non-sample images? If so, how may I (in future) get this to work?
Do note that I am EXTREMELY new to anything computer vision related. That being said, please feel free to throw any technicalities at me that might explain this issue!
Thanks in advance :)