Closed: Kaivalya192 closed this issue 5 months ago.
For the first frame you need to first run register to initialize the pose, see https://github.com/NVlabs/FoundationPose/blob/main/run_demo.py#L52
I am getting this error when registering the first frame:
Traceback (most recent call last):
  File "kaivalya.py", line 114, in
When I record a dataset in the same depth and RGB format as the one you provided, the code works on it.
Resolved: the depth was in mm, so dividing it by 1000 fixed it:
depth_image = np.asanyarray(aligned_depth_frame.get_data())/1e3
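A minimal sketch of that conversion, assuming the camera reports depth in millimeters (1 mm is the RealSense default depth unit; the actual scale should be read from the device):

```python
import numpy as np

def depth_to_meters(depth_raw, depth_scale=1e-3):
    """Convert a raw integer depth image (mm) to float32 meters.

    depth_scale=1e-3 is an assumption matching the RealSense default;
    the real value can be read with
    profile.get_device().first_depth_sensor().get_depth_scale().
    """
    depth = depth_raw.astype(np.float32) * depth_scale
    depth[depth_raw == 0] = 0.0  # 0 means "no measurement" on RealSense
    return depth

# With a live camera the raw frame would come from:
#   depth_raw = np.asanyarray(aligned_depth_frame.get_data())
```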
How did you create the mask while using the RealSense?
I used XMem segmentation. "live pose" is my repo where I have implemented everything, and it works live with a RealSense camera.
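If a full video segmenter like XMem is overkill for a first test, a coarse first-frame mask can be drawn by hand. A hedged sketch using a rectangular ROI (with OpenCV, `cv2.selectROI` would supply the box interactively; the helper below just rasterizes it, and a box is only a rough substitute for a real segmentation mask):

```python
import numpy as np

def roi_to_mask(shape, roi):
    """Rasterize an (x, y, w, h) box into a boolean object mask.

    The box could come from an interactive OpenCV call, e.g.:
        roi = cv2.selectROI("select object", first_color_frame)
    register() expects a binary mask of the object in the first frame.
    """
    x, y, w, h = roi
    mask = np.zeros(shape[:2], dtype=bool)
    mask[y:y + h, x:x + w] = True
    return mask
```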
Thank you for your response. However, I am facing estimation divergence when the object goes out of the camera's view. Did you manage to resolve this issue?
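One common workaround (a sketch under assumptions, not FoundationPose's built-in behavior) is to fall back to register() whenever the segmentation mask vanishes, instead of letting tracking keep refining a lost pose. The decision itself is a pure function:

```python
import numpy as np

def should_reregister(mask, min_pixels=50):
    """Decide whether to re-initialize the pose.

    min_pixels is an assumed threshold: if the object mask (e.g. from
    XMem) has fewer pixels than this, treat the object as out of view
    and re-run est.register() on the next frame instead of calling
    est.track_one(), which drifts when the object is not visible.
    """
    return int(np.count_nonzero(mask)) < min_pixels

# Hypothetical loop shape (est is a FoundationPose estimator, as in
# run_demo.py; frames yields aligned color/depth plus a mask):
#   for i, (color, depth, mask) in enumerate(frames):
#       if i == 0 or should_reregister(mask):
#           pose = est.register(K=cam_K, rgb=color, depth=depth,
#                               ob_mask=mask, iteration=est_refine_iter)
#       else:
#           pose = est.track_one(rgb=color, depth=depth, K=cam_K,
#                                iteration=track_refine_iter)
```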
Hi @wenbowen123, I am getting the same error as mentioned here for a custom object (run_demo.py works fine for the demo objects). I pass my custom object mesh file and test scene dir as arguments to run_demo.py, so this line is being run. I also record depth values in m instead of mm as suggested here. Could there be another source of this error?
Edit: When run with --debug 3, I get the same error as above: "ValueError: zero-size array to reduction operation maximum which has no identity".
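The "zero-size array to reduction operation maximum" error usually means the crop built from the mask and depth ended up empty, i.e. every pixel was filtered out. A hedged sanity check to run before calling register() (the depth bounds are assumptions; adjust them to your scene):

```python
import numpy as np

def check_register_inputs(depth, mask, zmin=0.001, zmax=4.0):
    """Return True if the mask selects at least one valid depth pixel.

    zmin/zmax are assumed plausibility bounds in meters. An all-False
    mask, or a mask covering only pixels with depth 0 (no measurement)
    or out-of-range depth (e.g. depth still in mm), is exactly the
    situation that produces a zero-size reduction downstream.
    """
    if mask.sum() == 0:
        return False
    z = depth[mask.astype(bool)]
    valid = (z > zmin) & (z < zmax)
    return bool(valid.any())
```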
I want to apply the FoundationPose model to real-time input and output. This is the modified code.
In this code, I am facing an issue when registering the first frame:
pose = est.register(K=cam_K, rgb=color, depth=depth, ob_mask=mask, iteration=args.est_refine_iter)
I think the masking logic is working; this is the code for depth and RGB alignment.
I am getting an error like this: