NVlabs / FoundationPose

[CVPR 2024 Highlight] FoundationPose: Unified 6D Pose Estimation and Tracking of Novel Objects
https://nvlabs.github.io/FoundationPose/

My machine has multiple GPUs, but running run_demo.py on my own data (3072*2048 images) raises an error #112

Open wsq1010 opened 3 weeks ago

wsq1010 commented 3 weeks ago

```
Traceback (most recent call last):
  File "run_demo_test.py", line 67, in <module>
    pose = est.register(K=reader.K, rgb=color, depth=depth, ob_mask=mask, iteration=args.est_refine_iter)
  File "/home/bowen/FoundationPose/estimater.py", line 220, in register
    scores, vis = self.scorer.predict(mesh=self.mesh, rgb=rgb, depth=depth, K=K, ob_in_cams=poses.data.cpu().numpy(), normal_map=normal_map, mesh_tensors=self.mesh_tensors, glctx=self.glctx, mesh_diameter=self.diameter, get_vis=self.debug>=2)
  File "/opt/conda/envs/my/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/bowen/FoundationPose/learning/training/predict_score.py", line 180, in predict
    pose_data = make_crop_data_batch(self.cfg.input_resize, ob_in_cams, mesh, rgb, depth, K, crop_ratio=self.cfg['crop_ratio'], glctx=glctx, mesh_tensors=mesh_tensors, dataset=self.dataset, cfg=self.cfg, mesh_diameter=mesh_diameter)
  File "/opt/conda/envs/my/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/bowen/FoundationPose/learning/training/predict_score.py", line 110, in make_crop_data_batch
    pose_data = dataset.transform_batch(pose_data, H_ori=H, W_ori=W, bound=1)
  File "/home/bowen/FoundationPose/learning/datasets/h5_dataset.py", line 179, in transform_batch
    batch = self.transform_depth_to_xyzmap(batch, H_ori, W_ori, bound=bound)
  File "/home/bowen/FoundationPose/learning/datasets/h5_dataset.py", line 160, in transform_depth_to_xyzmap
    depthBs_ori = kornia.geometry.transform.warp_perspective(batch.depthBs.cuda().expand(bs,-1,-1,-1), crop_to_oris, dsize=(H_ori, W_ori), mode='nearest', align_corners=False)
  File "/opt/conda/envs/my/lib/python3.8/site-packages/kornia/geometry/transform/imgwarp.py", line 124, in warp_perspective
    grid = transform_points(src_norm_trans_dst_norm[:, None, None], grid)
  File "/opt/conda/envs/my/lib/python3.8/site-packages/kornia/geometry/linalg.py", line 190, in transform_points
    points_1_h = convert_points_to_homogeneous(points_1)  # BxNxD+1
  File "/opt/conda/envs/my/lib/python3.8/site-packages/kornia/geometry/conversions.py", line 204, in convert_points_to_homogeneous
    return pad(points, [0, 1], "constant", 1.0)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 17.72 GiB (GPU 0; 23.67 GiB total capacity; 12.84 GiB already allocated; 9.31 GiB free; 12.88 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

LiYan0306 commented 3 weeks ago

You may need to resize your input images. I resized mine by 0.3x (2169*3840 input, RTX 4090).
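For concreteness, a minimal sketch of this kind of preprocessing (plain NumPy nearest-neighbor indexing as a stand-in for `cv2.resize`; the function and variable names here are illustrative, not FoundationPose API):

```python
import numpy as np

def downscale_inputs(rgb, depth, K, scale=0.5):
    """Resize RGB/depth and scale the pinhole intrinsics to match.

    rgb:   HxWx3 uint8 image
    depth: HxW depth map (meters)
    K:     3x3 camera intrinsics
    scale: resize factor (e.g. 0.3 turns 3072x2048 into ~922x614)
    """
    H, W = depth.shape[:2]
    Hs, Ws = int(H * scale), int(W * scale)
    # Nearest-neighbor index maps; nearest-neighbor keeps depth values
    # from being blended across object edges.
    ys = (np.arange(Hs) / scale).astype(int)
    xs = (np.arange(Ws) / scale).astype(int)
    rgb_s = rgb[ys][:, xs]
    depth_s = depth[ys][:, xs]
    K_s = K.astype(np.float64).copy()
    K_s[:2] *= scale  # fx, fy, cx, cy all scale with the image
    return rgb_s, depth_s, K_s
```

The key point is the last step: fx, fy, cx, and cy must be multiplied by the same factor as the image, while the bottom row of K stays `[0, 0, 1]`.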

wsq1010 commented 3 weeks ago

But our camera intrinsics are calibrated for the original resolution. If we resize the images and change them, won't that cause other problems? Looking forward to your answer.

LiYan0306 commented 3 weeks ago

Here is my solution:

downscale is the key parameter.

wsq1010 commented 3 weeks ago

This is what happens when I change it:

apavani2 commented 3 weeks ago

@wsq1010 the scale of the mesh is most probably incorrect. Try running with --debug 3, then open the scene_raw.ply or scene_complete.ply file from the debug folder together with your original mesh file in Blender. If the scale of the CAD object doesn't match the scale of your .ply, rescale your CAD object to match scene_raw.ply or scene_complete.ply. That should fix your problem.
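To compare the two scales numerically instead of eyeballing them in Blender, here is a rough sketch (`extent_ratio` is a hypothetical helper; obtaining the vertex arrays, e.g. via `trimesh.load(...).vertices`, is assumed):

```python
import numpy as np

def extent_ratio(cad_vertices, scene_points):
    """Compare bounding-box diagonals of the CAD mesh and the
    reconstructed scene cloud. A ratio near 1000 usually means a
    millimeters-vs-meters unit mismatch in the CAD file."""
    cad_diag = np.linalg.norm(np.ptp(cad_vertices, axis=0))
    scene_diag = np.linalg.norm(np.ptp(scene_points, axis=0))
    return cad_diag / scene_diag
```

If the ratio comes out around 1000, rescaling the CAD mesh by 0.001 (mm to m) before passing it in is the usual fix.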

wsq1010 commented 1 week ago

Hello, may I ask: is the pose we get after scaling expressed relative to the scaled input, or to the original 3072*2048 resolution?
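For what it's worth: the estimated pose lives in metric 3D camera coordinates, so as long as K is scaled together with the image, the pose itself is unchanged; only the pixel projections scale. A quick pinhole-geometry sanity check with plain NumPy (illustrative values, not the actual camera here):

```python
import numpy as np

def project(K, p):
    """Pinhole projection of a 3D camera-frame point to pixel coordinates."""
    uvw = K @ p
    return uvw[:2] / uvw[2]

K = np.array([[3000.0, 0.0, 1536.0],
              [0.0, 3000.0, 1024.0],
              [0.0, 0.0, 1.0]])
p = np.array([0.1, -0.05, 0.8])  # object point in meters, camera frame

scale = 0.3
K_s = K.copy()
K_s[:2] *= scale  # intrinsics scaled together with the image

# The same 3D point projects to pixel coordinates scaled by the same
# factor; the 3D pose that produced it needs no rescaling.
uv = project(K, p)
uv_s = project(K_s, p)
assert np.allclose(uv_s, scale * uv)
```

So the pose you get back should be directly comparable to one estimated at full 3072*2048 resolution, provided the intrinsics were scaled consistently with the images.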