ammar-n-abbas / FoundationPoseROS2

FoundationPoseROS2 is a ROS2-integrated system for 6D object pose estimation and tracking, based on the FoundationPose architecture. It uses RealSense2 with the Segment Anything Model 2 (SAM2) framework for end-to-end, model-based, real-time pose estimation and tracking of novel objects.
MIT License

Provide rosbag for reproduction #6

Closed by mcres 1 week ago

mcres commented 1 week ago

Like #3, I cannot detect the pose of a custom object.

It would be nice to reproduce the results presented here in my own setup, to make sure there are no issues with dependencies or anything else.

Could you provide a rosbag with the color and depth images, and the camera info so that we can detect the objects defined in demo_data/? This way I could check that my pipeline has been built correctly.

ammar-n-abbas commented 1 week ago

Thank you for the suggestion. The README is now updated with the rosbag demo.

mcres commented 1 week ago

Thanks! I'm posting the command with the remappings needed for the default /pose_estimation_node subscription topics, in case anyone else gets stuck on that:

```bash
ros2 bag play cube_demo_data_rosbag2 --remap \
  /camera/camera/aligned_depth_to_color/image_raw:=/camera/aligned_depth_to_color/image_raw \
  /camera/camera/color/image_raw:=/camera/color/image_raw \
  /camera/camera/color/camera_info:=/camera/color/camera_info
```
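
If you are unsure which topic names were actually recorded in the bag (and therefore what needs remapping), you can list them with rosbag2_py. This is only a sketch: it assumes the bag directory name above and the default sqlite3 storage used by ros2 bag on Humble.

```python
# List the topics stored in the demo bag so the remappings above can be verified.
import rosbag2_py

reader = rosbag2_py.SequentialReader()
reader.open(
    rosbag2_py.StorageOptions(uri="cube_demo_data_rosbag2", storage_id="sqlite3"),
    rosbag2_py.ConverterOptions(
        input_serialization_format="cdr", output_serialization_format="cdr"
    ),
)
for topic in reader.get_all_topics_and_types():
    print(topic.name, "->", topic.type)
```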
mcres commented 1 week ago

Unfortunately, I am not able to reproduce the pose estimation of the cube.

After running the foundationpose_ros_multi.py script and the rosbag, then clicking on the cube and pressing Enter, I get the following error:

Error log 1:

```bash
[INFO] [1731527852.431736667] [pose_estimation_node]: Object 0 selected.
[reset_object()] self.diameter:0.1299038105676658, vox_size:0.006495190528383291
[reset_object()] self.pts:torch.Size([8, 3])
[reset_object()] reset done
[make_rotation_grid()] cam_in_obs:(42, 4, 4)
[make_rotation_grid()] rot_grid:(252, 4, 4)
num original candidates = 252
num of pose after clustering: 252
[make_rotation_grid()] after cluster, rot_grid:(252, 4, 4)
[make_rotation_grid()] self.rot_grid: torch.Size([252, 4, 4])
Traceback (most recent call last):
  File "/home/martinho/acceleration/foundation_pose_ws/src_2/FoundationPoseROS2/foundationpose_ros_multi.py", line 377, in <module>
    main()
  File "/home/martinho/acceleration/foundation_pose_ws/src_2/FoundationPoseROS2/foundationpose_ros_multi.py", line 372, in main
    rclpy.spin(node)
  File "/opt/ros/humble/local/lib/python3.10/dist-packages/rclpy/__init__.py", line 222, in spin
    executor.spin_once()
  File "/opt/ros/humble/local/lib/python3.10/dist-packages/rclpy/executors.py", line 739, in spin_once
    self._spin_once_impl(timeout_sec)
  File "/opt/ros/humble/local/lib/python3.10/dist-packages/rclpy/executors.py", line 736, in _spin_once_impl
    raise handler.exception()
  File "/opt/ros/humble/local/lib/python3.10/dist-packages/rclpy/task.py", line 239, in __call__
    self._handler.send(None)
  File "/opt/ros/humble/local/lib/python3.10/dist-packages/rclpy/executors.py", line 437, in handler
    await call_coroutine(entity, arg)
  File "/opt/ros/humble/local/lib/python3.10/dist-packages/rclpy/executors.py", line 362, in _execute_subscription
    await await_or_execute(sub.callback, msg)
  File "/opt/ros/humble/local/lib/python3.10/dist-packages/rclpy/executors.py", line 107, in await_or_execute
    return callback(*args)
  File "/home/martinho/acceleration/foundation_pose_ws/src_2/FoundationPoseROS2/foundationpose_ros_multi.py", line 147, in depth_callback
    self.process_images()
  File "/home/martinho/acceleration/foundation_pose_ws/src_2/FoundationPoseROS2/foundationpose_ros_multi.py", line 294, in process_images
    self.meshes = [self.meshes[idx] for idx in selected_indices]
  File "/home/martinho/acceleration/foundation_pose_ws/src_2/FoundationPoseROS2/foundationpose_ros_multi.py", line 294, in <listcomp>
    self.meshes = [self.meshes[idx] for idx in selected_indices]
IndexError: list index out of range
```

If I comment out the lines that cause the error (I'm not sure why self.meshes and self.bounds need to be set at this point anyway), the script no longer crashes. However, no pose is detected, as indicated by a) the visualizer and b) the ROS2 node not publishing any pose:

Mask selection screen: Screenshot from 2024-11-13 20-51-35
Pose estimation screen (no pose detected): Screenshot from 2024-11-13 20-51-39
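
A bounds guard along these lines would avoid the IndexError without commenting the lines out entirely. This is only a sketch: filter_once is a hypothetical helper, not code from this repository.

```python
def filter_once(items, selected_indices):
    """Return the items at selected_indices, or the list unchanged if the
    indices no longer fit (e.g. the list was already filtered in an earlier
    callback and would otherwise be indexed out of range)."""
    if items and all(0 <= i < len(items) for i in selected_indices):
        return [items[i] for i in selected_indices]
    return items


# Hypothetical usage inside process_images(), mirroring the traceback above:
# self.meshes = filter_once(self.meshes, selected_indices)
# self.bounds = filter_once(self.bounds, selected_indices)
```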

Here are the messages logged to the console:

Click to show error 2 ```bash (foundationpose_ros_2) martinho@acceleration-workstation:~/acceleration/foundation_pose_ws/src_2/FoundationPoseROS2$ python3 foundationpose_ros_multi.py /home/martinho/.local/lib/python3.10/site-packages/matplotlib/projections/__init__.py:63: UserWarning: Unable to import Axes3D. This may be due to multiple versions of Matplotlib being installed (e.g. as a system package and as a pip package). As a result, the 3D projection is not available. warnings.warn("Unable to import Axes3D. This may be due to multiple versions of " Warp 1.3.1 initialized: CUDA Toolkit 12.5, Driver 12.7 Devices: "cpu" : "x86_64" "cuda:0" : "NVIDIA GeForce RTX 3060" (12 GiB, sm_86, mempool enabled) Kernel cache: /home/martinho/.cache/warp/1.3.1 [__init__()] self.cfg: lr: 0.0001 c_in: 6 zfar: 'Infinity' debug: null n_view: 1 run_id: 3wy8qqex use_BN: true exp_name: 2024-01-11-20-02-45 n_epochs: 62 save_dir: /home/bowenw/debug/2024-01-11-20-02-45/ use_mask: false loss_type: pairwise_valid optimizer: adam batch_size: 64 crop_ratio: 1.1 enable_amp: true use_normal: false max_num_key: null warmup_step: -1 input_resize: - 160 - 160 max_step_val: 1000 vis_interval: 1000 weight_decay: 0 normalize_xyz: true resume_run_id: null clip_grad_norm: 'Infinity' lr_epoch_decay: 500 render_backend: nvdiffrast train_num_pair: 5 lr_decay_epochs: - 50 n_epochs_warmup: 1 make_pair_online: false gradient_max_norm: 'Infinity' max_step_per_epoch: 10000 n_rendering_workers: 1 save_epoch_interval: 100 n_dataloader_workers: 100 split_objects_across_gpus: true ckpt_dir: /home/martinho/acceleration/foundation_pose_ws/src_2/FoundationPoseROS2/FoundationPose/learning/training/../../weights/2024-01-11-20-02-45/model_best.pth [__init__()] self.h5_file:None [__init__()] Using pretrained model from /home/martinho/acceleration/foundation_pose_ws/src_2/FoundationPoseROS2/FoundationPose/learning/training/../../weights/2024-01-11-20-02-45/model_best.pth [__init__()] init done [__init__()] welcome [__init__()] self.cfg: lr: 0.0001 c_in: 6 zfar: .inf debug: null w_rot: 0.1 n_view: 1 run_id: null use_BN: true rot_rep: axis_angle ckpt_dir: /home/martinho/acceleration/foundation_pose_ws/src_2/FoundationPoseROS2/FoundationPose/learning/training/../../weights/2023-10-28-18-33-37/model_best.pth exp_name: 2023-10-28-18-33-37 save_dir: /tmp/2023-10-28-18-33-37/ loss_type: l2 optimizer: adam trans_rep: tracknet batch_size: 64 crop_ratio: 1.2 use_normal: false BN_momentum: 0.1 max_num_key: null warmup_step: -1 input_resize: - 160 - 160 max_step_val: 1000 normal_uint8: false vis_interval: 1000 weight_decay: 0 n_max_objects: null normalize_xyz: true clip_grad_norm: 'Infinity' rot_normalizer: 0.3490658503988659 trans_normalizer: - 0.019999999552965164 - 0.019999999552965164 - 0.05000000074505806 max_step_per_epoch: 25000 val_epoch_interval: 10 n_dataloader_workers: 60 enable_amp: true use_mask: false [__init__()] self.h5_file: [__init__()] Using pretrained model from /home/martinho/acceleration/foundation_pose_ws/src_2/FoundationPoseROS2/FoundationPose/learning/training/../../weights/2023-10-28-18-33-37/model_best.pth [__init__()] init done [INFO] [1731527445.459141905] [pose_estimation_node]: Camera intrinsic matrix initialized: [[ 608.77 0 323.64] [ 0 607.41 240.36] [ 0 0 1]] 0: 1024x1024 1 0, 1 1, 1 2, 1 3, 1 4, 1 5, 1 6, 1 7, 1 8, 1 9, 1 10, 1 11, 1 12, 1 13, 5292.5ms Speed: 9.7ms preprocess, 5292.5ms inference, 0.6ms postprocess per image at shape (1, 3, 1024, 1024) [INFO] [1731527453.104550673] [pose_estimation_node]: Object 0 selected. 
[reset_object()] self.diameter:0.1299038105676658, vox_size:0.006495190528383291 [reset_object()] self.pts:torch.Size([8, 3]) [reset_object()] reset done [make_rotation_grid()] cam_in_obs:(42, 4, 4) [make_rotation_grid()] rot_grid:(252, 4, 4) num original candidates = 252 num of pose after clustering: 252 [make_rotation_grid()] after cluster, rot_grid:(252, 4, 4) [make_rotation_grid()] self.rot_grid: torch.Size([252, 4, 4]) selected_indices = [0] len(self.meshes) = 0 [register()] Welcome Module Utils 64702ec load on device 'cuda:0' took 0.28 ms (cached) [register()] poses:(252, 4, 4) [register()] after viewpoint, add_errs min:-1.0 /usr/local/lib/python3.10/dist-packages/torch/__init__.py:614: UserWarning: torch.set_default_tensor_type() is deprecated as of PyTorch 2.1, please use torch.set_default_dtype() and torch.set_default_device() as alternatives. (Triggered internally at ../torch/csrc/tensor/python_tensor.cpp:451.) _C._set_default_tensor_type(t) [predict()] ob_in_cams:(252, 4, 4) [predict()] self.cfg.use_normal:False [predict()] trans_normalizer:[0.019999999552965164, 0.019999999552965164, 0.05000000074505806], rot_normalizer:0.3490658503988659 [predict()] making cropped data [make_crop_data_batch()] Welcome make_crop_data_batch [make_crop_data_batch()] make tf_to_crops done [make_crop_data_batch()] render done /usr/local/lib/python3.10/dist-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3526.) return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined] [make_crop_data_batch()] warp done [make_crop_data_batch()] pose batch data done [predict()] forward start [predict()] forward done [predict()] making cropped data [make_crop_data_batch()] Welcome make_crop_data_batch [make_crop_data_batch()] make tf_to_crops done [make_crop_data_batch()] render done [make_crop_data_batch()] warp done [make_crop_data_batch()] pose batch data done [predict()] forward start [predict()] forward done [predict()] making cropped data [make_crop_data_batch()] Welcome make_crop_data_batch [make_crop_data_batch()] make tf_to_crops done [make_crop_data_batch()] render done [make_crop_data_batch()] warp done [make_crop_data_batch()] pose batch data done [predict()] forward start [predict()] forward done [predict()] making cropped data [make_crop_data_batch()] Welcome make_crop_data_batch [make_crop_data_batch()] make tf_to_crops done [make_crop_data_batch()] render done [make_crop_data_batch()] warp done [make_crop_data_batch()] pose batch data done [predict()] forward start [predict()] forward done [predict()] ob_in_cams:(252, 4, 4) [predict()] self.cfg.use_normal:False [predict()] making cropped data [make_crop_data_batch()] Welcome make_crop_data_batch [make_crop_data_batch()] make tf_to_crops done [make_crop_data_batch()] render done [make_crop_data_batch()] pose batch data done [find_best_among_pairs()] pose_data.rgbAs.shape[0]: 252 [predict()] forward done [register()] final, add_errs min:-1.0 [register()] sort ids:tensor([ 2, 11, 17, 20, 0, 9, 28, 29, 37, 38, 57, 60, 95, 109, 116, 126, 142, 147, 179, 185, 189, 206, 212, 216, 222, 26, 30, 32, 34, 35, 41, 43, 44, 45, 47, 48, 55, 56, 59, 62, 64, 65, 69, 87, 88, 101, 107, 120, 121, 134, 144, 150, 151, 153, 154, 156, 157, 160, 162, 164, 168, 169, 170, 171, 173, 174, 175, 177, 181, 183, 184, 186, 187, 188, 191, 195, 198, 200, 201, 202, 203, 204, 207, 208, 210, 211, 214, 219, 220, 221, 225, 228, 229, 
231, 232, 235, 237, 238, 243, 245, 246, 5, 8, 14, 15, 18, 23, 24, 39, 54, 63, 84, 92, 98, 102, 104, 106, 108, 112, 119, 123, 127, 128, 129, 131, 137, 139, 141, 146, 158, 159, 172, 190, 192, 193, 196, 199, 217, 218, 227, 234, 239, 247, 249, 250, 3, 4, 6, 7, 12, 13, 21, 22, 27, 33, 36, 42, 72, 75, 78, 81, 85, 86, 91, 93, 99, 103, 105, 114, 118, 124, 125, 130, 132, 16, 19, 50, 51, 53, 66, 68, 71, 73, 76, 79, 82, 89, 97, 111, 122, 136, 138, 152, 155, 161, 163, 166, 230, 233, 241, 244, 1, 10, 52, 67, 148, 223, 236, 145, 226, 178, 205, 110, 143, 77, 74, 80, 83, 113, 140, 194, 251, 197, 248, 182, 215, 242, 167, 25, 40, 31, 46, 49, 70, 180, 213, 90, 96, 117, 135, 165, 240, 58, 61, 133, 94, 100, 115, 149, 176, 209, 224]) [register()] sorted scores:tensor([89.1406, 89.1406, 89.1328, 89.1328, 89.1250, 89.1250, 89.1250, 89.1250, 89.1250, 89.1250, 89.1250, 89.1250, 89.1250, 89.1250, 89.1250, 89.1250, 89.1250, 89.1250, 89.1250, 89.1250, 89.1250, 89.1250, 89.1250, 89.1250, 89.1250, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1172, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1094, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.1016, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0938, 89.0859, 89.0859, 89.0859, 89.0859, 89.0859, 89.0859, 89.0859, 89.0703, 89.0703, 89.0391, 89.0391, 88.9844, 88.9844, 88.9688, 88.9609, 88.9609, 88.9609, 88.9609, 88.9609, 88.9609, 88.9609, 88.9531, 88.9453, 88.8906, 88.8906, 88.7500, 88.7422, 88.6406, 88.6406, 88.6250, 88.6250, 88.6250, 88.6250, 88.6172, 88.6172, 88.6016, 88.6016, 88.6016, 88.6016, 88.6016, 88.5859, 88.5703, 88.5703, 88.5703, 88.5625, 88.5625, 88.5625, 88.5469, 88.5469, 88.5469, 88.5469]) [register()] Welcome [register()] poses:(252, 4, 4) [register()] after viewpoint, add_errs min:-1.0 [predict()] ob_in_cams:(252, 4, 4) [predict()] self.cfg.use_normal:False [predict()] trans_normalizer:[0.019999999552965164, 0.019999999552965164, 0.05000000074505806], rot_normalizer:0.3490658503988659 [predict()] making cropped data [make_crop_data_batch()] Welcome make_crop_data_batch [make_crop_data_batch()] make tf_to_crops done [make_crop_data_batch()] render done [make_crop_data_batch()] warp done 
[make_crop_data_batch()] pose batch data done [predict()] forward start [predict()] forward done [predict()] making cropped data [make_crop_data_batch()] Welcome make_crop_data_batch [make_crop_data_batch()] make tf_to_crops done [make_crop_data_batch()] render done [make_crop_data_batch()] warp done [make_crop_data_batch()] pose batch data done [predict()] forward start [predict()] forward done [predict()] making cropped data [make_crop_data_batch()] Welcome make_crop_data_batch [make_crop_data_batch()] make tf_to_crops done [make_crop_data_batch()] render done [make_crop_data_batch()] warp done [make_crop_data_batch()] pose batch data done [predict()] forward start [predict()] forward done [predict()] making cropped data [make_crop_data_batch()] Welcome make_crop_data_batch [make_crop_data_batch()] make tf_to_crops done [make_crop_data_batch()] render done [make_crop_data_batch()] warp done [make_crop_data_batch()] pose batch data done [predict()] forward start [predict()] forward done [predict()] ob_in_cams:(252, 4, 4) [predict()] self.cfg.use_normal:False [predict()] making cropped data [make_crop_data_batch()] Welcome make_crop_data_batch [make_crop_data_batch()] make tf_to_crops done [make_crop_data_batch()] render done [make_crop_data_batch()] pose batch data done [find_best_among_pairs()] pose_data.rgbAs.shape[0]: 252 [predict()] forward done [register()] final, add_errs min:-1.0 [register()] sort ids:tensor([ 0, 4, 7, 9, 13, 14, 22, 23, 26, 27, 28, 29, 32, 33, 36, 37, 38, 41, 42, 47, 54, 57, 60, 63, 75, 78, 81, 86, 88, 92, 93, 95, 98, 104, 105, 109, 114, 116, 119, 121, 125, 126, 131, 132, 137, 142, 147, 153, 154, 157, 160, 162, 165, 169, 171, 179, 180, 183, 185, 186, 187, 189, 195, 198, 201, 202, 206, 210, 212, 213, 216, 219, 220, 222, 225, 228, 229, 235, 238, 240, 243, 246, 3, 5, 6, 8, 12, 15, 16, 18, 19, 21, 24, 30, 34, 35, 39, 43, 44, 45, 48, 50, 51, 52, 53, 55, 56, 59, 62, 64, 65, 66, 67, 68, 69, 71, 72, 73, 76, 79, 82, 84, 85, 87, 89, 91, 97, 99, 101, 102, 103, 106, 107, 108, 111, 112, 118, 120, 122, 123, 124, 127, 128, 129, 130, 134, 138, 139, 141, 144, 146, 148, 150, 151, 152, 156, 158, 159, 161, 163, 164, 166, 168, 170, 173, 174, 175, 177, 181, 184, 188, 190, 191, 192, 193, 199, 200, 203, 204, 207, 208, 211, 214, 217, 218, 221, 223, 227, 231, 232, 233, 234, 236, 237, 239, 241, 244, 245, 249, 250, 1, 10, 136, 155, 172, 196, 230, 247, 2, 11, 17, 20, 25, 31, 40, 49, 70, 46, 58, 61, 96, 135, 90, 117, 167, 182, 215, 242, 113, 197, 248, 74, 77, 80, 83, 110, 140, 143, 194, 251, 94, 100, 115, 133, 149, 176, 209, 224, 145, 226, 178, 205]) [register()] sorted scores:tensor([80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6562, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 
80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6406, 80.6250, 80.6250, 80.6250, 80.6250, 80.6250, 80.6250, 80.6250, 80.6250, 80.5938, 80.5938, 80.5938, 80.5938, 80.5938, 80.5938, 80.5938, 80.5938, 80.5938, 80.5781, 80.4688, 80.4688, 80.3750, 80.3750, 80.3594, 80.3594, 80.1562, 80.1562, 80.1562, 80.1562, 80.1094, 80.1094, 80.1094, 80.0938, 80.0938, 80.0938, 80.0938, 80.0938, 80.0938, 80.0938, 80.0938, 80.0938, 80.0625, 80.0625, 80.0625, 80.0625, 79.9844, 79.9844, 79.9844, 79.9844, 79.8438, 79.8438, 79.7188, 79.7188]) ... ```

Do you have any suggestions @ammar-n-abbas?

ammar-n-abbas commented 1 week ago

@mcres can you check whether demo_data/ actually contains the meshes and whether the file selector can read them? Could you also show the file selector GUI? The mesh selection GUI says the next object is "None", which should not be the case.
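
A quick way to check this is to try loading every mesh under demo_data/ with trimesh. This is a sketch: the directory layout and file extensions are assumptions based on this thread.

```python
# Sanity-check that the demo meshes exist and are readable.
import glob
import trimesh

mesh_files = sorted(
    glob.glob("demo_data/**/*.obj", recursive=True)
    + glob.glob("demo_data/**/*.stl", recursive=True)
)
print(f"Found {len(mesh_files)} mesh file(s)")
for path in mesh_files:
    mesh = trimesh.load(path, force="mesh")
    print(path, "vertices:", len(mesh.vertices), "faces:", len(mesh.faces))
```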

mcres commented 1 week ago

@ammar-n-abbas that seems fine. Here is a video of the whole procedure: foundation_pose_rosbag.webm

Some comments:

Could you perhaps share a screen recording and your console log from a successful run of the example, so we can compare?

vmayoral commented 1 week ago

Connected to https://github.com/ammar-n-abbas/FoundationPoseROS2/issues/3

ammar-n-abbas commented 1 week ago

There was a minor bug in the code, which has now been fixed; apologies for that. A screen recording of the rosbag demo has been added to the README along with the terminal log.

mrtnbm commented 1 week ago

> There was a minor bug in the code, which has now been fixed; apologies for that. A screen recording of the rosbag demo has been added to the README along with the terminal log.

Hey Ammar,

Thank you for your work. Unfortunately, a different error comes up now:

I now get `AttributeError: 'FoundationPose' object has no attribute 'is_register'. Did you mean: 'register'?` at line 307: https://github.com/ammar-n-abbas/FoundationPoseROS2/blob/6931cf0f1e1bedba3ee58569303bfb0d59d12233/foundationpose_ros_multi.py#L307

I could only find the register() function in the FoundationPose class in estimater.py. Where is is_register defined?
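
For context, the upstream FoundationPose demo drives registration and tracking from the caller (register() on the first frame, track_one() afterwards). One way a per-object flag like is_register could be added to the estimator is sketched below; this is a guess at what the modified estimater.py might do, not the actual code.

```python
# Hypothetical sketch only: the attribute and where it is set are assumptions,
# not the repository's actual estimater.py.
class FoundationPose:
    def __init__(self, *args, **kwargs):
        # False until the first successful global registration.
        self.is_register = False

    def register(self, K, rgb, depth, ob_mask, iteration=5):
        # ... global pose initialization ...
        self.is_register = True

    def track_one(self, rgb, depth, K, iteration=2):
        # Tracking refines the previous frame's pose, so registration must
        # have happened at least once.
        assert self.is_register, "call register() before track_one()"
        # ... pose refinement ...
```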

Thank you for your time, mb

ammar-n-abbas commented 1 week ago

The file (foundationpose_ros_multi.py) has been modified, and the repo now includes the modified lines in estimater.py. Please pull the latest commit of this repo.

mrtnbm commented 1 week ago

> The file (foundationpose_ros_multi.py) has been modified, and the repo now includes the modified lines in estimater.py. Please pull the latest commit of this repo.

Thank you, that works! After looking at the demo files from FoundationPose, I eventually found a workaround as well: https://github.com/ammar-n-abbas/FoundationPoseROS2/issues/3#issuecomment-2476408986