lardemua / atom

Calibration tools for multi-sensor, multi-modal robotic systems
GNU General Public License v3.0

Hand-eye static visualization #115

Closed by miguelriemoliveira 4 years ago

miguelriemoliveira commented 4 years ago

The idea is to show all robot poses at the same time ... using several robot descriptions or a custom rviz marker with a mesh ...
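
For reference, a minimal sketch of the marker-based option: one latched MESH_RESOURCE marker per collection, so several robot poses show up in rviz at once. The mesh path, frame, topic name, and collection count below are hypothetical, not the actual implementation:

```python
#!/usr/bin/env python
# Sketch: one mesh marker per collection, all visible simultaneously in rviz.
import rospy
from visualization_msgs.msg import Marker, MarkerArray

rospy.init_node('static_robot_poses')
pub = rospy.Publisher('robot_meshes', MarkerArray, queue_size=1, latch=True)

markers = MarkerArray()
for idx in range(3):  # hypothetical: one marker per collection
    m = Marker()
    m.header.frame_id = 'base_link'
    m.header.stamp = rospy.Time.now()
    m.ns = 'collection_' + str(idx)
    m.id = idx
    m.type = Marker.MESH_RESOURCE
    m.action = Marker.ADD
    m.mesh_resource = 'package://interactive_calibration/meshes/link.dae'  # hypothetical path
    m.pose.orientation.w = 1.0
    m.scale.x = m.scale.y = m.scale.z = 1.0
    m.color.r, m.color.g, m.color.b, m.color.a = 0.8, 0.8, 0.8, 0.5  # semi-transparent
    markers.markers.append(m)

pub.publish(markers)
rospy.spin()
```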

miguelriemoliveira commented 4 years ago

@eupedrosa is doing the dynamic motion visualization by publishing the joint values to make the robot move ... #116.
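
For contrast, a minimal sketch of that dynamic approach: replay joint values on /joint_states so robot_state_publisher animates the robot. The joint names and motion below are hypothetical:

```python
#!/usr/bin/env python
# Sketch: publish joint values so the (single) robot model moves in rviz.
import rospy
from sensor_msgs.msg import JointState

rospy.init_node('replay_joint_values')
pub = rospy.Publisher('joint_states', JointState, queue_size=1)

rate = rospy.Rate(10)
angle = 0.0
while not rospy.is_shutdown():
    msg = JointState()
    msg.header.stamp = rospy.Time.now()
    msg.name = ['shoulder_pan_joint', 'shoulder_lift_joint']  # hypothetical joints
    msg.position = [angle, -angle]
    pub.publish(msg)
    angle += 0.01
    rate.sleep()
```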

miguelriemoliveira commented 4 years ago

Will start working on this ...

miguelriemoliveira commented 4 years ago

Hi @eupedrosa , can you tell me what stuff needs to be installed to use the ur10e?

I will need the urdf and the 3d models for this issue ...

eupedrosa commented 4 years ago

I updated the repo with configuration files for the ur10e. For the ur10e main description file you must check out https://github.com/iris-ua/iris_ur10e.git
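
The checkout would be something like this, assuming a catkin workspace at ~/catkin_ws (adjust paths as needed):

```bash
cd ~/catkin_ws/src
git clone https://github.com/iris-ua/iris_ur10e.git
cd ~/catkin_ws && catkin_make
```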

miguelriemoliveira commented 4 years ago

Thanks. I will look into it.

miguelriemoliveira commented 4 years ago

Moving forward. I am able to draw the robot at each collection ...

[screenshots: robot rendered at each collection]

Still have a lot of bugs; will try to clean up tomorrow.

eupedrosa commented 4 years ago

Looks nice. We can use this visualization to depict the hand-eye problem.

miguelriemoliveira commented 4 years ago

Ok, I think all issues are solved. Will arrange the code better once I have a chance but for now it is functional.

A nice bonus, we can also produce images with colors associated with each collection, so that we can refer to the figures you've shown in #103. Check it out.

[screenshots: robot per collection, one color per collection, with alpha]

Without alpha:

[screenshot: per-collection colors without alpha]
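
For the record, a minimal sketch of how a distinct color per collection could be produced with a matplotlib colormap; the colormap choice and alpha value are assumptions, not the actual code:

```python
# Sketch: map each collection index to a distinct RGB color; the alpha
# channel distinguishes the transparent and opaque variants shown above.
from matplotlib import cm

n_collections = 10
colors = {}
for idx in range(n_collections):
    r, g, b, _ = cm.tab10(idx % 10)  # distinct base color per collection
    colors[idx] = {'with_alpha': (r, g, b, 0.4),  # semi-transparent variant
                   'opaque': (r, g, b, 1.0)}      # "without alpha" variant
```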

eupedrosa commented 4 years ago

:+1:

miguelriemoliveira commented 4 years ago

Hi @eupedrosa ,

I am done with adding visualization and a refactoring suggestion to the hand-in-eye code. I changed a lot of stuff, but most of it is just suggestions. We could meet some time next week to talk about them and, if you want, we can roll back.

Also, that talk could decide about #55 and #100 .

The hand-in-eye code can visualize the robot in all collections, as well as show the images from each camera.

Can you test it? You will need an additional repository (don't forget to install it):

https://github.com/miguelriemoliveira/rospy_urdf_to_rviz_converter
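
Assuming it is cloned into a catkin workspace like any other ROS package (the paths below are an assumption):

```bash
cd ~/catkin_ws/src
git clone https://github.com/miguelriemoliveira/rospy_urdf_to_rviz_converter.git
cd ~/catkin_ws && catkin_make
```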

Here's a video of what we can see:

https://youtu.be/NTdRuf9tFjo

eupedrosa commented 4 years ago

This looks very nice. However, something in the camera poses does not seem right to me, although the end pose looks like what I was expecting. I will do some tests to make sure everything is OK.

Nonetheless, great visualization :+1: We will definitely use this.

eupedrosa commented 4 years ago

@miguelriemoliveira, what is the procedure to view the ros visualization?

miguelriemoliveira commented 4 years ago

Hi @eupedrosa ,

Thanks. To run the code with visualization, I first launch rviz (which starts the roscore as well):

```bash
roslaunch interactive_calibration eye_calibration_view_optimization.launch rviz:=true
```

Then, from OptimizationUtils we run:

```bash
clear && test/hand_in_eye/eye_to_base_calib_miguel.py -json ~/datasets/eye_in_hand10/data_collected.json -csf "lambda x: int(x) <50" -rv -si
```

Note the new flag -rv (--ros_visualization), which is required to publish the ROS topics. If you also want to see the generic matplotlib optimization graphs, add -vo (--view_optimization).

The -si flag (--show_images) makes the code publish the annotated images, which can be viewed in rviz.
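
Conceptually, -si does something like this minimal sketch: annotate an image and publish it as a sensor_msgs/Image so rviz can display it. The topic name and the drawn annotation are placeholders, not the actual code:

```python
#!/usr/bin/env python
# Sketch: publish an annotated image on a topic viewable in rviz.
import cv2
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

rospy.init_node('show_images_sketch')
pub = rospy.Publisher('annotated_image', Image, queue_size=1)
bridge = CvBridge()

rate = rospy.Rate(1)
while not rospy.is_shutdown():
    img = np.zeros((480, 640, 3), dtype=np.uint8)    # stand-in for a camera image
    cv2.circle(img, (320, 240), 20, (0, 255, 0), 2)  # stand-in for the annotations
    msg = bridge.cv2_to_imgmsg(img, encoding='bgr8')
    msg.header.stamp = rospy.Time.now()
    pub.publish(msg)
    rate.sleep()
```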

eupedrosa commented 4 years ago

It works :+1:

You can now find a model of the pattern at https://github.com/lardemua/AtlasCarCalibration/tree/master/interactive_calibration/meshes

miguelriemoliveira commented 4 years ago

Here we have a charuco pattern ...

[screenshot: charuco pattern]

miguelriemoliveira commented 4 years ago

This is complete. Closing.

eupedrosa commented 4 years ago

More data: eye_to_base_2.zip