Closed: juliaschatz closed this issue 4 years ago
This needs to work. Chiefly, you need to calibrate your cameras, make a demo launch file for it, integrate it (somehow) into the repo or workspace, and address open issues.
Last year, we got correct poses.
Current status: switched to the new aruco version (possibly untested).
TODO: verify the new version.
Thoughts: The aruco_localization pkg seems to work, as long as the supporting library is installed (either manually or via OpenCV?). For the frame, remember that the messages specify a 3D pose in the camera frame, which, because of the URDF and the robot_state_publisher node, should be transformed into the robot frame. Then, because we're using this as a relative position estimator rather than a global one, the inverse of that pose (the opposite direction) gives where the robot is in the world frame. As for the trough, just get something printed out ASAP and fine-tune later. Who is working on this?
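To make the "inverse of that" step concrete, here is a minimal planar (x, y, yaw) sketch in plain Python; the real system is 3D and would go through tf, and the numbers here are made-up placeholders, not our calibration:

```python
import math

def make_transform(x, y, yaw):
    """Homogeneous 3x3 matrix for a planar pose (x, y, yaw)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, x],
            [s,  c, y],
            [0.0, 0.0, 1.0]]

def invert(T):
    """Invert a planar rigid transform: rotation transposed, t -> -R^T t."""
    (r00, r01, tx), (r10, r11, ty), _ = T
    return [[r00, r10, -(r00 * tx + r10 * ty)],
            [r01, r11, -(r01 * tx + r11 * ty)],
            [0.0, 0.0, 1.0]]

# Marker board seen 2 m in front of the camera, rotated 90 degrees
T_cam_marker = make_transform(2.0, 0.0, math.pi / 2)

# Inverting gives the camera's pose in the marker (~world) frame:
# here the camera sits at (0, 2) in the marker frame.
T_marker_cam = invert(T_cam_marker)
```

The same idea carries over to full 3D poses; tf does this inversion for you when you look up a transform in the opposite direction.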
I'm working on this. Last Monday I verified that the node works. I'm going to work on getting it published in the robot frame.
All you have to do in the msg is write `frame: <camera_name>_frame`. Everything else will be handled automatically.
So it looks like the aruco_localization node publishes the pose of the marker board with respect to the camera as a PoseStamped under the "estimate" topic. The frame_id of this message is "camera".
Where are the frame transforms defined (that is, how does ROS know how each frame relates to the others), and how do we apply them (e.g., transform the pose of the marker board relative to the camera into the pose of the robot relative to the marker board)?
They can be defined in the launch files (doc here) like: "static_transform_publisher x y z yaw pitch roll frame_id child_frame_id period_in_ms"
So we'd have one of these between the camera and robot frames, because that transform is fixed.
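As a sketch, a fixed camera-to-robot transform could go in the launch file like this (frame names and offsets below are placeholders, not our actual calibration; the args follow the `x y z yaw pitch roll frame_id child_frame_id period_in_ms` order above):

```xml
<launch>
  <!-- Camera mounted 0.2 m forward, 0.5 m up from the robot base (example values) -->
  <node pkg="tf" type="static_transform_publisher" name="camera_tf"
        args="0.2 0.0 0.5 0 0 0 base_link camera 100" />
</launch>
```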
The tf tree is useful for debugging (e.g. via rqt_tf_tree). There might be different ways to do this, but this is what I've used. @JWCS might know the best practices better.
Testing suggests this actually produces the camera pose. We still need to transform the camera pose into the robot pose, and the aruco marker map pose into the world pose.
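Those two remaining steps are just a chain of transforms. A hedged planar sketch of the chain, with entirely made-up placeholder numbers (the real static offset comes from the URDF and the real detection from aruco_localization):

```python
import math

def make_transform(x, y, yaw):
    """Homogeneous 3x3 matrix for a planar pose (x, y, yaw)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def invert(T):
    """Invert a planar rigid transform."""
    (r00, r01, tx), (r10, r11, ty), _ = T
    return [[r00, r10, -(r00 * tx + r10 * ty)],
            [r01, r11, -(r01 * tx + r11 * ty)],
            [0.0, 0.0, 1.0]]

def matmul(A, B):
    """3x3 matrix product; composes planar transforms."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Placeholder numbers: camera 0.3 m ahead of the robot origin,
# marker detected 2 m ahead of the camera, marker board at the world origin.
T_robot_cam    = make_transform(0.3, 0.0, 0.0)  # static (URDF)
T_cam_marker   = make_transform(2.0, 0.0, 0.0)  # from the "estimate" topic
T_world_marker = make_transform(0.0, 0.0, 0.0)  # where the board sits

# Chain: world <- marker <- camera <- robot
T_world_robot = matmul(T_world_marker,
                       matmul(invert(T_cam_marker), invert(T_robot_cam)))
# With these numbers the robot ends up 2.3 m behind the marker:
# T_world_robot[0][2] == -2.3
```

In practice tf does the bookkeeping: with the static camera transform published and the marker-to-world transform defined, a single `lookupTransform` gives the robot's world pose.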
https://github.com/GOFIRST-Robotics/NASA-RMC-2020-NorthstarRobotics/tree/aruco_odom produces poses of the robot in the "aruco" frame. Just needs testing.
@ralfe45 @evens352
@MTlachac What needs to be done here?