Closed AbBaSaMo closed 4 months ago
run through of configuration https://vimeo.com/142624091
As per this tutorial https://kapernikov.com/the-ros-robot_localization-package/ we need two instances of the package running to attain the full map -> odom -> base_link transform tree.
Will need one ekf node with world_frame set to odom_frame to get odom -> base_link. This fuses all continuous sources of data bar GPS [imu, odom, pose, twist].
Will then need one ekf node with world_frame set to map_frame to get map -> odom. This fuses all sources of data including GPS.
The differential setting should be false when fusing GPS via navsat_transform_node.
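A rough sketch of what the two-instance params file might look like. The node names (ekf_odom, ekf_map) and topic names (/odometry/wheel, /imu/data, /odometry/gps) are placeholders for whatever we actually run, not taken from the tutorial:

```yaml
# Hypothetical params sketch for the two ekf_node instances.
ekf_odom:
  ros__parameters:
    world_frame: odom          # this instance publishes odom -> base_link
    odom_frame: odom
    base_link_frame: base_link
    map_frame: map
    odom0: /odometry/wheel     # continuous sources only, no GPS
    imu0: /imu/data

ekf_map:
  ros__parameters:
    world_frame: map           # this instance publishes map -> odom
    odom_frame: odom
    base_link_frame: base_link
    map_frame: map
    odom0: /odometry/wheel
    imu0: /imu/data
    odom1: /odometry/gps       # GPS pose from navsat_transform_node
    odom1_differential: false  # keep false when fusing GPS via navsat
```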
this launch from automatic addision hints at what to do
full file for tute here https://drive.google.com/drive/folders/1NSJhg-omJ9IPQStsJ7scKiCSropqwM3Z
Further notes
Current todo
When the pkg is launched, we need to set the datum via a service. So we'll need a node that reads GPS values and calls this service ONCE, using the ENU convention where a 0 heading points east.
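The ENU detail matters when filling in the datum's orientation. A minimal pure-Python sketch (helper names are my own, not from robot_localization) of converting a compass heading into the ENU yaw and quaternion that a SetDatum request's GeoPose orientation would carry:

```python
import math

def compass_to_enu_yaw(compass_deg: float) -> float:
    """Convert a compass heading (0 = north, clockwise, degrees)
    to an ENU yaw (0 = east, counterclockwise, radians)."""
    yaw = math.radians(90.0 - compass_deg)
    # normalise to (-pi, pi]
    return math.atan2(math.sin(yaw), math.cos(yaw))

def yaw_to_quaternion(yaw: float):
    """Planar rotation about +Z as an (x, y, z, w) quaternion,
    the form a GeoPose orientation field expects."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

# A compass heading of 90 deg (due east) is an ENU yaw of 0,
# which is the identity quaternion.
print(compass_to_enu_yaw(90.0))   # 0.0
print(yaw_to_quaternion(0.0))     # (0.0, 0.0, 0.0, 1.0)
```

The one-shot node would read a single fix plus heading, build the GeoPose from lat/lon and this quaternion, and call the datum service once.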
So with regards to topic mapping, I'm keeping the original topic names, but because navsat_transform_node subscribes to specific names, I need to remap in its launch file.
So nav sat will have the following remaps
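A hypothetical launch snippet showing the shape of those remaps. The source topic names (/swiftnav/fix, /imu/data, /odometry/filtered) are placeholders for ours, and the node's exact subscription names vary between versions, so verify them with ros2 node info before copying this:

```python
# Sketch of a launch file remapping navsat_transform_node's inputs.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='robot_localization',
            executable='navsat_transform_node',
            remappings=[
                ('gps/fix', '/swiftnav/fix'),            # our GPS fix topic
                ('imu', '/imu/data'),                    # our IMU topic
                ('odometry/filtered', '/odometry/filtered'),  # odom-frame EKF output
            ],
        ),
    ])
```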
This is what the NavSatFix topic looks like
This is what TwistWithCovarianceStamped looks like
This is what the IMU message looks like
Keeping in mind fix is turned into a pose and that it ultimately gives positional info, we'd use the position part of the pose and not the orientation.
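In robot_localization that selection is done per sensor with a 15-element boolean vector [x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az]. A sketch of fusing only the position part of the GPS-derived pose (topic name is a placeholder):

```yaml
odom1: /odometry/gps
odom1_config: [true,  true,  true,    # x, y, z: use the position
               false, false, false,   # roll, pitch, yaw: ignore orientation
               false, false, false,   # vx, vy, vz
               false, false, false,   # vroll, vpitch, vyaw
               false, false, false]   # ax, ay, az
```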
Twist is not used by the ekf node in our setup, but here's the info anyway. Maybe there's some way of converting twist to pose. I think the swiftnav driver did provide pose, but I did not see it mentioned in the readme.
The IMU does not seem to provide orientation data (it stays the same regardless of how I move it around), but it does provide angular velocity and linear acceleration data.
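Given that, the IMU's config vector would only enable the fields it actually reports. A sketch, using the same assumed topic name as above:

```yaml
imu0: /imu/data
imu0_config: [false, false, false,   # no position from an IMU
              false, false, false,   # orientation: ours reports none
              false, false, false,   # linear velocity: not reported
              true,  true,  true,    # angular velocity: usable
              true,  true,  true]    # linear acceleration: usable
# Needed when fusing raw accelerometer data, which includes gravity:
imu0_remove_gravitational_acceleration: true
```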
@Akul-Saigal @Christina1508 keep in mind, the Swift Piksi must be placed in the right orientation within the vehicle. I can't find the docs that mentioned this, but from playing with it rn with the ROS node running: the X axis runs horizontally across the image, the Y axis vertically, and the Z axis into and out of the screen.
- Z axis: moving it out of the screen toward you is positive; into the screen is negative.
- Y axis: moving it vertically up the screen is negative; vertically down is positive.
- X axis: moving it right across the screen is negative; left across the screen is positive.
Update: no transform tree is published. The nodes are subscribed to and publishers of the right topics, though.
Based on this http://docs.ros.org/en/melodic/api/robot_localization/html/configuring_robot_localization.html it's because there needs to be a transform between the sensor frame and base_link_frame. Will try static publishing; all data atm is in the frame called swiftnav-gnss.
The above worked
So to summarise
@dylan-gonzalez @AnthonyZhOon just confirming, we run joint state publisher with an appropriate urdf to get transforms between sensors and base_link even when running outside of sim?
You usually write a transform publisher node or run ros2 run tf2_ros static_transform_publisher with measured configs outside of sim.
For own self as future reference
view frames with
ros2 run tf2_tools view_frames
arbitrary transform with
ros2 run tf2_ros static_transform_publisher 2 3 1 0 0 0 swiftnav-gnss base_link
Closing issue now as it works in the main repo with esda_launch.py
Dependency with https://github.com/Monash-Connected-Autonomous-Vehicle/ESDA/issues/22 and https://github.com/Monash-Connected-Autonomous-Vehicle/ESDA/issues/23