Open liuXinGangChina opened 6 months ago
Morning Kento-san @KYabuuchi , currently we are working on creating the map for the localization test. Due to the different sensor configurations between TIER IV and AutoCore, I wonder whether a 2 MP, 120° FoV camera suits your algorithm YabLoc?
@liuXinGangChina Good morning. 2 MP and 120° FoV are sufficient for operating YabLoc. :+1: (Increasing the resolution beyond this won't bring any benefits.) Please note that YabLoc relies not only on the camera but also on GNSS, IMU, and vehicle wheel odometry.
By the way, the link in the initial post might be incorrect. Please check it.
you can find the result here
I already updated the link, thank you for the reminder.
That will be great. Since our camera meets YabLoc's requirements and we have all the other sensors you mentioned, we will continue this task.
Thank you.
Morning, Yabuuchi-san @KYabuuchi . During the preparation of the test, we found a note in the code's limitations section that says "If the road boundary or road surface markings are not included in the Lanelet2, the estimation is likely to fail."
Currently our test Lanelet2 map only contains the lane lines of the road. Can you provide some additional material or an example showing what form "road boundary or road surface markings" should take in a Lanelet2 file?
Hi @liuXinGangChina , "road boundary or road surface markings" includes lane lines, stop lines, crosswalks, bus stops, etc.
The figure below is a lanelet2 map provided in the AWSIM tutorial, which is ideal as it contains all of these: crosswalks, stop lines, and bus stops.
Since highways usually only have lane lines, I think your map is sufficiently compatible with YabLoc.
Got it, that's true. There are only lane lines on the highway.
Hi, Yabuuchi-san @KYabuuchi . During our test on the highway, some issues confuse me. In the image below we can see that line extraction and graph segmentation went well on our dataset, but we got nothing in /pf/match_image and no lanes projected onto /pf/lanelet2_overlay_image. Do you have any clue?
@liuXinGangChina Please check that /localization/pose_estimator/yabloc/image_processing/projected_image and /localization/pose_estimator/yabloc/pf/cost_map_image are published correctly.
projected_image is an image of the line segments and segmentation results projected onto the ground. cost_map_image is an image of the cost map generated from lanelet2.
If projected_image is not being published, there may be no tf from base_link to the camera. If cost_map_image is not being published, some of the lanelet2 elements might not have been loaded properly.
Edit this as necessary:
https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/localization/yabloc/ll2_decomposer.param.yaml#L3
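If it helps, the two checks above can be done with the standard ROS 2 CLI. A minimal sketch (the topic names come from the comment above; the camera frame name is an assumption, substitute your actual frame):

```shell
# Check whether the two debug image topics are actually being published
ros2 topic hz /localization/pose_estimator/yabloc/image_processing/projected_image
ros2 topic hz /localization/pose_estimator/yabloc/pf/cost_map_image

# Verify that a tf from base_link to the camera frame exists
# ("camera_optical_link" is a placeholder - use your actual camera frame id)
ros2 run tf2_ros tf2_echo base_link camera_optical_link
```

If `tf2_echo` reports that the frame does not exist, the missing extrinsic transform would explain an unpublished projected_image.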
Thanks for your reply @KYabuuchi . I just checked my lanelet2 file; there are only lane_thin elements with subtypes dashed and solid in the map. So should I edit https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/localization/yabloc/ll2_decomposer.param.yaml#L3 so that only lane_thin is left in road_marking_labels?
@liuXinGangChina If road_marking_labels includes lane_thin, it is fine. There is no need to remove other elements.
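For reference, a minimal sketch of how that parameter file could look if you kept only lane lines. The key name follows the linked ll2_decomposer.param.yaml; the exact label strings are taken from this conversation and should be checked against your version of the file:

```yaml
# ll2_decomposer.param.yaml (sketch - verify keys against your autoware_launch checkout)
/**:
  ros__parameters:
    road_marking_labels: [lane_thin]  # keep only thin lane lines; add stop_line, crosswalk, etc. if your map has them
```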
Was /localization/pose_estimator/yabloc/pf/cost_map_image not being published?
@KYabuuchi , hi Yabuuchi-san. With your kind help, I can now run the whole YabLoc pipeline, but there are still some problems. When I manually or automatically initialize the first pose, the particle filter node works well and gives a good-looking distribution, but the EKF's output starts to move unpredictably (you can see the blue line, which represents the EKF history path, in the image). In that case I cannot get a correct "LL2 overlay image" result.
@liuXinGangChina LL2_overlay depends on /localization/pose_twist_fusion_filter/pose. Since that topic is published by ekf_localizer, the real issue is likely with ekf_localizer or the final output of YabLoc.
Please visualize /localization/pose_estimator/yabloc/pf/pose as shown in the image below and verify whether it is correct. This is the centroid position of the YabLoc particle filter.
Also, please check /localization/pose_estimator/pose_with_covariance. It is the output of YabLoc and the input of ekf_localizer. (Ideally, it would be great to visualize this topic as a pose history, but that is not supported.)
@KYabuuchi Thank you for your quick reply. I visualized the path of yabloc/pf and it looks pretty good. I list the topics and their rates below; maybe that can help find out why the EKF malfunctions.
@liuXinGangChina Good evening.
If the output of YabLoc is correct, then there might be an issue with twist_estimator/twist_with_covariance.
However, it is strange, because YabLoc's particle filter also uses that twist to update the particles. :thinking:
Could you record the topics in a ROS bag and share it? Recording all topics might make the ROS bag too large, so it would be helpful if you could record and share the following topics for investigating the cause.
/initialpose3d
/localization/pose_estimator/pose_with_covariance
/localization/twist_estimator/twist_with_covariance
/localization/pose_twist_fusion_filter/kinematic_state
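A recording command along those lines (topic names taken from the list above; the output bag name is arbitrary) might look like:

```shell
# Record only the four topics needed for the investigation, to keep the bag small
ros2 bag record -o yabloc_debug \
  /initialpose3d \
  /localization/pose_estimator/pose_with_covariance \
  /localization/twist_estimator/twist_with_covariance \
  /localization/pose_twist_fusion_filter/kinematic_state
```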
Or, less likely, maybe the covariance of twist_with_covariance is incorrect...
Highly likely; for now the covariance is a matrix made entirely of zeros. By the way, regarding the covariance matrix of twist_with_covariance, what values should I assign for linear x and angular z? @KYabuuchi
@liuXinGangChina You need to ensure that the diagonal elements of the matrix are always set to non-zero values.
example:
[0,0]= 0.04
[1,1]= 100000.0 # large value because we can not observe this
[2,2]= 100000.0 # large value because we can not observe this
[3,3]= 1.1492099999999998e-05
[4,4]= 1.1492099999999998e-05
[5,5]= 1.1492099999999998e-05
Other elements can be 0.
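As a sketch of how those values map onto the message: the covariance of a geometry_msgs/TwistWithCovariance is a flat, row-major 6×6 array of 36 floats, ordered [linear x, linear y, linear z, angular x, angular y, angular z]. Filling in the diagonal suggested above (plain Python; in a real node the list would be assigned to the message's `twist.covariance` field):

```python
# Flat 6x6 (row-major) covariance with the diagonal values suggested above.
UNOBSERVED = 100000.0  # large variance for axes we cannot observe

covariance = [0.0] * 36
covariance[0 * 6 + 0] = 0.04                    # linear x (observed from wheel speed)
covariance[1 * 6 + 1] = UNOBSERVED              # linear y
covariance[2 * 6 + 2] = UNOBSERVED              # linear z
covariance[3 * 6 + 3] = 1.1492099999999998e-05  # angular x
covariance[4 * 6 + 4] = 1.1492099999999998e-05  # angular y
covariance[5 * 6 + 5] = 1.1492099999999998e-05  # angular z
# All off-diagonal elements stay 0.
```

The key point is that every diagonal element is non-zero, since a zero variance on any axis can make the EKF treat that axis as perfectly known.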
Thanks for your quick reply, Yabuuchi-san @KYabuuchi . After assigning values to the matrix, the EKF now works well. I noticed that when GNSS is enabled for the pose initializer, the heading it gives may be wrong (because when the ego vehicle is stopped, it is hard to estimate heading using the GNSS antenna). I found that YabLoc introduces a camera pose initializer; will that help the pose initializer give a correct heading when GNSS only provides position?
@liuXinGangChina When Autoware is started with the option pose_source:=yabloc
, the initial position estimation that combines GNSS and camera will automatically be activated. It uses the GNSS observation position as the initial position and determines the orientation that best matches the image with lanelet2.
If yabloc_enabled and gnss_enabled are true in the output of the following command, then the initialization mechanism is active.
ros2 param dump /localization/util/pose_initializer
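To avoid scanning the full dump by eye, the relevant flags can be filtered out directly (same node name as above; the ndt/ekf flag names are assumptions based on the other flags mentioned in this thread):

```shell
# Show only the initializer enable flags from the parameter dump
ros2 param dump /localization/util/pose_initializer \
  | grep -E 'yabloc_enabled|gnss_enabled|ndt_enabled|ekf_enabled'
```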
Hi, Yabuuchi-san @KYabuuchi . I used the command you provided. I notice gnss_enabled is true, ekf_enabled is true, and ndt is false, but I could not find yabloc_enabled.
@liuXinGangChina It's hard to believe. 🤔 Did you possibly miss it since yabloc_enabled appears at the very bottom? I'll also share the results of my command. If it really doesn't exist, please provide the commit hash for autoware.universe.
Thanks for your help, Yabuuchi-san @KYabuuchi . Now I can run YabLoc well, but I found that lane line detection sometimes fails in far-away areas, around 10 meters away (green circle area), especially for dashed lanes; in that case the lane line match result may be inaccurate. What can I do about this issue? Are there any parameters to tune?
@liuXinGangChina In my experience, it's not necessary to extract all road markings within the visible range. Due to issues with extrinsic calibration and the impact of slopes, distant markings don't contribute much to accuracy. Additionally, false positives in road marking detection negatively impact accuracy, but false negatives do not.
It would be sufficient if this blue circle range could be detected. And I think this is the limit of what YabLoc will be able to detect by adjusting parameters.
Anyway, the parameters for line segment detection are rarely adjusted, so they are hard-coded here. See this document for the definition of each parameter.
If you try to adjust the parameters, it would be convenient to start the line_segment_detector and camera with the following command.
ros2 run yabloc_image_processing yabloc_line_segment_detector_node
ros2 run v4l2_camera v4l2_camera_node --ros-args -r /image_raw:=/line_detector/input/image_raw
Good night, Yabuuchi-san @KYabuuchi . During our test I found one small issue: the initial heading estimate (using auto-init with camera and GNSS) is sometimes inaccurate, which leads to a bimodal distribution (the particles split into two separate clusters). After a while, the particles automatically converge into one tight distribution and everything seems fine.
@liuXinGangChina Good morning, and thank you for reporting the issue. Honestly, the current initial position estimation in YabLoc is a very basic implementation and not the best solution. I would like to improve it, but I don't have enough time to address it.
If you want to resolve it with the existing implementation, increasing angle_resolution might help.
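For reference, a sketch of that override as a parameter file fragment. The file path and default value here are assumptions based on YabLoc's camera pose initializer configuration, so please verify them against your autoware_launch checkout:

```yaml
# camera_pose_initializer.param.yaml (assumed path under autoware_launch/config/localization/yabloc/)
/**:
  ros__parameters:
    angle_resolution: 60  # number of orientation candidates tried at initialization; raising it refines the initial heading search
```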
Thanks for your reply, Yabuuchi-san @KYabuuchi . The idea of the camera pose initializer is brilliant. I'm using a cheap GPS device, so sometimes the initial pose and orientation may be wrong, which leads to a bad estimation result.
Hi everyone, we have proven the feasibility of Autoware's camera-based localization pipeline in a highway scenario.
Item | Description | Additions |
---|---|---|
Test infrastructure | Closed test field including a highway (multi-lane, with ramp) | |
Test conditions | Speed range 100 km/h ~ 120 km/h | |
Ego sensor for localization | (image) | |
Ground truth | Asensing 571-INS with RTK | |
Test case | Result | Additions (red line: ground truth, blue line: EKF) | Deviation (blue: square root, green: longitudinal, red: lateral) |
---|---|---|---|
(image) | Failed, estimated pose is in the adjacent lane | | |
(image) | Successful; after the ego pose converges with the ground truth, it stays close to it, even during lane changes | | |
(image) | Successful; after the ego pose converges with the ground truth, it stays close to it, even during lane changes; however, it may sometimes fail due to wrong heading estimation | | |
(image) | Successful; after the ego pose converges with the ground truth, it stays close to it, even during lane changes | | |
Autoware's camera-based localization pipeline (YabLoc) can achieve lane-level localization accuracy in highway scenarios, although there are still some things that can be improved:
@liuXinGangChina Thank you for sharing these interesting experimental results. :clap: In each graph, there is a consistent error for the first 20 seconds. Is the vehicle stopped during this period?
That's right, yabuuchi-san @KYabuuchi .
That makes sense. Thanks for explaining!
Hi Yabuuchi-san @KYabuuchi . I left a message for you on Discourse related to this test; can you give your feedback when you notice it?
Create a separate page to summarize the test results here
Checklist
Description
Several months ago, we tested the NDT-based localization pipeline under 60 km/h; you can find the result here. We also noticed that Autoware has introduced a new vision-based localization method called "YabLoc", together with test reports here and here under 30 km/h. In the great march to achieve L4 autonomous driving, it is necessary for Autoware to fill the absence of the highway scenario. To make this happen, we plan to focus on highway localization first.
Purpose
Test Autoware's current localization pipeline in a highway scenario. Leave comments for localization enhancement if necessary.
Possible approaches
Definition of done