autowarefoundation / autoware.universe

https://autowarefoundation.github.io/autoware.universe/
Apache License 2.0

Update occupancy grid map frame to gain longer range visibility in the intersection #2906

Closed YoshiRi closed 4 months ago

YoshiRi commented 1 year ago

Description

Currently, we use the base_link frame to generate the occupancy grid map. The base_link frame is typically set at the center of the rear axle, toward the rear of the vehicle rather than at the LiDAR or the driver's position (see the figure below).

As a result, the field of view of an occupancy grid map generated in the base_link frame is narrower than what the driver or the sensors can actually see, for example when turning right at an intersection.

[figure: occupancy grid map field of view at an intersection]

Purpose

We need to change the frame in which the occupancy grid map is generated to gain longer-range visibility (e.g. at intersections).

Possible approaches

I think there are two possible approaches:

| name | figure | note |
| --- | --- | --- |
| current | (image) | current setting |
| plan A | (image) | generate the occupancy grid map in another frame |
| plan B | (image) | more faithful to sensor visibility |

Plan A should be easier to implement.
Plan B would accurately represent the visible range of each sensor.

Since we often rely on the top LiDAR alone to sense distant objects, I think adopting plan A and setting the frame to the top LiDAR sensor will be sufficient.
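To illustrate why moving the grid origin forward helps, here is a minimal 2D occlusion sketch. All geometry below is hypothetical (wall position, sensor offset, target cell are illustrative numbers, not taken from this PR): a wall near an intersection corner hides a cell from base_link, but not from a sensor frame mounted farther forward.

```python
# Minimal 2D occlusion check (illustrative geometry only, not from this PR).
# A wall segment at x = WALL_X spanning y in [WALL_Y_MIN, WALL_Y_MAX] stands
# in for a building corner at an intersection. A target cell is occluded
# from a viewpoint if the line of sight crosses the wall segment.

WALL_X = 4.0
WALL_Y_MIN = 1.2
WALL_Y_MAX = 10.0

def occluded(viewer, target):
    vx, vy = viewer
    tx, ty = target
    if (vx - WALL_X) * (tx - WALL_X) >= 0:
        return False  # viewer and target are on the same side of the wall plane
    t = (WALL_X - vx) / (tx - vx)   # ray parameter where x = WALL_X is crossed
    y = vy + t * (ty - vy)
    return WALL_Y_MIN <= y <= WALL_Y_MAX

base_link = (0.0, 0.0)   # grid origin at the rear axle
top_lidar = (2.0, 0.0)   # hypothetical sensor mounted 2 m forward

target = (8.0, 3.0)      # a cell past the corner
print(occluded(base_link, target))  # True: hidden behind the corner
print(occluded(top_lidar, target))  # False: visible from the forward frame
```

Shifting the ray origin forward by even a small offset changes which cells fall in the occlusion shadow, which is exactly the effect plan A exploits.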

Definition of done

[TBD]

Should be confirmed in scenarios involving right turns at intersections.

soblin commented 1 year ago

In planning modules, the main customers for occupancy grid map are:

This proposal may be especially useful for the intersection module.

My remark is that the best plan depends on the sensor configuration. For example, if a better solid-state LiDAR is attached to the front of the vehicle, its field of view in a specific direction may be better than that of the top LiDAR.

VRichardJP commented 1 year ago

Although it might be more tedious to implement, plan B looks like the only valid option to me: if a vehicle has multiple sensors in several places, it is because visibility is not the same everywhere. For small vehicles such as cars, the top LiDAR is indeed the one with the widest/farthest FOV, but for larger vehicles (minibus, shuttle, bus, truck, etc.), no single sensor can clearly see everything around the vehicle.

miursh commented 1 year ago

I suppose plan B is more suitable for expressing sensor FoV. However, it would be quite complicated to implement, and I believe there are several design points. For example, isn't it difficult to distinguish "out of sensor FoV" from "free space" using only a limited-FoV sensor, since both appear as regions with no return points?
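One way to handle the "out of FoV vs. free space" ambiguity is to make unknown an explicit cell state, so each per-sensor grid only commits to free/occupied inside its own FoV, and a fusion step defers to whichever sensor actually observed a cell. A rough sketch under assumed state encodings (the values and the pessimistic tie-breaking rule are illustrative, not the Autoware implementation):

```python
import numpy as np

# Hypothetical cell states (illustrative encoding, not the Autoware one):
UNKNOWN, FREE, OCCUPIED = -1, 0, 100

def fuse(grids):
    """Fuse per-sensor grids: a cell outside one sensor's FoV (UNKNOWN)
    defers to any sensor that observed it; conflicting observations
    resolve pessimistically toward OCCUPIED."""
    fused = np.full_like(grids[0], UNKNOWN)
    for g in grids:
        known = g != UNKNOWN
        # take a value if the fused cell is still unknown, or if this
        # sensor reports a higher (more occupied) state
        take = known & ((fused == UNKNOWN) | (g > fused))
        fused[take] = g[take]
    return fused

# Two sensors with partially overlapping FoVs over four cells:
top_lidar = np.array([UNKNOWN, FREE, OCCUPIED, UNKNOWN])
front_lidar = np.array([FREE, UNKNOWN, FREE, UNKNOWN])
print(fuse([top_lidar, front_lidar]).tolist())  # [0, 0, 100, -1]
```

A cell no sensor covers stays UNKNOWN after fusion, which is exactly the distinction a single limited-FoV sensor cannot express on its own.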

taikitanaka3 commented 1 year ago

@soblin I think generating the grid map only in the driver frame is also OK. I don't know of any module that requires the grid map in the base_link frame.

YoshiRi commented 1 year ago

I now agree with VRichardJP and miursh that plan B should be the final solution. However, it requires a lot of changes, and there are also concerns about computational load.

So I have implemented plan A as an interim solution. @soblin, could you check whether this PR improves our planning scenarios?

stale[bot] commented 1 year ago

This pull request has been automatically marked as stale because it has not had recent activity.

idorobotics commented 1 year ago

@YoshiRi what is the current status for this issue?

YoshiRi commented 1 year ago

@idorobotics Sorry, this matter is currently on hold due to other higher-priority tasks. The latest status is in DevelopmentAboutOccupancyGridMapFusion.pdf

remaining tasks

OGM fusion will be available after merging the following two PRs.

related PRs

PRs

stale[bot] commented 10 months ago

This pull request has been automatically marked as stale because it has not had recent activity.

YoshiRi commented 4 months ago

All features have been merged and successfully tested. Also see the related discussion: https://github.com/orgs/autowarefoundation/discussions/4158#discussioncomment-8664198.