Open sah-huawei opened 2 years ago
If the Mission has a `PositionalGoal`, we could also do a similar thing with it (independently of whether we're rendering the route roads): that is, render the goal region in a slightly different colour on the BEV map.
Consider doing this when Issue #1489 is addressed.
This seems like a decent approach. My assumption here is that the GLB generation belongs there because the mission is usually resolved when the map geometry is generated. I am not sure it needs to be done quite that sparingly, but it should definitely happen at low frequency after the first time, since the need is long-term rather than immediate.
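One low-frequency scheme along those lines, sketched here with invented names (nothing below is existing SMARTS API), would cache the generated route geometry and rebuild it only when the route itself changes:

```python
from typing import Callable, Optional, Tuple


class RouteOverlayCache:
    """Regenerate the route overlay only when the route actually changes.

    Purely illustrative: `route_key` would be something cheap and stable,
    e.g. the tuple of road-ids in the mission's planned route.
    """

    def __init__(self, generate: Callable[[Tuple], str]):
        self._generate = generate  # the expensive step, e.g. writing a GLB file
        self._key: Optional[Tuple] = None
        self._glb_path: Optional[str] = None

    def glb_for(self, route_key: Tuple) -> str:
        if route_key != self._key:  # only rebuild on an actual route change
            self._glb_path = self._generate(route_key)
            self._key = route_key
        return self._glb_path
```

With this, the per-step cost is a tuple comparison; the GLB is only regenerated when a new mission (or replanned route) produces a different key.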
If/when we do this, it would be a good idea to have the overlay honour any `Via` points in the mission as well, which means restricting the overlay to just the lane specified in the `Via` around that offset.
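A rough sketch of what honouring `Via` points could mean. The data shape and helper below are invented for illustration (SMARTS' actual `Via` type differs), but the idea is: near a `Via`'s offset, tint only its lane; elsewhere, tint every route lane.

```python
from dataclasses import dataclass
from typing import List, Sequence


@dataclass(frozen=True)
class Via:
    """Hypothetical minimal Via: a lane to pass through at an offset along a road."""
    road_id: str
    lane_index: int
    offset: float  # distance along the road, in metres


def overlay_lanes(
    road_id: str,
    offset: float,
    all_lane_indices: Sequence[int],
    vias: Sequence[Via],
    window: float = 20.0,
) -> List[int]:
    """Lane indices to tint on `road_id` near position `offset` along it.

    Within `window` metres of a Via's offset on this road, restrict the
    overlay to just that Via's lane; otherwise tint every route lane.
    """
    near = [
        v.lane_index
        for v in vias
        if v.road_id == road_id and abs(v.offset - offset) <= window
    ]
    return near or list(all_lane_indices)
```

The `window` width is an arbitrary choice here; in practice it would probably come from the map or the mission spec.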
**Is your feature request related to a problem? Please describe.**
Many agents use birds-eye-view (BEV) camera inputs, centered on themselves, to do local motion planning, trajectory creation, and collision avoidance. However, there is no information in such images for (global) mission planning and routing. To put it simply: agents do not (cannot) know where they are going!
Thus, with our existing interfaces, more comprehensive agents must currently supplement their network inputs with a potential hodge-podge of other information types in order to also achieve mission planning/routing.
**Describe the solution you'd like**
Produce BEV images that, in addition to an egocentric representation of the map and traffic vehicles, also show the desired route for the agent (if one has been assigned in the Scenario via a Mission plan). This can be done by creating a semi-transparent overlay on the roads that are part of its route, such that they appear in a slightly different colour than other, non-route roads. (For example, if roads are normally grey, route roads could have a golden tint to them: "follow the yellow-brick road!")
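The colour treatment itself is just an alpha blend. A minimal, dependency-free sketch (the golden tint value, the alpha, and the nested-list image shape are all assumptions for illustration, not SMARTS code):

```python
from typing import List, Sequence, Tuple

RGBPixel = Tuple[int, int, int]


def tint(rgb: RGBPixel, overlay_rgb: RGBPixel = (255, 215, 0), alpha: float = 0.3) -> RGBPixel:
    """Blend a golden tint into one pixel: out = (1 - alpha) * base + alpha * tint."""
    return tuple(round((1 - alpha) * c + alpha * o) for c, o in zip(rgb, overlay_rgb))


def apply_route_overlay(
    image: List[List[RGBPixel]],
    route_mask: Sequence[Sequence[bool]],
    alpha: float = 0.3,
) -> List[List[RGBPixel]]:
    """Tint only the pixels covered by the route mask; leave the rest untouched."""
    return [
        [tint(px, alpha=alpha) if on else px for px, on in zip(row, mrow)]
        for row, mrow in zip(image, route_mask)
    ]
```

In the actual renderer this blend would happen on the GPU via the semi-transparent overlay node rather than per-pixel in Python; the sketch just pins down the intended visual effect.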
**Describe alternatives you've considered**
Agents could use the existing `mission` field in the `EgoVehicleObservation` class, which contains a Python list of the road-ids of the mission's planned route. But these may be meaningless to the agent's model if it is just an image network (e.g., a CNN) -- there is no easy way to coordinate the road-ids with the map.

**Suggested Approach**
First, add a boolean field called something like `include_route_overlay` (default value `False`) to the `RGB` class in `agent_interface.py`.

Next, add something like
`ROUTE_HIDE` to the `RendererMasks` class in `masks.py`. Optionally include this in the default mask used to initialize the `RGBSensor` class, depending on the value of the `include_route_overlay` field. (Of course, for other camera types, `ROUTE_HIDE` should always be set.)

Update the
`RoadMap` class to add a method like `RoadMap.Route.to_glb(self, at_path: str)`, i.e., add it to the `Route` class. Implement this very similarly to the existing `RoadMap.to_glb()` method for each of the supported map types, but, of course, only include polygons for the roads in the route. (We might also consider adding a property like `glb_path` to `RoadMap.Route` for use in the next step, depending on how the files are managed.)

Finally, update the existing camera and (Panda3D) rendering code in
`renderer.py` to add another node (`route_overlay_np`) during `setup()` that is hideable by `ROUTE_HIDE` and created from the route's GLB file.
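The mask plumbing for these steps might look like the following sketch. `ROUTE_HIDE` and `include_route_overlay` are the names proposed above; the flag values, the helper, and the Panda3D wiring in the comment are assumptions, not the existing `masks.py` contents:

```python
from enum import IntFlag


class RendererMasks(IntFlag):
    """Illustrative bitmask in the spirit of masks.py (values assumed)."""
    NONE = 0
    ROUTE_HIDE = 1  # cameras with this bit set do not see the route overlay


def default_mask(include_route_overlay: bool) -> RendererMasks:
    """Default RGBSensor mask: drop ROUTE_HIDE only when the overlay was requested."""
    return RendererMasks.NONE if include_route_overlay else RendererMasks.ROUTE_HIDE


# In renderer.setup(), the route node would then be hidden from any camera
# whose mask includes ROUTE_HIDE, along the lines of (Panda3D-style):
#   route_overlay_np.hide(BitMask32(RendererMasks.ROUTE_HIDE))
```

Non-RGB camera types (depth, occupancy, etc.) would always keep `ROUTE_HIDE` set, per the note above, so only overlay-enabled RGB cameras ever render the route geometry.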