leggedrobotics / viplanner

ViPlanner: Visual Semantic Imperative Learning for Local Navigation
https://leggedrobotics.github.io/viplanner.github.io/

How to modify the simulation map #16

Open Ruihyw opened 2 weeks ago

Ruihyw commented 2 weeks ago

Hi! Thank you for the great work of your team! I have reconstructed the cost map of CARLA Town7 and trained the model, but I don't know how to test it. How can I modify `carla.config` to achieve the same effect as Town1 in the demo program `viplanner_demo.py`? Looking forward to your reply!

pascal-roth commented 2 weeks ago

Hi, thanks for using our work. Follow these steps:

  1. Export the town to USD (see the Omniverse Unreal Engine plugin).
  2. Once you have the USD, update the path in the config and adjust the start and goal points of the planner.
  3. Furthermore, for the town, you have to map the meshes to semantic classes. This can be done by defining a YAML file similar to the one for town 01. The meshes are then traversed by their name, and if a name contains any of the defined string sections, the corresponding class is assigned (see the sketch below).

Once you have done these steps, you can replicate the demo. If there are any more questions, please let us know.
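For step 3, a minimal sketch of that name matching, assuming the mapping is loaded from a YAML file; the file name and its keys here are hypothetical placeholders, so use the town 01 file shipped with the repo as the actual template:

```python
# Hypothetical sketch of matching mesh names to semantic classes by substring.
import yaml

with open("town7_meshes.yaml") as f:  # hypothetical file name
    mesh_to_class = yaml.safe_load(f)  # e.g. {"sidewalk": ["Sidewalk", "Curb"], ...}

def assign_class(mesh_name: str) -> str:
    """Return the semantic class whose string sections appear in the mesh name."""
    for sem_class, name_sections in mesh_to_class.items():
        if any(section.lower() in mesh_name.lower() for section in name_sections):
            return sem_class
    return "static"  # fallback: everything unknown

print(assign_class("SM_Sidewalk_03"))  # -> "sidewalk"
```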

Ruihyw commented 2 weeks ago


Thank you very much! I'll try it.

Ruihyw commented 2 weeks ago

[screenshot] Sorry to bother you again, I still have some questions:

1. When I reconstruct the environment, the resulting `cloud.ply` is less than 3 MB and looks like the picture above. Is this correct?
2. I changed the RGB color value of each class in `viplanner_sem_meta.py` to match my semantic data in Town7, but when I run `cost_builder.py`, less than 1% of the points match and the generated cost map looks strange. [screenshot]
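For debugging the low match rate, a minimal sketch, assuming the cloud is readable with Open3D (which stores colors normalized to [0, 1]) and that the matching requires exact RGB values; the color subset below is only illustrative:

```python
# Hypothetical diagnostic: fraction of points in cloud.ply whose color
# exactly matches one of the semantic colors. Rounding after the
# [0, 1] -> [0, 255] conversion can already break exact matches.
import numpy as np
import open3d as o3d

meta_colors = np.array([[244, 35, 232], [128, 64, 128], [107, 142, 35]])  # illustrative subset

cloud = o3d.io.read_point_cloud("cloud.ply")
colors = np.rint(np.asarray(cloud.colors) * 255.0).astype(int)  # back to 0-255 RGB

# (N, 1, 3) == (1, M, 3) -> (N, M) exact-match table, reduced per point
matched = (colors[:, None, :] == meta_colors[None, :, :]).all(-1).any(-1)
print(f"{matched.mean():.1%} of points match a meta color")
```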

Ruihyw commented 2 weeks ago

So, if I want to define the cost of some new objects, what should I do? Looking forward to your reply!

Ruihyw commented 2 weeks ago

This is the cost map: [screenshot]

Ruihyw commented 2 weeks ago

I just changed `VIPLANNER_SEM_META` and `cls_order`:

```python
VIPLANNER_SEM_META = [
    # TRAVERSABLE SPACE ###
    # traversable intended
    {
        "name": "sidewalk",  # check
        "loss": TRAVERSABLE_INTENDED_LOSS,
        "color": [244, 35, 232],
        "ground": True,
    },
    {
        "name": "crosswalk",  # check
        "loss": TRAVERSABLE_INTENDED_LOSS,
        "color": [157, 234, 50],
        "ground": True,
    },
    {
        "name": "sand",  # check
        "loss": TRAVERSABLE_INTENDED_LOSS,
        "color": [152, 251, 152],
        "ground": True,
    },
    {
        "name": "road",  # check
        "loss": TRAVERSABLE_INTENDED_LOSS,
        "color": [128, 64, 128],
        "ground": True,
    },
    # traversable not intended
    {
        "name": "terrain",
        "color": [255, 255, 0],
        "loss": TERRAIN_LOSS,
        "ground": True,
    },
    # OBSTACLES ###
    # human
    {
        "name": "person",  # check
        "color": [255, 0, 0],
        "loss": OBSTACLE_LOSS,
        "ground": False,
    },
    {
        "name": "anymal",  # truck, check
        "color": [0, 0, 70],
        "loss": OBSTACLE_LOSS,
        "ground": False,
    },
    # vehicle
    {
        "name": "vehicle",  # check
        "color": [0, 0, 142],
        "loss": OBSTACLE_LOSS,
        "ground": False,
    },
    {
        "name": "motorcycle",  # check
        "color": [0, 0, 230],
        "loss": OBSTACLE_LOSS,
        "ground": False,
    },
    {
        "name": "bicycle",  # check
        "color": [119, 11, 32],
        "loss": OBSTACLE_LOSS,
        "ground": False,
    },
    # construction
    {
        "name": "building",  # check
        "loss": OBSTACLE_LOSS,
        "color": [70, 70, 70],
        "ground": False,
    },
    {
        "name": "bridge",  # barn, check
        "loss": OBSTACLE_LOSS,
        "color": [170, 120, 50],
        "ground": False,
    },
    {
        "name": "fence",  # check
        "color": [190, 153, 153],
        "loss": OBSTACLE_LOSS,
        "ground": False,
    },
    # object
    {
        "name": "pole",  # check
        "color": [153, 153, 153],
        "loss": OBSTACLE_LOSS,
        "ground": False,
    },
    {
        "name": "traffic_sign",  # check
        "color": [220, 220, 0],
        "loss": OBSTACLE_LOSS,
        "ground": False,
    },
    {
        "name": "traffic_light",  # check
        "color": [250, 170, 30],
        "loss": OBSTACLE_LOSS,
        "ground": False,
    },
    {
        "name": "bench",  # ware, check
        "color": [55, 90, 80],
        "loss": OBSTACLE_LOSS,
        "ground": False,
    },
    # nature
    {
        "name": "vegetation",  # check
        "color": [107, 142, 35],
        "loss": TERRAIN_LOSS,
        "ground": False,
    },
    {
        "name": "water_surface",  # check
        "color": [45, 60, 150],
        "loss": OBSTACLE_LOSS,
        "ground": True,
    },
    # sky
    {
        "name": "sky",  # check
        "color": [70, 130, 180],
        "loss": OBSTACLE_LOSS,
        "ground": False,
    },
    {
        "name": "background",  # mountain, check
        "color": [55, 90, 80],
        "loss": OBSTACLE_LOSS,
        "ground": False,
    },
    # void outdoor
    {
        "name": "dynamic",
        "color": [32, 0, 32],
        "loss": OBSTACLE_LOSS,
        "ground": False,
    },
    {
        "name": "static",  # also everything unknown
        "color": [0, 0, 0],
        "loss": OBSTACLE_LOSS,
        "ground": False,
    },
]

cls_order = [
    ["sky", "background", "dynamic", "static"],
    ["building", "fence", "water_surface", "bridge"],
    ["pole", "traffic_light", "traffic_sign", "bench"],
    ["terrain", "vegetation"],
    ["sidewalk", "crosswalk", "sand", "road"],
    ["person", "anymal", "vehicle", "motorcycle", "bicycle"],
]
```

pascal-roth commented 2 weeks ago

Hi, no, the reconstruction does not look correct. The mesh should be nicely recognizable. Do the rendered images look as expected, and do the poses make sense? Given this environment reconstruction (basically just the lines), the strange cost map makes sense, as the builder can only recognize these lines and fill in the rest of the space.

I would suggest double-checking how the images were created and that they are generated densely over the entire map so that the reconstruction results look like the mesh.
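As a quick check of the coverage, one could scatter the recorded camera positions over the map footprint; a minimal sketch, assuming the poses were saved as one `x y z ...` row per image (the file name and format are placeholders and may differ from your setup):

```python
# Hypothetical coverage check: plot camera positions to verify the images
# cover the whole map densely rather than a single road.
import numpy as np
import matplotlib.pyplot as plt

poses = np.loadtxt("camera_extrinsic.txt")  # assumed: one "x y z ..." row per image
plt.scatter(poses[:, 0], poses[:, 1], s=1)
plt.xlabel("x [m]")
plt.ylabel("y [m]")
plt.title("Camera positions over the town")
plt.show()
```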

Ruihyw commented 2 weeks ago


Thank you very much for your patient reply! I still have two questions:

1. I placed a self-driving vehicle inside Carla UE4 to collect the dataset, which means the vehicle only drives along the road while taking pictures. Could this be the reason the reconstruction fails?
2. I noticed that the reconstructed lines are actually blue sky and some green vegetation. Does this mean the reconstruction is affected by the semantics-related settings, and if so, what modifications do I need to make?

pascal-roth commented 2 weeks ago

Did you adjust all the camera parameters for your recorded data? If they don't fit, it is possible that your reconstruction does not give meaningful results. I would propose exporting the assets to Omniverse and then doing the data collection there with the defined sensor parameters.
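To see why mismatched intrinsics break the reconstruction, a hedged pinhole back-projection sketch; all parameter values below are placeholders, not the repo's defaults:

```python
# Back-project a depth image with assumed intrinsics: if fx, fy, cx, cy do
# not match the recorded images, the resulting points are systematically
# distorted, exactly like a failed reconstruction.
import numpy as np

fx, fy, cx, cy = 320.0, 320.0, 320.0, 240.0        # placeholder intrinsics
depth = np.random.uniform(1.0, 10.0, (480, 640))   # stand-in for a real depth image

v, u = np.mgrid[0:depth.shape[0], 0:depth.shape[1]]
x = (u - cx) / fx * depth
y = (v - cy) / fy * depth
points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)  # camera-frame points
print(points.shape)
```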

Ruihyw commented 2 weeks ago


Yes, I obtained the intrinsic and extrinsic camera parameters and stored them in the correct paths. [screenshots of the parameter files]

pascal-roth commented 1 week ago

I don't know how it fails then. I would propose taking two depth cameras close to each other and doing the reconstruction to check whether that looks correct.

Ruihyw commented 1 week ago


Hi! Thank you for your patience! I solved the problem of the 3D reconstruction containing only trajectories. I changed this line of code [screenshot] to `img_array = cv2.imread(img_path, cv2.IMREAD_ANYDEPTH)`. Now the reconstruction has content but is still wrong; could it be due to the scale factor?

[screenshot of the current reconstruction result]

And considering that the reconstruction results are still bad, could it be because the two neighboring images I used are only 0.02 s apart? Should I extend the sampling interval? [screenshot] Looking forward to your reply!
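Regarding the scale-factor question: a quick check is to print the raw value range of one depth image. Many exporters store depth as 16-bit integers in millimeters while the reconstruction expects meters; the factor of 1000 below is an assumption to verify, not a given:

```python
# Inspect the raw depth encoding before deciding on a scale factor.
import cv2

depth_raw = cv2.imread("0000.png", cv2.IMREAD_ANYDEPTH)  # hypothetical file name
print("dtype:", depth_raw.dtype, "min/max:", depth_raw.min(), depth_raw.max())

depth_m = depth_raw.astype("float32") / 1000.0  # assumed mm -> m conversion
print("range in meters:", depth_m.min(), depth_m.max())
```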