carla-simulator / carla

Open-source simulator for autonomous driving research.
http://carla.org
MIT License

[Carla 0.9.1] The actors' models are simplified in lidar camera? #1010

Closed bigsheep2018 closed 5 years ago

bigsheep2018 commented 5 years ago

Hello,

As shown in the following two pictures, which were exported from the same scene: [image] [image]

It seems the car model (near the bike) is simplified to several boxes, and the bike is simplified to a single box in the lidar sensor's view. Is there any method that could provide more realistic lidar points in Carla 0.9.1?

Thanks in advance.

nsubiron commented 5 years ago

Hi @bigsheep2018, thanks for reporting. Yes, this is a known issue, #423.

bigsheep2018 commented 5 years ago

Hello @nsubiron, thanks for referring to #423. I have checked issue #423 and tried to modify the physics asset. For instance, I modified the Audi A2's physics asset: Content/Carla/Blueprints/Vehicles/AudiA2/Vh_Car_AudiA2_PhysicsAsset

Tools tab -> Primitive Type: Multi Convex Hull. With this I get a more accurate lidar point cloud of the AudiA2 car (at the center, near the small box). [image]

My question is: is it possible to get an even more accurate body, maybe by adding more convex hulls or other settings in Carla or UE4? (A multi-convex hull might not be accurate enough, as the car model may not be a convex shape.)

Thanks for your patience.

nsubiron commented 5 years ago

Hi @bigsheep2018,

Ideally we would like to detect the mesh instead of the physics asset, but we haven't found a way to do it. Making the physics assets more complex will make all the physics computations heavier.

As for how to change the PhysicsAsset, maybe someone from @carla-simulator/art knows an easier way?

TheNihilisticRobot commented 5 years ago

Hi @bigsheep2018

I haven't found a more appealing way to create a shape-accurate physics mesh than the one you mention. Static meshes have the option to use their geometry as collision, or to import a collision mesh created in other modelling software, but PhysicsAssets are not really meant to have that kind of precision (as @nsubiron said, probably because of the performance cost).

I think the option that would give you the best results is to recreate the vehicle's body using the simple bodies Unreal uses for collisions, or maybe a combination of automatically generated convex shapes and manually created details. You can adjust some parameters, like the maximum number of bodies and the maximum number of vertices, when creating convex bodies (those options are hidden inside the Tools tab). You can modify the created bodies like any other model and discard them if they don't convince you. But the Physics Asset editor is really limited, and getting a perfect recreation of the model using either of those methods would require quite a chunk of working time (but it is possible).

That being said, be careful when modifying the physics asset yourself: the vehicle blueprint relies heavily on it to function, and you might need to rearrange things like mass and centre of mass in there for it to work properly.

bigsheep2018 commented 5 years ago

@nsubiron @TheNihilisticRobot @carla-simulator/art Thanks for the reply. I found this video, but I have not tried it yet because I do not have any 3D design software on hand. Could you please check whether it is possible to get a better PhysicsAsset in Carla + UE4? If yes, then at least we could import customized models with a more accurate collision (physics) skeletal mesh.

TheNihilisticRobot commented 5 years ago

Well, there's something I didn't know.

I'll have to test that option myself, but it seems promising. I'll have to check whether that setup works with Unreal's vehicle system, what precision we could achieve before the engine starts to slow down, and how to balance the cost of implementing custom, complex, hand-made collisions against how far we are from using the visibility channel for the lidar.

Two small things I've noticed:

analog-cbarber commented 5 years ago

Instead of using the lidar sensor, you can use the depth camera sensor, which gives the actual z-depth to the mesh at each pixel, as the basis for generating a point cloud. If you need the full 360-degree point cloud, you would need to create multiple depth cameras to cover the full range, but many applications can get away with just the front view.
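
For reference, a minimal sketch of that approach against the 0.9.x Python API might look like the following. The sensor name, attributes, and depth decoding follow the CARLA documentation; the camera transform is a placeholder, and `vehicle` is assumed to be an actor you have already spawned.

```python
import numpy as np
import carla


def depth_to_points(image):
    """Convert a carla.Image from 'sensor.camera.depth' into an Nx3 array of
    points in the camera frame (x right, y down, z forward)."""
    # Raw data is BGRA, 8 bits per channel.
    bgra = np.frombuffer(image.raw_data, dtype=np.uint8)
    bgra = bgra.reshape((image.height, image.width, 4)).astype(np.float64)
    b, g, r = bgra[:, :, 0], bgra[:, :, 1], bgra[:, :, 2]
    # Decode depth in meters as documented for the depth camera.
    depth = 1000.0 * (r + g * 256.0 + b * 256.0 ** 2) / (256.0 ** 3 - 1.0)
    # Pinhole intrinsics derived from the horizontal field of view.
    focal = image.width / (2.0 * np.tan(np.radians(float(image.fov)) / 2.0))
    cx, cy = image.width / 2.0, image.height / 2.0
    u, v = np.meshgrid(np.arange(image.width), np.arange(image.height))
    x = (u - cx) * depth / focal
    y = (v - cy) * depth / focal
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)


client = carla.Client('localhost', 2000)
world = client.get_world()
bp = world.get_blueprint_library().find('sensor.camera.depth')
bp.set_attribute('image_size_x', '800')
bp.set_attribute('image_size_y', '600')
bp.set_attribute('fov', '90')
# 'vehicle' is assumed to be an already-spawned actor to attach the camera to.
camera = world.spawn_actor(bp, carla.Transform(carla.Location(x=1.5, z=2.0)),
                           attach_to=vehicle)
camera.listen(lambda image: np.save('points.npy', depth_to_points(image)))
```

For full 360-degree coverage you would spawn several such cameras with rotated transforms and merge their point clouds after transforming them into a common frame.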

bigsheep2018 commented 5 years ago

Hello @TheNihilisticRobot,

I have noticed it too. I guess there will be a couple of people checking Carla in similar situations.

It would help a lot if users could balance model complexity and game performance themselves. Thank you very much for checking it, and I'm looking forward to your reply.

bigsheep2018 commented 5 years ago

Hello @analog-cbarber, thanks for your advice. I will try the depth camera later. The lidar simulation is much more important in my situation, as I need to test my model with a real lidar device. Thus, the quality and realism of the lidar data are essential.

analog-cbarber commented 5 years ago

If you care about quality, then you will also want one based on actual mesh distances. You will also want a realistic intensity and noise model for the LIDAR.

bigsheep2018 commented 5 years ago

@analog-cbarber That is true. I am currently working on creating a new lidar sensor based on the Carla platform, with my own intensity simulation (color, material, distance, angle between the laser and the vertex normal, etc.), a noise model, and maybe multiple lidar devices.
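
For illustration only, a toy version of such an intensity model might look like the sketch below; the Lambertian cosine falloff, inverse-square range attenuation, and Gaussian noise parameters are my own assumptions, not the actual implementation described above.

```python
import numpy as np


def simulated_intensity(albedo, distance_m, ray_dir, surface_normal,
                        noise_std=0.01, rng=None):
    """Toy lidar return intensity: Lambertian reflection attenuated by
    inverse-square range, plus additive Gaussian noise."""
    rng = rng or np.random.default_rng()
    ray_dir = np.asarray(ray_dir, dtype=float)
    surface_normal = np.asarray(surface_normal, dtype=float)
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    surface_normal = surface_normal / np.linalg.norm(surface_normal)
    # Cosine of the incidence angle between the reversed ray and the surface normal.
    cos_incidence = max(0.0, float(np.dot(-ray_dir, surface_normal)))
    intensity = albedo * cos_incidence / max(distance_m, 1e-3) ** 2
    return max(0.0, intensity + rng.normal(0.0, noise_std))


# Example: a surface with albedo 0.8, 20 m away, hit almost head-on.
print(simulated_intensity(0.8, 20.0, ray_dir=[1.0, 0.0, -0.05],
                          surface_normal=[-1.0, 0.0, 0.0]))
```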

bigsheep2018 commented 5 years ago

Hello @TheNihilisticRobot, have you tried that link? May I know whether it is possible to import a customized mesh for lidar sensing?

Thanks in advance.

barbierParis commented 5 years ago

> @analog-cbarber That is true. I am currently working on creating a new lidar sensor based on the Carla platform, with my own intensity simulation (color, material, distance, angle between the laser and the vertex normal, etc.), a noise model, and maybe multiple lidar devices.

Hello @bigsheep2018, I wanted to know whether you have made any progress on your new lidar model, and whether you would eventually be open-sourcing your sensor?

bigsheep2012 commented 5 years ago

Hello @barbierParis,

The PhysX engine built into UE4 can return basically everything you need for calculating custom parameters (color, material, distance, normal, etc.) at the hit point; there is nothing complicated once you look into RayCastLidar.cpp. My implementation is paused because the collision mesh is simplified to several boxes, and rendering becomes extremely slow if you add more computation to the ray-cast lidar sensor, since it is CPU based.

barbierParis commented 5 years ago

Okay, thanks for the input!

barbierParis commented 5 years ago

Hey @bigsheep2018 ,

I have some follow-up questions, and you seem to have a deep understanding of the subject. I would like to get some annotated lidar data. I found this forked version of CARLA (Link) which does the job. It was forked from the stable release, so it seems that it uses the old Python API. My question is: how can I port this to the latest version? It seems to me that it's a little more complicated than just copy-pasting the lidar C++ file.

bigsheep2018 commented 5 years ago

Hello @barbierParis ,

By "stable release", do you mean a Carla version < 0.9.0? If yes, I am afraid I am not able to help. As far as I know, the implementation of Carla < 0.9.0 is totally different from Carla >= 0.9.0, though I have not checked those old versions. I do not think you need to migrate between these versions: the PythonAPI in the newer versions should be able to provide enough information for labels, and you can write code on the Python side to build custom labels.

You cannot just copy and paste files directly into an Unreal project. If you insist on doing this, you may need to check Unreal's official tutorials.

If you are familiar with Unity3D, C#, and Windows, you can try the LG simulator as well. It has real collision meshes, but a similarly slow, CPU-based lidar sensor. If you are comfortable with ROS and not willing to change the default lidar setup, you can basically get all the output, including annotations, from their ROS package and try to build your own labels offline.

barbierParis commented 5 years ago

Thanks for taking the time to answer, @bigsheep2018. The LG simulator does sound interesting, but I'm not familiar with Unity or C#.

Yes! The stable release is at version 0.8.x.

I don't really want to copy-paste anything. I'm just trying to figure out how to get semantic labels (road, car, pedestrian, ...) for each point in the lidar sensor's point cloud. I've checked the docs, and it seems that labels aren't provided out of the box (for the lidar sensor at least), so I'm a bit lost on how to get these. In version 0.8.x you had to modify some C++ and Python files to get the labels, but I'm a bit clueless about how to get the labels in 0.9.x. Do you have any idea?

bigsheep2012 commented 5 years ago

Hello @barbierParis,

For point-wise semantic labels, you may need to write the code yourself: the lidar data obtained from Carla does not contain any semantic labels. In my opinion, the most straightforward way is to extract all annotations from Carla and check whether each point belongs to any object.

Or you can check the semantic segmentation sensor's C++ implementation, which I think might be helpful.
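
As a rough sketch of the bounding-box variant of that idea (assuming a 0.9.x API where carla.Transform exposes get_matrix()/get_inverse_matrix(); the category-to-label mapping here is made up for illustration):

```python
import numpy as np
import carla


def label_points(world, lidar_sensor, points_sensor_frame):
    """Assign a coarse class label to each lidar point (Nx3, sensor frame) by
    testing whether it falls inside any vehicle or walker bounding box.
    Depending on the CARLA version, the lidar's axis conventions may need an
    extra flip before this transform."""
    # Lidar points -> world frame (homogeneous coordinates).
    sensor_to_world = np.array(lidar_sensor.get_transform().get_matrix())
    pts_h = np.c_[points_sensor_frame, np.ones(len(points_sensor_frame))]
    pts_world = (sensor_to_world @ pts_h.T).T

    labels = np.zeros(len(points_sensor_frame), dtype=np.int32)  # 0 = unlabeled
    categories = {'vehicle.*': 1, 'walker.pedestrian.*': 2}      # illustrative mapping
    for pattern, label in categories.items():
        for actor in world.get_actors().filter(pattern):
            # World frame -> actor's local frame.
            world_to_actor = np.array(actor.get_transform().get_inverse_matrix())
            local = (world_to_actor @ pts_world.T).T[:, :3]
            # The bounding box center is an offset in the actor's local frame.
            bb = actor.bounding_box
            local -= np.array([bb.location.x, bb.location.y, bb.location.z])
            inside = ((np.abs(local[:, 0]) <= bb.extent.x) &
                      (np.abs(local[:, 1]) <= bb.extent.y) &
                      (np.abs(local[:, 2]) <= bb.extent.z))
            labels[inside] = label
    return labels
```

Anything not covered by an actor's bounding box (road, buildings, vegetation) would need a different source of labels, e.g. the semantic segmentation sensor mentioned above.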

botcs commented 4 years ago

I think this is a beautiful workaround that can also be used to assign semantic segmentation labels: http://webcache.googleusercontent.com/search?q=cache:RvQdQL0XdBEJ:www.chikki.se/blog/+&cd=1&hl=en&ct=clnk&gl=hu

Akash-Kumbar commented 10 months ago

> I think this is a beautiful workaround that can also be used to assign semantic segmentation labels: http://webcache.googleusercontent.com/search?q=cache:RvQdQL0XdBEJ:www.chikki.se/blog/+&cd=1&hl=en&ct=clnk&gl=hu

@botcs Hey can you share this link again?