carla-simulator / carla

Open-source simulator for autonomous driving research.
http://carla.org
MIT License

Instance aware semantic segmentation ground truth. #76

Closed felipecode closed 2 years ago

felipecode commented 6 years ago

Currently CARLA provides semantic segmentation ground truth from the cameras placed on the vehicle. This allows the user to receive a camera image where each pixel discriminates a class instead of a RGB value.

However, all instances of each class receive the same label value. It would be interesting to also have the option of an instance segmentation image (see #74). With this, the user would have information both about the class and about the object ID.

This does not seem hard to implement since the server has information about all objects that it captures.

lgeo3 commented 6 years ago

I am trying to implement this feature for CARLA (we need it for a project), but I am not sure where to start. I am new to Unreal Engine and CARLA.

In my understanding, one solution is to use the CustomDepthStencil: we could set the custom depth stencil to a value that matches each object's unique ID.

Problems: the custom depth stencil is only 8 bits, so it can distinguish at most a few hundred objects.

Another idea is to re-paint all objects (using RGB, thus having many more possible values). But in my understanding, painting objects with the unique ID as the color would impair the RGB camera, wouldn't it? What do you think? This second solution seems to be what they do in UnrealCV, but I am not sure they managed to get the same scene with both RGB and object segmentation.

Any help would be appreciated here.

hrdisaac commented 6 years ago

In order to support more than 255 values without modifying the engine itself, is it possible to dynamically create a transparent material per static mesh and set a custom parameter on that material and somehow access that custom parameter in the post process pass?

Or any way to make use of postprocessinput3 to postprocessinput6 (marked as not usually used by Unreal) to inject the tags?

Or create two render targets and render the same material twice, so the shader can use one custom stencil value as the lower bits and the other as the higher bits, then combine them in the shader to render the final color for segmentation?

Or instead of drawing segmentation color in post process pass, find a way to draw it in other passes of the rendering pipeline?
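The two-render-target idea above boils down to combining two 8-bit stencil reads into one 16-bit object ID. As a minimal sketch of that bit arithmetic (which value carries the high byte is an arbitrary choice here):

```python
def combine_stencils(low, high):
    """Combine two 8-bit stencil values into a single 16-bit object ID.

    `low` and `high` stand for the stencil reads from the two render
    targets; the high/low assignment is an assumption of this sketch.
    """
    assert 0 <= low < 256 and 0 <= high < 256
    return (high << 8) | low  # 16-bit ID, supports 65536 objects
```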

marcgpuig commented 6 years ago

Hi @hrdisaac,

In order to support more than 255 values without modifying the engine itself, is it possible to dynamically create a transparent material per static mesh and set a custom parameter on that material and somehow access that custom parameter in the post process pass?

This would put too many polygons in the scene, hence too many draw calls and very low FPS. Also, using that many transparent materials is always bad.

Or any way to make use of postprocessinput3 to postprocessinput6 (marked as not usually used by Unreal) to inject the tags?

I have no knowledge of postprocessinput3 or postprocessinput6; maybe @juaxix has more information?

Or create two render targets and render the same material twice, so the shader can use one custom stencil value as the lower bits and the other as the higher bits, then combine them in the shader to render the final color for segmentation?

You only have one custom stencil value for each object... don't you?

Or instead of drawing segmentation color in post process pass, find a way to draw it in other passes of the rendering pipeline?

That's probably the best solution, but we need that "find a way" without modifying the engine :)

Thanks for your reply and your ideas. If you find another way to do it, we will be glad to hear it!

marcgpuig commented 5 years ago

I think one temporary solution could be to transform the 2D pixels from the image (those belonging to cars) to 3D world positions and check whether they are inside the 3D bounding box of a car. If so, these pixels are colored with a specific instance color. Not the most efficient way... but it does the trick ;)
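The inside-the-box test described above can be sketched as follows. This is a simplified, axis-aligned version (a real check would first rotate the point into the box's local frame, which is omitted here for brevity):

```python
import numpy as np

def inside_box(point, center, extent):
    """Return True if a 3D world point lies inside an axis-aligned
    bounding box given by its center and half-extents.

    Box rotation is ignored in this sketch; CARLA boxes are oriented,
    so a full check would transform `point` into the box frame first.
    """
    delta = np.abs(np.asarray(point, dtype=float) - np.asarray(center, dtype=float))
    return bool(np.all(delta <= np.asarray(extent, dtype=float)))
```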

Code-Gratefully commented 5 years ago

I think one temporary solution could be to transform the 2D pixels from the image (those belonging to cars) to 3D world positions and check whether they are inside the 3D bounding box of a car. If so, these pixels are colored with a specific instance color. Not the most efficient way... but it does the trick ;)

Two questions:

  1. Does CARLA provide functionality to translate 2D back to 3D? Would that involve ray tracing to find the first hit? Otherwise it wouldn't work for objects that overlap in the 2D image?

  2. Even if we do ray tracing, I guess that would probably work for cars but not for pedestrians, as they are more crowded and the 3D bounding box currently in CARLA is an estimate. In this case, I am trying to think of a more permanent solution (using the G&B channels?). Can you point me to the code you used to generate the red color, please? Thanks.

marcgpuig commented 5 years ago

Hi @ernestcheung!

Does carla provide functionality to translate 2D back to 3D? That would involve ray tracing to find the first hit? Otherwise it wouldn't work for overlapping (in 2D image)?

You can transform from 2D to 3D using the camera intrinsic matrix, the extrinsic matrix, and the depth sensor. This is already done and you can test it using point_cloud_example.py, so you don't actually need any ray tracing. Also, which problem do you see with overlapping?
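As a sketch of the back-projection math involved (a pinhole model with the focal length derived from the horizontal field of view, as CARLA's examples do; the 90-degree default FOV is an assumption here), the depth-to-camera-space step looks roughly like this:

```python
import numpy as np

def back_project(depth, fov_deg=90.0):
    """Back-project a depth image (meters) to camera-space 3D points.

    Pinhole model sketch: focal length is derived from the image width
    and horizontal FOV, principal point is the image center. Returns an
    (h, w, 3) array of (x, y, z) points in the camera frame.
    """
    h, w = depth.shape
    focal = w / (2.0 * np.tan(np.radians(fov_deg) / 2.0))
    cx, cy = w / 2.0, h / 2.0
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / focal
    y = (v - cy) * depth / focal
    return np.stack([x, y, depth], axis=-1)
```

Going the rest of the way to world coordinates is then just applying the camera-to-world (extrinsic) transform to each point.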

Even if we do ray tracing, I guess that would probably work for cars but not pedestrians as they are more crowded and the 3D bounding box currently in Carla is an estimate. In this case, I am trying to think of a more permanent solution (using the G&B channel?)

You can have some issues with pedestrians, it's true. Some pixels may fall outside the bounding box, because of the arm movement while walking, for instance. You can try to solve it using the nearest bounding box center (or entity position). This could generate artifacts with nearby pedestrians, but I think it is worth trying. You can save the instance ID into the G&B channels; in fact, that's why we are reserving them.
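Packing an instance ID into the G and B channels amounts to splitting a 16-bit integer across two bytes. A minimal sketch (which channel carries the low byte is an arbitrary choice in this sketch, not something the thread specifies):

```python
def encode_instance_id(instance_id):
    """Split a 16-bit instance ID into (G, B) channel bytes.

    Low byte in G, high byte in B -- an assumption of this sketch.
    """
    assert 0 <= instance_id < 65536
    return instance_id & 0xFF, (instance_id >> 8) & 0xFF

def decode_instance_id(g, b):
    """Recover the instance ID from the G and B channel bytes."""
    return g | (b << 8)
```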

Can you please point me to the code where you used to generate the red color please?

The ID (the red color) is generated in /Unreal/CarlaUE4/Plugins/Carla/Source/Carla/Game/Tagger.cpp, using the paths of the models.

Thanks.

You're welcome :)

wlemkens commented 5 years ago

I'm also looking into an instance segmentation solution. But for a generic solution, it seems to me that modifications to the core engine are inevitable. Instances of cars and pedestrians could be handled with the workaround discussed above, but if you also want instance segmentation for the road markings, a larger stencil buffer seems the best solution.

Maybe the current stencil buffer could be expanded, but that might cause problems with the existing functionality. I presume this feature will not be of enough use to the general UE4 user base for Epic Games to include it in their core?

Another option would be to port the stencil buffer Temaran wrote before the stencil buffer became part of the UE4 core. The latest version seems to be for 4.9, so it would have to be ported to 4.18.

marcgpuig commented 5 years ago

Hi @wlemkens. You are absolutely right! I was also thinking of using the vertex painting feature. It would be nice if there were some way to set a certain color for all the vertices of a mesh and read this information from the shader. I was thinking about this today but I'm a little bit busy. If someone wants to try it, make sure to share the results :)

JimmyLaessig commented 5 years ago

UnrealCV uses the override vertex color buffer to handle object IDs. It can be overridden with the ID and possibly a class label per instance of a static mesh (Instance != InstancedStaticMesh). This, combined with a SceneCapture2D where lighting and materials are disabled, allows for efficient rendering of the IDs and class labels with 32-bit precision.

1453042287 commented 5 years ago

@ernestcheung Hi, is there any progress on using the G&B channels?

togaen commented 5 years ago

This would be an extremely useful feature.

monghimng commented 4 years ago

Also a very relevant feature, as the vision community is transitioning toward more and more difficult tasks like instance segmentation.

KulkarniAnirudh26 commented 4 years ago

Hi, has anyone developed functionality for instance segmentation of road lanes in CARLA? What I would like to have is a different color for each lane type (e.g. red for solid white, blue for broken yellow, etc.). Any pointers or guidance would be helpful.

germanros1987 commented 4 years ago

@marcgpuig @doterop for visibility:

Targeting instance segmentation ground truth for 0.9.10 (July?)

jnd77 commented 3 years ago

This would indeed be a great feature. It might be a lot to ask, but can it also handle static cars (see #2343)? I am not sure static cars have an ID.

Schuck84 commented 3 years ago

Will this feature come in the next release? When is the next release planned? Is there some branch that maybe contains this feature?

dbersan commented 3 years ago

Instance semantic segmentation, and also (especially) 2D bounding boxes, are really fundamental functionalities for self-driving car development. I believe these features should be prioritized.

Schuck84 commented 3 years ago

@germanros1987 as far as I can see, there is no instance segmentation available for the camera in 0.9.10. Do you know of a branch that could be used?

mzheng27 commented 3 years ago

Hi @germanros1987 @Axel1092, does the CARLA team have any updates on instance segmentation? Any chance it will come out in the next release? Thanks!

germanros1987 commented 3 years ago

Folks, instance segmentation is not going to be part of 0.9.12. Sorry to bring bad news...

germanros1987 commented 2 years ago

The feature is now available in CARLA 0.9.13. I am closing this issue.

zxiaomzxm commented 2 years ago

The feature is now available in CARLA 0.9.13. I am closing this issue.

Hi, I downloaded CARLA 0.9.13, but I can't find the instance segmentation camera. Where is it?

Chris1nexus commented 2 years ago

@zxiaomzxm use this as the blueprint name: sensor.camera.instance_segmentation. It is documented in the latest CARLA release notes (go to https://github.com/carla-simulator/carla/releases and see the 0.9.13 section). Note that it provides the instance ID in the G&B channels and the usual Cityscapes semantic tag in the R channel (https://carla.org/2021/11/16/release-0.9.13/).
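A sketch of how a raw frame from such a camera could be split into its semantic and instance parts, assuming BGRA byte order per pixel and the instance ID's low byte in G (the exact byte order within G&B is an assumption here, not confirmed in the thread):

```python
import numpy as np

def split_instance_image(bgra):
    """Split a raw (h, w, 4) uint8 BGRA frame from an instance
    segmentation camera into a semantic-tag map and an instance-ID map.

    Assumptions of this sketch: R holds the Cityscapes semantic tag,
    and the instance ID is packed with its low byte in G, high byte in B.
    """
    b = bgra[..., 0].astype(np.uint32)
    g = bgra[..., 1].astype(np.uint32)
    semantic_tag = bgra[..., 2]          # R channel: Cityscapes class
    instance_id = g | (b << 8)           # unique ID from G & B channels
    return semantic_tag, instance_id
```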