guygo12345 opened this issue 2 years ago
Hi @guygo12345,
I am also waiting for the documentation.
But as a start: the implementation is the same as with the other cameras, e.g. the semantic segmentation camera. You just have to use sensor.camera.instance_segmentation
as its blueprint. And according to the release notes, the output is encoded like this:
Instance semantic IDs are now available embedded in the G and B channels of the RGB output of the sensor data, alongside the standard semantic IDs in the R channel.
Best, Paniac
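Based on that encoding note, here is a minimal decoding sketch in pure NumPy. It assumes the raw buffer is BGRA (as with other CARLA cameras) and that the 16-bit instance ID is split with its low byte in G and its high byte in B; that byte order is my assumption, not documented, so verify it against your own captures:

```python
import numpy as np

def decode_instance_image(raw, width, height):
    """Split a raw BGRA instance-segmentation frame into
    (semantic_tag, instance_id) arrays of shape (height, width)."""
    arr = np.frombuffer(raw, dtype=np.uint8).reshape((height, width, 4))
    b, g, r = arr[..., 0], arr[..., 1], arr[..., 2]
    semantic = r  # standard semantic ID in the R channel
    # Assumption: instance ID = G (low byte) | B (high byte)
    instance = g.astype(np.uint16) | (b.astype(np.uint16) << 8)
    return semantic, instance
```

With a live sensor you would pass `image.raw_data`, `image.width`, and `image.height` from the camera callback.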
Thanks! I tried it and managed to get the data. I still need to figure out how to parse the G and B channels together, and whether the result is the same as the 3D object IDs.
I'm also very curious about the topic.
Please tell me if you find out how the G&B channels are connected to the actor id.
Hi, I'm also interested, as the environment object ID is also different from the actor ID and the instance ID in the G and B pixels...
Could someone give advice?
Regards
Hello, does nobody know about or can help with this issue? I think the problem is that the actor IDs in the pixels come from the Unreal actor registry, while the IDs exposed over Python come from the CARLA actor registry, and there is no sync between them.
Could someone provide a fix for this?
Thanks
I think you're right. If I compare the code in RayCastSemanticLidar.cpp, they use the unique ID from a view obtained from a registry, but in Tagger.cpp they use the actor's unique ID directly.
Agreed. I am doing a project where I try to automatically annotate spawned objects, and this sensor would be very helpful for tightening the 3D bounding boxes into 2D boxes. However, since there is no link to the actor IDs you get from CARLA's world.get_actors() function, it is not very helpful.
I have the same question. If we can't get the actor's ID, we have to get every object's bounding box and match it instead.
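A sketch of that workaround: once you have an instance-ID image (however the G and B channels are decoded), you can extract a tight 2D box per instance with NumPy and then match those boxes against the projected 3D actor bounding boxes, e.g. by IoU. This is my own sketch, not an official CARLA recipe, and it assumes ID 0 marks background:

```python
import numpy as np

def instance_boxes(instance_ids):
    """Map each non-zero instance ID in a (H, W) ID image to its
    tight 2D bounding box (x_min, y_min, x_max, y_max) in pixels."""
    boxes = {}
    for iid in np.unique(instance_ids):
        if iid == 0:
            continue  # assumption: 0 = background / untagged
        ys, xs = np.nonzero(instance_ids == iid)
        boxes[int(iid)] = (int(xs.min()), int(ys.min()),
                           int(xs.max()), int(ys.max()))
    return boxes
```

The matching step itself (projecting actor.bounding_box through the camera matrix and pairing by overlap) depends on your camera setup, so it is left out here.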
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hi, has someone figured out a way to connect the actor ID and the instance ID?
Some discussion: I figured out how to get the Instance Segmentation color of vehicles
Other related issues: Instance Segmentation - Standard semantic ID only contains subset of objects
Without the ability to map actor IDs to instance semantic IDs the value of this sensor is diminished and it is, imo, counter-intuitive that they are not the same.
Hi, I want to use the new instance_segmentation sensor in CARLA 0.9.13, but there is no documentation for it. Can you please add it? Also: are the instance IDs in the image compatible with the object IDs in the world? I would like to couple each instance in the image with its instance in 3D.
Thanks,
Guy