DLR-RM / BlenderProc

A procedural Blender pipeline for photorealistic training image generation
GNU General Public License v3.0

Calculation of the percentage of obscured #945

Open TheIOne opened 1 year ago

TheIOne commented 1 year ago

Describe the issue

I want to generate an object detection dataset for an industrial scene using BlenderProc rendering. In an industrial scene, there is a strong relationship between an object's degree of occlusion and its capture priority, so I want to compute the occlusion ratio of each object while rendering COCO labels (occlusion ratio = object visible mask / object full mask). My current solution uses bop_toolkit to compute each object's projected mask from the target's bitmap and then derive its visible mask, but this requires rendering a separate bitmap per model, which is obviously inefficient. Do you have any suggestions or a better way to implement this?
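For reference, once the full (amodal) mask and the visible mask of an object are available, the ratio described above is straightforward to compute. A minimal NumPy sketch (the function and mask names are illustrative, not part of BlenderProc or bop_toolkit):

```python
import numpy as np

def visibility_ratio(full_mask: np.ndarray, visible_mask: np.ndarray) -> float:
    """Ratio of visible pixels to full (amodal) mask pixels.

    full_mask:    boolean mask of the object's full projection (no occluders).
    visible_mask: boolean mask of the object's visible pixels in the render.
    """
    full = np.count_nonzero(full_mask)
    if full == 0:
        # Object is entirely outside the view; treat as not visible.
        return 0.0
    # Intersect with the full mask so stray pixels cannot push the ratio above 1.
    return np.count_nonzero(visible_mask & full_mask) / full
```

The occlusion fraction is then simply `1.0 - visibility_ratio(...)`.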

Minimal code example

No response

Files required to run the code

No response

Expected behavior

Quickly get the proportion of each object that is occluded in the camera view after the physics simulation.

BlenderProc version

v2.4.1

cornerfarmer commented 1 year ago

If you are okay with an approximate solution, you could use ray casts. Similar to how it's done in the camera utility https://github.com/DLR-RM/BlenderProc/blob/38ea43f2b0b94e7602121100420f8ba406c49f98/blenderproc/python/camera/CameraValidation.py#L86, you send rays from the camera into its view. You do this twice: once using a BVH tree containing all objects, and once using a BVH tree containing only your target object. Then compare the distances at which the corresponding rays hit. If they hit at the same distance, the object is visible at that point; if the first ray hits earlier, another object lies in between.
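The two-pass comparison above can be sketched in a self-contained way. The toy below replaces Blender BVH trees with analytic spheres so it runs outside Blender; in an actual BlenderProc scene you would instead build the two trees with `mathutils.bvhtree.BVHTree` and query them with their `ray_cast` method, as in the linked `CameraValidation.py`. All names here (`raycast_visibility`, `camera_rays`, the sphere tuples) are illustrative:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along a unit-length ray to the nearest sphere hit, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c  # direction is unit length, so the quadratic's a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def camera_rays(n=32, fov=0.5):
    """An n x n grid of unit ray directions through a pinhole at the origin."""
    rays = []
    for i in range(n):
        for j in range(n):
            x = (i / (n - 1) - 0.5) * fov
            y = (j / (n - 1) - 0.5) * fov
            norm = math.sqrt(x * x + y * y + 1.0)
            rays.append((x / norm, y / norm, 1.0 / norm))
    return rays

def raycast_visibility(target, occluders, rays, eps=1e-6):
    """Fraction of rays hitting the target that are not blocked by an occluder.

    target:    (center, radius) of the object of interest
    occluders: list of (center, radius) for all other scene objects
    """
    hits = visible = 0
    for d in rays:
        # Pass 1: cast against the target alone (the "target-only BVH tree").
        t_target = ray_sphere_hit((0.0, 0.0, 0.0), d, *target)
        if t_target is None:
            continue
        hits += 1
        # Pass 2: cast against the rest of the scene (the "all objects" tree).
        t_scene = min(
            (t for t in (ray_sphere_hit((0.0, 0.0, 0.0), d, *o) for o in occluders)
             if t is not None),
            default=None,
        )
        # Visible if nothing in the scene hits closer than the target does.
        if t_scene is None or t_target <= t_scene + eps:
            visible += 1
    return visible / hits if hits else 0.0
```

Dividing the number of visible hits by the total number of target hits gives exactly the visible/full ratio asked for in the issue; accuracy improves with the ray-grid resolution.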

ccteaher commented 1 year ago

Hello, could you share a more concrete solution?

I am looking forward to your reply.

Thank you.