Closed RoyAmoyal closed 2 years ago
Hey,
Of course this works; you can take inspiration from the following examples.
Here you can see how to sample objects floating in free space, as you want:
https://github.com/DLR-RM/BlenderProc/tree/main/examples/datasets/bop_object_pose_sampling
If you want random backgrounds, check out this:
https://github.com/DLR-RM/BlenderProc/tree/main/examples/advanced/random_backgrounds
Does this help you?
Best regards, Max
Hey Max,
so if I want to create my own dataset for custom objects, I have to do the following:
Is that right? And is it possible to also create the mask images for my dataset? A lot of 6D pose estimation models require them.
If you can post example code for even one object, that would be great :D
Thanks, Roy.
Hey,
the order seems correct.
> if you can post an example code for even 1 object it will be great :D
First you load your object:
Then you set your intrinsics:
Check out the linked examples, where the code is taken from.
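A minimal sketch of those two steps, with the BlenderProc calls left as comments since they need a Blender environment (run via `blenderproc run`). The K matrix is the Linemod one from camera.json quoted later in this thread; the object path is hypothetical, and 640x480 is assumed as the standard Linemod resolution:

```python
import numpy as np

# Linemod intrinsics from camera.json, as a 3x3 K matrix
K = np.array([
    [572.411363389757, 0.0,               325.2611083984375],
    [0.0,              573.5704328585578, 242.04899588216654],
    [0.0,              0.0,               1.0],
])

# Inside a BlenderProc script, the two steps above would look roughly like
# (hypothetical object path):
# import blenderproc as bproc
# bproc.init()
# obj = bproc.loader.load_obj("path/to/your_object.ply")[0]
# bproc.camera.set_intrinsics_from_K_matrix(K, 640, 480)

print(K[0, 0], K[1, 1])  # focal lengths fx, fy in pixels
```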
Best regards, Max
Thanks for the reply!
I'm currently kind of "cheating": I put my own models folder, with the file names as required (obj_000001 and so on), into the Linemod dataset folder, and I also plan to change camera.json according to my camera. Do you think that might be okay? The pictures seem fine, but I hope scene_gt.json contains the true poses.
For example, when I open a chair model in MeshLab (PLY):
The result, the GT of the chair:
```json
"85": [{"cam_R_m2c": [0.4658588171005249, -0.8737050294876099, -0.14005455374717712, 0.5979551076889038, 0.42751213908195496, -0.6779994964599609, 0.6522464752197266, 0.2321058064699173, 0.7215965390205383], "cam_t_m2c": [-0.22076699137687683, -0.2418719232082367, 1225.315185546875], "obj_id": 1}],
```
And the camera:
```json
"85": {"cam_K": [572.411363389757, 0.0, 325.2611083984375, 0.0, 573.5704328585578, 242.04899588216654, 0.0, 0.0, 1.0], "cam_R_w2c": [0.6234666109085083, -0.7624685764312744, -0.17300613224506378, -0.549244225025177, -0.26964253187179565, -0.790963888168335, 0.5564352869987488, 0.5881622433662415, -0.5868945717811584], "cam_t_w2c": [80.33686065673828, 47.752891540527344, 1127.77197265625], "depth_scale": 1.0},
```
I don't know if you can tell from that data, but if you can, does it make sense?
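For what it's worth, here is a sanity check you can run on the quoted scene_gt entry without Blender: cam_R_m2c should be orthonormal with determinant +1, and in the BOP convention cam_t_m2c is in millimeters:

```python
import numpy as np

# cam_R_m2c from the scene_gt entry above, row-major 3x3
R = np.array([0.4658588171005249, -0.8737050294876099, -0.14005455374717712,
              0.5979551076889038, 0.42751213908195496, -0.6779994964599609,
              0.6522464752197266, 0.2321058064699173, 0.7215965390205383]).reshape(3, 3)

# A valid rotation matrix is orthonormal with determinant +1
assert np.allclose(R @ R.T, np.eye(3), atol=1e-5)
assert np.isclose(np.linalg.det(R), 1.0, atol=1e-5)

# cam_t_m2c is in millimeters in the BOP convention
t = np.array([-0.22076699137687683, -0.2418719232082367, 1225.315185546875])
print(t[2] / 1000.0)  # object depth in meters (~1.2 m)
```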
Another example (on a surface):
Later I will also build my own code according to the documentation, without the "cheating".
Also, how do I get mask images? Some 6D models (like PVNet) also require masks.
Thank you so much!
Hey,
looks good so far.
For semantic segmentation, check out this:
https://github.com/DLR-RM/BlenderProc/tree/main/examples/basics/semantic_segmentation
And for COCO annotations (e.g. bounding boxes):
https://github.com/DLR-RM/BlenderProc/tree/main/examples/advanced/coco_annotations
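For PVNet-style binary masks specifically: once `bproc.renderer.enable_segmentation_output(map_by=["instance"])` has produced instance maps (see the semantic_segmentation example), converting them to per-object 0/255 masks is a few lines of NumPy. A sketch, with a hypothetical 4x4 array standing in for a rendered map:

```python
import numpy as np

# Hypothetical instance segmentation map, standing in for one rendered by
# bproc.renderer.enable_segmentation_output(map_by=["instance"]);
# each pixel holds an instance id, 0 = background.
segmap = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 1],
                   [0, 1, 1, 0],
                   [0, 0, 0, 0]])

# Binary 0/255 mask for the object with instance id 1
mask = (segmap == 1).astype(np.uint8) * 255

# Save it per frame/object, e.g. with imageio.imwrite("mask_000001.png", mask)
print(mask)
```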
Best regards, Max
Hey Max, I meant masks like this:
And another question: when I try to load the following model, unlike the other models I put in, it does not work properly (I am using the bop loader as I did for the other PLY files). I see a dot in the center of the image that I assume is the object, but for some reason it is not rendered properly.
The same happens with examples/basics/camera_object_pose when I try to load my custom PLY.
How can I fix that?
If you prefer, I will open new issues.
Check if your custom PLY is in mm. If it is in meters, remove those lines in examples/basics/camera_object_pose:
Likewise, in the bop loader, set mm2m=False
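As a sketch of the two variants (the BlenderProc lines are left as comments since they need Blender; the vertex values below are hypothetical):

```python
import numpy as np

# If your PLY is in millimeters (the BOP convention), keep the scaling:
#   obj.set_scale([0.001, 0.001, 0.001])   # as in camera_object_pose
# or let the BOP loader convert it:
#   bproc.loader.load_bop_objs(bop_dataset_path=..., mm2m=True)
# If your PLY is already in meters, drop the scaling / pass mm2m=False.

# A quick unit heuristic on the raw vertices: a household-sized object whose
# bounding-box diagonal is in the hundreds is almost surely modeled in mm.
verts = np.array([[0, 0, 0], [60, 95, 200]])  # hypothetical extreme vertices
diag = np.linalg.norm(verts.max(axis=0) - verts.min(axis=0))
print("likely mm" if diag > 10 else "likely m")  # prints "likely mm"
```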
Hey, thanks for the help. It works with bop_object_pose_sampling, but it doesn't work with bop_object_on_surface_sampling. I am getting the error:
In bop_object_on_surface_sampling I have to keep mm2m=True or I get this error.
Sounds strange; you should only set mm2m=False for your own objects, not for the distractor objects, because their vertices are still in mm and need to be scaled to meters. So only change this line:
Thanks! Actually, I commented out the distractor BOP loading, and object number 1 is my own object in the Linemod folder. Because bop_object_pose_sampling works great, I suspect it is related to something else in the on_surface code.
I run the code with the path to the lm dataset:
```
blenderproc run examples/datasets/bop_object_on_surface_sampling/main.py MyOwnPathToTheDataset lm resources/cctextures examples/datasets/bop_object_on_surface_sampling/output
```
The code:
```python
import blenderproc as bproc
import argparse
import os
import numpy as np

parser = argparse.ArgumentParser()
parser.add_argument('bop_parent_path', nargs='?', help="Path to the bop datasets parent directory")
parser.add_argument('bop_dataset_name', nargs='?', help="Main BOP dataset")
parser.add_argument('cc_textures_path', nargs='?', default="resources/cctextures", help="Path to downloaded cc textures")
parser.add_argument('output_dir', nargs='?', help="Path to where the final files will be saved")
args = parser.parse_args()

bproc.init()

# load a random sample of bop objects into the scene
# sampled_bop_objs = bproc.loader.load_bop_objs(bop_dataset_path=os.path.join(args.bop_parent_path, args.bop_dataset_name),
#                                               mm2m=False,
#                                               sample_objects=True,
#                                               num_of_objs_to_sample=1)
sampled_bop_objs = bproc.loader.load_bop_objs(bop_dataset_path=os.path.join(args.bop_parent_path, args.bop_dataset_name),
                                              mm2m=False,
                                              obj_ids=[1])

# load distractor bop objects
# distractor_bop_objs = bproc.loader.load_bop_objs(bop_dataset_path=os.path.join(args.bop_parent_path, 'tless'),
#                                                  model_type='cad',
#                                                  mm2m=True,
#                                                  sample_objects=True,
#                                                  num_of_objs_to_sample=3)
# distractor_bop_objs += bproc.loader.load_bop_objs(bop_dataset_path=os.path.join(args.bop_parent_path, 'lm'),
#                                                   mm2m=True,
#                                                   sample_objects=True,
#                                                   num_of_objs_to_sample=3)

# load BOP dataset intrinsics
bproc.loader.load_bop_intrinsics(bop_dataset_path=os.path.join(args.bop_parent_path, args.bop_dataset_name))

# set shading and physics properties and randomize PBR materials
for j, obj in enumerate(sampled_bop_objs):
    obj.set_shading_mode('auto')
    mat = obj.get_materials()[0]
    if obj.get_cp("bop_dataset_name") in ['itodd', 'tless']:
        grey_col = np.random.uniform(0.3, 0.9)
        mat.set_principled_shader_value("Base Color", [grey_col, grey_col, grey_col, 1])
    mat.set_principled_shader_value("Roughness", np.random.uniform(0, 1.0))
    mat.set_principled_shader_value("Specular", np.random.uniform(0, 1.0))

# create room
room_planes = [bproc.object.create_primitive('PLANE', scale=[2, 2, 1]),
               bproc.object.create_primitive('PLANE', scale=[2, 2, 1], location=[0, -2, 2], rotation=[-1.570796, 0, 0]),
               bproc.object.create_primitive('PLANE', scale=[2, 2, 1], location=[0, 2, 2], rotation=[1.570796, 0, 0]),
               bproc.object.create_primitive('PLANE', scale=[2, 2, 1], location=[2, 0, 2], rotation=[0, -1.570796, 0]),
               bproc.object.create_primitive('PLANE', scale=[2, 2, 1], location=[-2, 0, 2], rotation=[0, 1.570796, 0])]

# sample light color and strength from ceiling
light_plane = bproc.object.create_primitive('PLANE', scale=[3, 3, 1], location=[0, 0, 10])
light_plane.set_name('light_plane')
light_plane_material = bproc.material.create('light_material')
light_plane_material.make_emissive(emission_strength=np.random.uniform(3, 6),
                                   emission_color=np.random.uniform([0.5, 0.5, 0.5, 1.0], [1.0, 1.0, 1.0, 1.0]))
light_plane.replace_materials(light_plane_material)

# sample point light on shell
light_point = bproc.types.Light()
light_point.set_energy(200)
light_point.set_color(np.random.uniform([0.5, 0.5, 0.5], [1, 1, 1]))
location = bproc.sampler.shell(center=[0, 0, 0], radius_min=1, radius_max=1.5,
                               elevation_min=5, elevation_max=89, uniform_volume=False)
light_point.set_location(location)

# sample CC Texture and assign to room planes
cc_textures = bproc.loader.load_ccmaterials(args.cc_textures_path)
random_cc_texture = np.random.choice(cc_textures)
for plane in room_planes:
    plane.replace_materials(random_cc_texture)

# Define a function that samples the initial pose of a given object above the ground
def sample_initial_pose(obj: bproc.types.MeshObject):
    obj.set_location(bproc.sampler.upper_region(objects_to_sample_on=room_planes[0:1],
                                                min_height=1, max_height=4, face_sample_range=[0.4, 0.6]))
    obj.set_rotation_euler(np.random.uniform([0, 0, 0], [0, 0, np.pi * 2]))

# Sample objects on the given surface
placed_objects = bproc.object.sample_poses_on_surface(objects_to_sample=sampled_bop_objs,
                                                      surface=room_planes[0],
                                                      sample_pose_func=sample_initial_pose,
                                                      min_distance=0.01,
                                                      max_distance=0.2)

# BVH tree used for camera obstacle checks
bop_bvh_tree = bproc.object.create_bvh_tree_multi_objects(placed_objects)

poses = 0
while poses < 10:
    # Sample location
    location = bproc.sampler.shell(center=[0, 0, 0],
                                   radius_min=0.61,
                                   radius_max=1.24,
                                   elevation_min=5,
                                   elevation_max=89,
                                   uniform_volume=False)
    # Determine point of interest in scene as the object closest to the mean of a subset of objects
    poi = bproc.object.compute_poi(np.random.choice(placed_objects, size=10))
    # Compute rotation based on vector going from location towards poi
    rotation_matrix = bproc.camera.rotation_from_forward_vec(poi - location, inplane_rot=np.random.uniform(-0.7854, 0.7854))
    # Add homogeneous cam pose based on location and rotation
    cam2world_matrix = bproc.math.build_transformation_mat(location, rotation_matrix)
    # Check that obstacles are at least 0.3 meter away from the camera and that the view is interesting enough
    if bproc.camera.perform_obstacle_in_view_check(cam2world_matrix, {"min": 0.3}, bop_bvh_tree):
        # Persist camera pose
        bproc.camera.add_camera_pose(cam2world_matrix)
        poses += 1

# activate depth rendering
bproc.renderer.enable_depth_output(activate_antialiasing=False)

# render the whole pipeline
data = bproc.renderer.render()

# Write data in bop format
bproc.writer.write_bop(os.path.join(args.output_dir, 'bop_data'),
                       dataset=args.bop_dataset_name,
                       depths=data["depth"],
                       colors=data["colors"],
                       color_file_format="JPEG",
                       ignore_dist_thres=10)
```
I am using this object: obj_000001.zip
Your object needs to be aligned to a coordinate frame to properly place it on a surface. As you can see, it is neither oriented correctly nor centered, and it also has very strange dimensions for a milk box, like 2.6 m high. You can increase the permitted sampling range, i.e. max_distance, but the room is just 2m x 2m x 2m, so you really need to shrink your milk box.
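A quick way to check alignment and size before rendering is to look at the mesh's bounding box, e.g. with plain NumPy on the vertex array (loading the PLY, for instance with trimesh or plyfile, is omitted; the vertices here are hypothetical stand-ins):

```python
import numpy as np

# Hypothetical vertex array of a milk box that is too large and off-center
verts = np.array([[0.3, 0.2, 0.0],
                  [0.5, 0.4, 2.6],
                  [0.3, 0.4, 1.3]])

extents = verts.max(axis=0) - verts.min(axis=0)
center = (verts.max(axis=0) + verts.min(axis=0)) / 2.0
print("extents [m]:", extents)   # 2.6 m tall, far too big for a 2x2x2 m room

# Center the mesh at the origin and scale it to a plausible ~0.26 m height
verts_fixed = (verts - center) * 0.1
print("new extents [m]:", verts_fixed.max(axis=0) - verts_fixed.min(axis=0))
```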
Didn't notice the size of my milk box. I did everything you told me (aligned and resized) and it is working! Thanks!
Hey, I want to generate a synthetic dataset for my custom objects, as they did in pvnet-rendering for the Linemod objects. I have the PLY file for my object and a lot of background images, like they did.
How can I use BlenderProc to generate the rendered dataset (for 6D pose estimation) as they did? For example:
I am looking through your tutorials but I can't find a specific example for that; maybe you can guide me.
And for a more advanced approach: can I generate a dataset with multiple objects (with occlusion) like that?
Thanks!