Yes, I have shared a script to get you started: https://github.com/NVlabs/Deep_Object_Pose/tree/master/scripts/nvisii_data_gen — please read it thoroughly. I am not sure how much more help I can provide; if something in the process is not clear, please ask me more specific questions and I will try to clarify.
I have read the content in the link you shared. I have doubts about running the scripts and adding my own 3D model.
Running the script

If you downloaded everything from the previous steps, e.g., a single HDRI map and some distractors from Google Scanned Objects, you can run the following command:
```
python single_video_pybullet.py --nb_frames 1
```

This will generate a single-frame example in output/output_example/. The image should be similar to the following:
[example rendered frame]
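For a larger run, an invocation might look like the following; the values here are only illustrative, and the flags are described just below:

```
python single_video_pybullet.py --spp 400 --nb_frames 100 --nb_objects 3 --nb_distractors 20 --outf output/my_dataset/
```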
The script has a few controls that are exposed at the beginning of the file. Please consult single_video_pybullet.py for a complete list of parameters. The major parameters are as follows:
- --spp: the number of samples per pixel; the higher it is, the better the quality of the resulting image.
- --nb_frames: the number of images to export.
- --outf: the folder in which to store the data.
- --nb_objects: the number of objects to load; this can reload the same object multiple times.
- --nb_distractors: how many objects to add as distractors; this uses 3D models from Google Scanned Objects.

Adding your own 3D models

The script loads 3D models expressed in the format introduced by the YCB dataset, but it is fairly easy to change the script to load your own 3D model; NViSII allows you to load different formats as well, not just .obj files. In single_video_pybullet.py, find the following code:
```python
for i_obj in range(int(opt.nb_objects)):
    toy_to_load = google_content_folder[random.randint(0, len(google_content_folder) - 1)]
    obj_to_load = toy_to_load + "/google_16k/textured.obj"
    texture_to_load = toy_to_load + "/google_16k/texture_map_flat.png"
    name = "hope_" + toy_to_load.split('/')[-2] + f"_{i_obj}"
    adding_mesh_object(name, obj_to_load, texture_to_load, scale=0.01)
```
You can change obj_to_load and texture_to_load to match your data. If your file format is quite different, for example a .glb file, then in the function adding_mesh_object() you will need to change the following:
```python
if obj_to_load in mesh_loaded:
    toy_mesh = mesh_loaded[obj_to_load]
else:
    toy_mesh = visii.mesh.create_from_file(name, obj_to_load)
    mesh_loaded[obj_to_load] = toy_mesh
```
visii.mesh.create_from_file is the function used to load the data; it can load different file formats. The rest of that function also loads the right texture and applies a material, and it creates a collision mesh so that the object can move.
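As a rough sketch, assuming a .glb asset that visii.mesh.create_from_file can parse, the loading loop above could be pointed at your own model like this. The paths and names below are hypothetical, and adding_mesh_object() may need a small tweak if your file embeds its own textures:

```python
# Hypothetical sketch, not the repository's code: replace the YCB-format
# loading loop with a single custom asset. Paths are examples only.
my_obj = "content/my_parts/rod.glb"
my_texture = "content/my_parts/rod_albedo.png"

for i_obj in range(int(opt.nb_objects)):
    name = f"custom_rod_{i_obj}"
    # If the .glb already embeds its textures, adjust adding_mesh_object()
    # so that the separate texture map is not applied.
    adding_mesh_object(name, my_obj, my_texture, scale=0.01)
```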
I would say that what I shared is a starting point; you will need to handle your own content and how it gets loaded. I normally load 20 to 30 distractors and 2 to 7 instances of the object of interest, then render 100 or 1,000 images, and repeat this process to get a lot of diversity in my final dataset.
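A minimal sketch of such an outer loop, not part of the repository; the flag values and folder names are illustrative, not recommendations:

```python
# Call the generator several times with different output folders so that
# each run draws new random objects, distractors, and poses.
import subprocess
import sys

for run in range(5):
    subprocess.run(
        [
            sys.executable, "single_video_pybullet.py",
            "--spp", "100",
            "--nb_frames", "200",
            "--nb_objects", "3",
            "--nb_distractors", "25",
            "--outf", f"output/run_{run}/",
        ],
        check=True,
    )
```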
If you are looking for a more plug-and-play solution, you should look into the excellent work of BlenderProc.
Hi, now I understand. I have one final question, just to check my understanding: if I cannot find a suitable object model in the Google Scanned Objects dataset, I can add my own model instead. Is that right?
What I shared are suggestions, so you do not have to use them.
Hi, you have done a great job on pose estimation. You have updated train.py. Is it possible to generate a custom dataset using a Python script? Is it necessary to include a 3D model of our own objects? What are the ways to create a custom dataset for use with DOPE? I want to use a rectangular box and a cylindrical rod dataset for pose estimation with DOPE. Please give some clarity on using a custom dataset and on updating the weights for our own custom dataset.