Hi @kirbiyik

For debugging, you can run the Blensor script with the GUI: `[blensor_bin] [blender_path] -P [script_path]`, omitting the `-b` parameter. You may also need to remove the last line `bpy.ops.wm.quit_blender()` of the script. It might also be OS-related, e.g. if you have very long paths.
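To illustrate that last-line change, a minimal sketch of how the tail of such a Blensor scan script could look (the `DEBUG_KEEP_GUI` flag is made up for this example and not part of the repo):

```python
import bpy

DEBUG_KEEP_GUI = True  # set back to False for batch runs with -b

# ... scanning code runs here ...

if not DEBUG_KEEP_GUI:
    bpy.ops.wm.quit_blender()  # the original last line; skipping it keeps the GUI open
```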
Thanks for the tips, I'll look into the renaming problem later on. Meanwhile, let's keep this issue closed.
Thanks for the detailed code @ErlerPhilipp. I'm trying to train the vanilla network on ShapeNet and have encountered a lot of problems. Could you comment on them? Also, please let me know if you have further hints and points that I should be careful about.
I put the meshes into `00_base_meshes` and ran `make_dataset.py`. The following problems occur (illustrative sketches follow the list):

1. Names of some meshes change, which then throws an error during training. For instance, out of 6778 samples I have 6678 in `04_pts` and 6767 in `05_query_pts`. Some samples have different filenames, ignoring file extensions. Where did they come from? I could only keep the common files in `trainset.txt` (as sketched below) at the cost of throwing a lot of samples away, but I want to know why this is happening. Maybe something to do with Blensor?
2. I see that this work requires meshes to be watertight, but `trimesh.fill_holes()` can't fill the samples, so I am using https://github.com/hjwdzh/Manifold (see the watertightness sketch below). Do you think the output of this repo works with this implementation? Can you also comment on why your work assumes watertight meshes?
3. I waited on `make_dataset.py` for 10 hours, then realized it was no longer utilizing the CPU. I checked `05_query_pts`; it had the same number of samples as `00_base_meshes`, so I manually produced the txt files. Any idea why this is happening?
4. For 6K samples, I get 118255 files in `04_pcd`. Is this number too big (a back-of-the-envelope check follows below)? Should I change any parameters like `grid_resolution` or `num_scans_per_mesh_max`?
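A minimal sketch of the `trainset.txt` workaround from problem 1, assuming one file per sample in each folder (the dataset root path and the use of bare filename stems are my assumptions, not the repo's exact layout):

```python
# Keep only sample names present in both 04_pts and 05_query_pts and
# write them to trainset.txt. Sketch only; paths and extensions are
# assumptions about the dataset layout, adjust to yours.
from pathlib import Path

dataset_dir = Path("datasets/my_shapenet")  # hypothetical dataset root

pts_names = {f.stem for f in (dataset_dir / "04_pts").iterdir() if f.is_file()}
query_names = {f.stem for f in (dataset_dir / "05_query_pts").iterdir() if f.is_file()}

common = sorted(pts_names & query_names)
(dataset_dir / "trainset.txt").write_text("\n".join(common) + "\n")
print(f"kept {len(common)} samples ({len(pts_names)} pts, {len(query_names)} query)")
```

Note that `Path.stem` only strips the final extension; if the folders use compound extensions (e.g. `.xyz.npy`), the stems would need further trimming before intersecting.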
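For the watertightness question in problem 2, `trimesh` can at least report which meshes are problematic before and after conversion. A sketch, assuming Manifold is built locally; the `./manifold input output` invocation is my reading of that repo's README and may need adjusting. Watertightness presumably matters because the ground-truth signed distances need a consistent inside/outside test.

```python
# Report watertightness and, if needed, rebuild the surface with Manifold.
# Sketch only: the manifold CLI call is an assumption based on
# https://github.com/hjwdzh/Manifold, not part of this repo.
import subprocess
import trimesh

mesh = trimesh.load("mesh.obj", force="mesh")
print("watertight:", mesh.is_watertight)

if not mesh.is_watertight:
    # fill_holes() only closes simple boundary holes; Manifold rebuilds the
    # whole surface, which is why it can succeed where fill_holes() fails.
    subprocess.run(["./manifold", "mesh.obj", "mesh_watertight.obj"], check=True)
    fixed = trimesh.load("mesh_watertight.obj", force="mesh")
    print("watertight after Manifold:", fixed.is_watertight)
```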
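For problem 4, the count is easier to judge per mesh. Assuming, roughly, one `.pcd` file per scan (an assumption about the output layout, not something stated in the thread):

```python
# Back-of-the-envelope: how many scan files per mesh is 118255?
num_meshes = 6000
num_pcd_files = 118255

print(f"{num_pcd_files / num_meshes:.1f} files per mesh")  # ~19.7
```

Under that assumption, roughly 20 scans per mesh would be expected with a `num_scans_per_mesh_max` of 20 or more, so lowering that parameter should shrink `04_pcd`; `grid_resolution` sounds like it controls the query/reconstruction grid rather than the number of scan files, but that is a guess worth confirming.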