Hi, thanks for your feedback. We are currently working on releasing the pipeline for processing and rendering the CAD files. With it you will be able to produce raster graphics that can be used directly for Deep Learning.
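Until the pipeline is released, here is a minimal sketch of how you could rasterize one of the STL files yourself with trimesh and matplotlib (this is not our pipeline; the file names and image size are placeholders):

```python
# Render an STL mesh to a raster image off-screen (no display required).
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
import trimesh

mesh = trimesh.load("model.stl")  # hypothetical path to one of the STL files

fig = plt.figure(figsize=(4, 4))
ax = fig.add_subplot(111, projection="3d")
ax.add_collection3d(
    Poly3DCollection(mesh.triangles, facecolor="lightgrey", edgecolor="none")
)

# Fit the axes to the mesh bounding box and hide them for a clean raster.
lo, hi = mesh.bounds
ax.set_xlim(lo[0], hi[0]); ax.set_ylim(lo[1], hi[1]); ax.set_zlim(lo[2], hi[2])
ax.set_axis_off()

fig.savefig("model.png", dpi=128)
```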
Point clouds are already implicitly available in the STL files. However, since the vertex sampling in the STL files is not very uniform, you would need to resample the surface evenly (for example with libigl: igl.uniformly_sample_two_manifold, or trimesh: trimesh.sample.sample_surface_even). Note that such a resampling is no longer ground truth, so the other option is to use the OBJ files that we will provide soon, which are already uniformly sampled (or to use our pipeline to sample the CAD models at arbitrary resolution).
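A minimal sketch of the trimesh route mentioned above (file name and sample count are arbitrary placeholders):

```python
# Evenly sample a point cloud from the surface of an STL mesh.
import numpy as np
import trimesh

mesh = trimesh.load("model.stl")  # hypothetical path to one of the STL files

# sample_surface_even uses rejection sampling with a minimum point spacing,
# so it may return slightly fewer points than requested.
points, face_indices = trimesh.sample.sample_surface_even(mesh, count=4096)

np.save("model_pointcloud.npy", points.astype(np.float32))
```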
The same applies to voxel grids: there are already tools available that generate a voxel grid from a 3D mesh, for example binvox.
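If you prefer to stay in Python, trimesh can also voxelize a mesh directly (this is an alternative to the external binvox tool, not binvox itself; the target resolution is an arbitrary choice):

```python
# Voxelize a mesh into a boolean occupancy grid with trimesh.
import numpy as np
import trimesh

mesh = trimesh.load("model.stl")  # hypothetical path to one of the STL files

# Choose the voxel pitch so the longest bounding-box side spans ~64 voxels.
pitch = mesh.extents.max() / 64.0
voxels = mesh.voxelized(pitch)   # surface voxelization; .fill() would add the interior
grid = voxels.matrix             # boolean occupancy array

print(grid.shape, int(grid.sum()), "occupied voxels")
np.save("model_voxels.npy", grid)
```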
Does that help you?
Namaste!
Excellent! Approved!
One suggestion, since I do not see it at first glance: it would be great if you could provide some source code for rendering to raster graphics, mapping to point clouds, and mapping to voxel grids. Then people could immediately feed the data into their Deep Neural Networks.
I have an extensive background in Deep Learning for 3D geometry, and this would definitely help.