Pull Request from Esri's AI Prototypes Team. Seeking to implement multitexture support for OBJ WITH point sampling directly from OBJ data structures AND WITH support for vertices in real-world projected spatial coordinates. This PR includes all features that exist in https://github.com/facebookresearch/pytorch3d/pull/1572 and in #1573. The work is split into separate PRs, one per major component, due to the orthogonal nature of each branch.
We leverage PyTorch3D quite a bit in our applied research on mesh segmentation and often use OBJ datasets that depict entire cities or regions. These datasets nearly always require multiple texture files, and we need to subset meshes at almost every stage of the pipeline. In addition, we are experimenting with feature extraction methods that generally involve point-cloud-based techniques; such approaches require sampling at least one point per face and linking each sampled point to its origin face. These features and more are implemented in this branch. Further, many of our input meshes have XYZ vertices that exist in real space. As a result, depending on the location on Earth, vertex coordinate values may be so large that they are rounded if the verts data structure is only float32. To accommodate verts in float64, we modified several data structures, including OBJ, Meshes, and Transforms3D.
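To illustrate the per-face sampling idea (one point per face, with each point linked back to its origin face), here is a minimal plain-PyTorch sketch. The helper name and return signature are hypothetical, chosen only for illustration; the actual API introduced by this branch may differ.

```python
import torch

def sample_one_point_per_face(verts, faces):
    """Sample one random point per triangular face via barycentric weights,
    returning both the points and the index of the face each point came from.
    (Illustrative sketch; not the API implemented in this branch.)"""
    # Gather the three vertices of each face: each tensor is (F, 3)
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]

    # Uniform sampling over each triangle using the square-root trick
    # for barycentric coordinates
    u = torch.sqrt(torch.rand(faces.shape[0], 1, dtype=verts.dtype))
    v = torch.rand(faces.shape[0], 1, dtype=verts.dtype)
    points = (1 - u) * v0 + u * (1 - v) * v1 + u * v * v2

    # Each sampled point is linked back to its origin face by index
    face_idx = torch.arange(faces.shape[0])
    return points, face_idx

# Toy example: a unit square split into two triangles
verts = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]],
                     dtype=torch.float64)
faces = torch.tensor([[0, 1, 2], [1, 3, 2]])
points, face_idx = sample_one_point_per_face(verts, faces)
```

Keeping `face_idx` alongside `points` is what makes it possible to propagate per-point predictions (e.g., segmentation labels) back onto the originating mesh faces.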
We've written a story about the full scope of changes at GeoAI/PyTorch3D.
Key Excerpt from our article:
multitexture-obj-high-precision: Adds support to pytorch3d.io.obj_io and pytorch3d.ops.sample_points_from_obj for reading OBJ vertices with 64-bit floating point precision, i.e., double tensors. Although PyTorch3D currently allows one to control the decimal precision of the output, if the vertex coordinates of a mesh in an OBJ file are based on real-world coordinates, the vertex values can lose significant numeric precision. In practice, this means that the vertex coordinates of mesh faces could be offset by up to a meter or more. Linked to issue #1570.
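The precision loss is easy to reproduce with plain PyTorch. Projected coordinates (e.g., UTM easting/northing, which the example values below are meant to resemble) are on the order of 10^5 to 10^7 meters, while float32 carries only about 7 significant decimal digits, so sub-meter detail is rounded away on a float32 round trip:

```python
import torch

# A vertex with real-world projected coordinates (illustrative values)
vert_f64 = torch.tensor([435612.374, 4532791.658, 87.125], dtype=torch.float64)

# Casting to float32 and back exposes the rounding error: at a magnitude of
# ~4.5e6 the float32 spacing between representable values is 0.5 meters
vert_f32 = vert_f64.to(torch.float32)
error = (vert_f64 - vert_f32.to(torch.float64)).abs()
print(error)  # the northing coordinate is off at the decimeter scale
```

Loading verts as double tensors avoids this error without requiring users to re-project their data into a local coordinate frame first.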
Issues Addressed Include: https://github.com/facebookresearch/pytorch3d/issues/1570.