I'm having trouble understanding the coordinate representation of a SparseTensor.
I have an input point cloud created with Open3D, and I'm voxel-downsampling it with a voxel size of 5. The coordinates of the resulting sparse tensor are very different from the input coordinates. Why is that?
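To make the comparison concrete, here is a minimal pure-NumPy sketch of the two conventions as I understand them (this is my own assumption about the libraries' behavior, not taken from their source): Open3D's `voxel_down_sample` returns one point per voxel at the centroid of the original points, still in the original metric units, while MinkowskiEngine quantizes the scaled coordinates down to integer voxel indices.

```python
import numpy as np

# Hypothetical toy points; names and values are illustrative, not from either library.
points = np.array([[0.0, 0.0, 0.0],
                   [1.0, 2.0, 3.0],
                   [6.0, 7.0, 8.0]])
VOXEL_SIZE = 5

# Integer voxel index of each point: floor(point / voxel_size).
voxel_ids = np.floor(points / VOXEL_SIZE).astype(int)
unique_ids, inverse = np.unique(voxel_ids, axis=0, return_inverse=True)
inverse = inverse.ravel()

# Open3D-style downsample (assumed): centroid of the points in each voxel,
# expressed in the original coordinate units.
centroids = np.array([points[inverse == i].mean(axis=0)
                      for i in range(len(unique_ids))])
print(centroids)   # [[0.5 1.  1.5]
                   #  [6.  7.  8. ]]

# MinkowskiEngine-style quantization (assumed): the integer voxel indices themselves.
print(unique_ids)  # [[0 0 0]
                   #  [1 1 1]]
```

Under this assumption the two outputs live on different scales, which matches the kind of difference I'm seeing.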
Code to run the example:

    import math

    import numpy as np
    import open3d as o3d
    import torch
    import MinkowskiEngine as ME

    VOXEL_SIZE = 5

    def get_point_cloud(total_points: int = 5000):
        # Dense integer grid with roughly `total_points` points.
        side = math.ceil(total_points ** (1 / 3))
        points = np.array([[x, y, z]
                           for x in range(side)
                           for y in range(side)
                           for z in range(side)])
        point_cloud = o3d.geometry.PointCloud()
        point_cloud.points = o3d.utility.Vector3dVector(points)
        return point_cloud

    pointcloud_o3d = get_point_cloud()
    pointcloud_me = get_point_cloud()

    # Open3D voxel downsample at the same voxel size.
    pointcloud_o3d = pointcloud_o3d.voxel_down_sample(voxel_size=VOXEL_SIZE)
    coords = np.asarray(pointcloud_o3d.points)
    print(coords)

    # MinkowskiEngine: scale by the voxel size, add the batch dimension,
    # and build the SparseTensor.
    features = torch.from_numpy(np.ones_like(np.asarray(pointcloud_me.points))).to(torch.float32)
    coords_me = np.asarray(pointcloud_me.points)
    coordinates = ME.utils.batched_coordinates([coords_me / VOXEL_SIZE], dtype=torch.float32)
    coordinates, features = ME.utils.sparse_collate([coordinates], [features], dtype=torch.float32)
    sparse_tensor = ME.SparseTensor(
        features,
        coordinates,
        device="cuda",
        quantization_mode=ME.SparseTensorQuantizationMode.RANDOM_SUBSAMPLE,
    )
    for row in sparse_tensor.coordinates:
        print(row)
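For reference on the coordinate layout: as far as I understand, `ME.utils.batched_coordinates` prepends a batch-index column, so each row of `sparse_tensor.coordinates` has the form `[batch, x, y, z]` in voxel units rather than `[x, y, z]` in the original units. A small NumPy sketch of that layout (the variable names are mine, not from the library):

```python
import numpy as np

VOXEL_SIZE = 5
# Hypothetical raw points (metric units) for a single batch element.
raw = np.array([[0.0, 0.0, 0.0], [6.0, 7.0, 8.0], [12.0, 1.0, 4.0]])

# Quantize to integer voxel indices, as I believe the SparseTensor does internally.
voxels = np.floor(raw / VOXEL_SIZE).astype(np.int32)

# Prepend the batch index (all zeros here: a single batch element),
# mimicking the [batch, x, y, z] layout of batched coordinates.
batch_col = np.zeros((len(voxels), 1), dtype=np.int32)
batched = np.hstack([batch_col, voxels])
print(batched)
# [[0 0 0 0]
#  [0 1 1 1]
#  [0 2 0 0]]
```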