Open · desaibhargav opened this issue 1 year ago
How large an area does the point cloud cover?
Hi, I got the same error. How did you resolve it? @desaibhargav @qinzheng93
I also encountered the same problem. Have you solved it?
Hello, I am having the same error while testing on my own data of 20-28k points.
From other issues I have read that one solution could be to create a new dataset.py for my data, but I don't fully understand this approach.
Did you solve the problem? If so, could you please share some details? @qinzheng93 @desaibhargav @hiyyg @W-QY
Thank you
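For reference, my current understanding of the dataset.py approach is a minimal sketch like the one below, assuming pairs of .npy point arrays on disk (`CustomPairDataset` and the file layout are my own guesses, not this repo's API):

```python
# Minimal sketch of the "write your own dataset.py" idea. CustomPairDataset
# and the .npy file layout are hypothetical; adapt to your data.
import numpy as np
from torch.utils.data import Dataset


class CustomPairDataset(Dataset):
    """Yields the same keys that demo.py feeds into the collate function."""

    def __init__(self, pairs):
        # pairs: list of (src_npy_path, ref_npy_path) tuples
        self.pairs = pairs

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, index):
        src_file, ref_file = self.pairs[index]
        src_points = np.load(src_file).astype(np.float32)
        ref_points = np.load(ref_file).astype(np.float32)
        return {
            "src_points": src_points,
            "ref_points": ref_points,
            # demo.py uses constant 1-d features for raw point clouds
            "src_feats": np.ones((src_points.shape[0], 1), dtype=np.float32),
            "ref_feats": np.ones((ref_points.shape[0], 1), dtype=np.float32),
            # identity transform when no ground truth is available
            "transform": np.eye(4, dtype=np.float32),
        }
```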
Same question here. Running

```
python experiments/geotransformer.3dmatch.stage4.gse.k3.max.oacl.stage2.sinkhorn/demo.py --src_file "" --ref_file "" --gt_file "" --weights geotransformer-3dmatch.pth.tar
```

prints the shapes `(930, 3) (8973, 3) (930, 1) (8973, 1)` and then fails:

```
Traceback (most recent call last):
  File "experiments/geotransformer.3dmatch.stage4.gse.k3.max.oacl.stage2.sinkhorn/demo.py", line 127, in
```
The `load_data` and `main` I'm using:

```python
# Excerpt from my modified demo.py. The imports below are the ones the
# original demo.py already has, restated so the snippet is self-contained;
# make_parser is defined earlier in demo.py itself.
import numpy as np
import open3d as o3d
import torch

from geotransformer.utils.data import registration_collate_fn_stack_mode
from geotransformer.utils.torch import to_cuda, release_cuda

from config import make_cfg
from model import create_model


def load_data(args):
    voxel_size = 0.01
    pcd_src = o3d.io.read_point_cloud('mesh_menya.pcd')
    pcd_src = pcd_src.voxel_down_sample(voxel_size)
    pcd_target = o3d.io.read_point_cloud('mesh_whole.pcd')
    pcd_target = pcd_target.voxel_down_sample(voxel_size)
    # src_points = np.load(args.src_file)
    # ref_points = np.load(args.ref_file)
    src_points = np.asarray(pcd_src.points)
    ref_points = np.asarray(pcd_target.points)
    # constant 1-d features, as the model expects for raw point clouds
    src_feats = np.ones_like(src_points[:, :1])
    ref_feats = np.ones_like(ref_points[:, :1])
    print(src_points.shape, ref_points.shape, src_feats.shape, ref_feats.shape)
    data_dict = {
        "ref_points": ref_points.astype(np.float32),
        "src_points": src_points.astype(np.float32),
        "ref_feats": ref_feats.astype(np.float32),
        "src_feats": src_feats.astype(np.float32),
    }
    # no ground truth for my data, so fall back to the identity transform
    data_dict["transform"] = np.eye(4, dtype=np.float32)
    if False:  # disabled: would load the ground-truth transform from --gt_file
        if args.gt_file is not None:
            transform = np.load(args.gt_file)
            data_dict["transform"] = transform.astype(np.float32)
    return data_dict


def main():
    parser = make_parser()
    args = parser.parse_args()
    cfg = make_cfg()

    # prepare data
    data_dict = load_data(args)
    # neighbor_limits = [38, 36, 36, 38]  # default setting in 3DMatch
    neighbor_limits = [8, 8, 8, 8]  # reduced for my sparser point clouds
    data_dict = registration_collate_fn_stack_mode(
        [data_dict],
        cfg.backbone.num_stages,
        cfg.backbone.init_voxel_size,
        cfg.backbone.init_radius,
        neighbor_limits,
    )

    # prepare model
    model = create_model(cfg).cuda()
    state_dict = torch.load(args.weights)
    model.load_state_dict(state_dict["model"])

    # prediction
    data_dict = to_cuda(data_dict)
    output_dict = model(data_dict)
    data_dict = release_cuda(data_dict)
    output_dict = release_cuda(output_dict)


if __name__ == "__main__":
    main()
```
Solved my problem by changing the voxel size in the cfg.
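For anyone who hits this later: "selected index k out of range" is a top-k asking for more elements than exist, which is what happens when the voxel sizes leave too few points at the coarsest (superpoint) stage. A quick sanity check, assuming the usual KPConv scheme where each stage doubles the voxel size (the file path and the 3DMatch values below are placeholders to adapt):

```python
import open3d as o3d

# Count how many points survive each downsampling stage. Assumption: as in
# most KPConv-style backbones, stage i uses a voxel of init_voxel_size * 2**i.
pcd = o3d.io.read_point_cloud("mesh_menya.pcd")  # placeholder path
init_voxel_size = 0.025  # assumed cfg.backbone.init_voxel_size for 3DMatch
num_stages = 4           # cfg.backbone.num_stages
for stage in range(num_stages):
    voxel = init_voxel_size * (2 ** stage)
    down = pcd.voxel_down_sample(voxel)
    print(f"stage {stage}: voxel={voxel:.3f}, points={len(down.points)}")
# If the last stage is down to a handful of points, lower the voxel sizes
# (as the fix above does) or rescale the clouds to a 3DMatch-like scale.
```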
Thank you for open sourcing this amazing work ❤️
I followed the installation instructions and ran demo.py on my own data. However, I'm met with a RuntimeError: "selected index k out of range".
I'm able to load the model, so this likely comes from the forward pass (probably the superpoint estimation step).
I'm using the default configuration for KPConv and GeoTransformer, and my point clouds are in the range of 10K to 30K points (after downsampling). I can share links to my files if that helps.
Best, Bhargav
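A related thing worth checking for clouds like these: if they are not in 3DMatch-like metric scale (indoor scenes in meters), the default voxel sizes can erase most superpoints, and the voxel-size fix above is effectively a scale adjustment. A hedged workaround sketch that rescales a cloud before running demo.py (the path and the ~3 m target extent are placeholder assumptions):

```python
import numpy as np
import open3d as o3d

# Rescale a cloud so its longest side is ~3 m (assumed 3DMatch-like extent).
# "my_cloud.pcd" and the target extent are placeholders.
pcd = o3d.io.read_point_cloud("my_cloud.pcd")
points = np.asarray(pcd.points)
extent = float(np.max(points.max(axis=0) - points.min(axis=0)))
scale = 3.0 / extent
pcd.scale(scale, center=pcd.get_center())
o3d.io.write_point_cloud("my_cloud_rescaled.pcd", pcd)
print(f"extent before: {extent:.3f}, scale applied: {scale:.3f}")
# Note: the estimated transform then lives in rescaled coordinates; divide
# its translation by `scale` to map it back to the original units.
```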