OPEN-AIR-SUN / PQ-Transformer


Inference using my own point cloud on Scannet dataset #1

Open Wavelet303 opened 2 years ago

Wavelet303 commented 2 years ago

Hello, first of all, congratulations on your incredible work. I have been able to test the results on the Scannet dataset. Currently I can accurately reconstruct indoor scenes using Bundlefusion, obtaining results similar to Scannet's 3D dataset models, and the point clouds are axis-aligned just like Scannet's. How can I easily run inference on the model using my own point cloud? That is, in what format should the data be saved? Is any transformation needed beforehand? Thanks for your time.

Fromandto commented 2 years ago

Generally speaking, we cannot support this kind of engineering issue, but I have asked Xiaoxue to give some tips; she may answer soon.

cxx226 commented 2 years ago

No further transformation is needed. You can save your point clouds in any format (.npy, .ply, etc.) and load them as (N, 3) tensors. Just comment out the loss calculation and evaluation code in eval.py (e.g., lines 302-316, 320, 322, 327, 330, 332, 334, and 344-407) and set the 'inference_switch' argument of the dump_results and dump_results_quad functions to True (lines 340, 341), before running:

CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node 1 eval.py  --log_dir [log_dir] --checkpoint_path [checkpoint_path]

If you want to test on a single point cloud, you can simply load the N×3 tensor and feed it into the inputs dict directly (line 295, eval.py). If you want to test on a larger dataset, you should modify the dataloader and replace the Scannet data with your own. Hope this helps.
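As an illustration only (not part of the original reply), here is a minimal sketch of what "load the N×3 tensor and feed it into the inputs dict" could look like. The helper names, the use of open3d for .ply files, and the num_points value are assumptions on my part; only the 'point_clouds' key is taken from the code referenced later in this thread, so check eval.py and the repo's config before relying on it.

import numpy as np
import torch
import open3d as o3d  # assumed for reading .ply files; any reader that yields an (N, 3) array works

def load_point_cloud(path):
    """Load a point cloud as a float32 (N, 3) array."""
    if path.endswith('.npy'):
        pc = np.load(path)[:, :3]          # keep only xyz if extra columns exist
    else:                                   # .ply, .pcd, etc.
        pcd = o3d.io.read_point_cloud(path)
        pc = np.asarray(pcd.points)
    return pc.astype(np.float32)

def build_inputs(pc, num_points=40000, device='cuda'):
    """Subsample (or resample) to a fixed size and wrap as the inputs dict."""
    # 40000 is a guess; use the num_points value from the repo's config.
    choice = np.random.choice(pc.shape[0], num_points,
                              replace=pc.shape[0] < num_points)
    pc = pc[choice]
    # 'point_clouds' is the key the evaluation/dump code reads
    # (see end_points['point_clouds'] in dump_helper.py).
    return {'point_clouds': torch.from_numpy(pc).unsqueeze(0).to(device)}

# Usage (hypothetical file name):
# inputs = build_inputs(load_point_cloud('my_scene.ply'))
# end_points = net(inputs)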

Wavelet303 commented 2 years ago

Thank you for your prompt and detailed response ;) . I will try to run inference following your valuable advice.

jackchinor commented 4 days ago


Hi, I commented out the lines as you advised, but I get an error:

[rank0]: dump_results(end_points, os.path.join(ROOT_DIR, 'dump/%01dbest' % (batch_idx)), DATASET_CONFIG)
[rank0]:   File "/proj/users/xgqin/qxg/layout/PQ-Transformer/models/dump_helper.py", line 38, in dump_results
[rank0]:     point_clouds = end_points['point_clouds'].cpu().numpy()

E0627 18:12:00.356000 140317217981632 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 0 (pid: 724317) of binary: /proj/users/xgqin/qxg/miniconda3/envs/layout/bin/python
Traceback (most recent call last):
  File "", line 198, in _run_module_as_main
  File "", line 88, in _run_code
  File "/proj/users/xgqin/qxg/miniconda3/envs/layout/lib/python3.11/site-packages/torch/distributed/launch.py", line 198, in
    main()


Can you give me some tips on how to figure it out? Thanks very much.