AllenXiangX / SnowflakeNet

(TPAMI 2023) Snowflake Point Deconvolution for Point Cloud Completion and Generation with Skip-Transformer
MIT License

How to obtain point cloud files in point cloud completion testing #21

Open yizhiboyan opened 1 year ago

yizhiboyan commented 1 year ago

Thank you for your excellent work! I am a beginner; I trained the point cloud completion task using your code and obtained the checkpoint and log files in the exp folder. How can I obtain the completed point cloud files (.pcd) generated during testing? Looking forward to your reply.

AllenXiangX commented 1 year ago

Hi, you can use the lmdb reader of GRNet to get the .pcd files from the PCN dataset.
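For readers who only need the export step, here is a minimal sketch (not the repository's code; pcds_pred, model_id and the output directory are placeholder names) of writing a completed cloud to .pcd with Open3D:

import os

import numpy as np
import open3d as o3d


def save_pcd(points, out_path):
    # points: (N, 3) numpy array of xyz coordinates
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points.astype(np.float64))
    o3d.io.write_point_cloud(out_path, pcd)  # format is inferred from the .pcd extension


# e.g. inside the test loop, after pcds_pred = model(partial):
# completed = pcds_pred[-1][0].detach().cpu().numpy()   # (N, 3), first item of the batch
# save_pcd(completed, os.path.join("out_pcd", "%s.pcd" % model_id))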

yizhiboyan commented 1 year ago

Thank you for your patient reply. I will give it a try.

huyanbi commented 1 year ago

@yizhiboyan Hello, I understand from your answer that you have successfully reproduced the point cloud completion. Have you encountered this problem?

Loaded compiled 3D CUDA chamfer distance
  0%| | 0/906 [00:03<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 167, in <module>
    train(config)
  File "train.py", line 105, in train
    pcds_pred = model(partial)
  File "/usr/local/miniconda3/envs/spd/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "../models/model_completion.py", line 137, in forward
    feat = self.feat_extractor(point_cloud)
  File "/usr/local/miniconda3/envs/spd/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "../models/model_completion.py", line 32, in forward
    l1_xyz, l1_points, idx1 = self.sa_module_1(l0_xyz, l0_points)  # (B, 3, 512), (B, 128, 512)
  File "/usr/local/miniconda3/envs/spd/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "../models/utils.py", line 375, in forward
    new_xyz, new_points, idx, grouped_xyz = sample_and_group_knn(xyz, points, self.npoint, self.nsample, self.use_xyz, idx=idx)
  File "../models/utils.py", line 316, in sample_and_group_knn
    new_xyz = gather_operation(xyz, furthest_point_sample(xyz_flipped, npoint))  # (B, 3, npoint)
  File "/usr/local/miniconda3/envs/spd/lib/python3.7/site-packages/pointnet2_ops-3.0.0-py3.7-linux-x86_64.egg/pointnet2_ops/pointnet2_utils.py", line 54, in forward
    out = _ext.furthest_point_sampling(xyz, npoint)
RuntimeError: false INTERNAL ASSERT FAILED at "pointnet2_ops/_ext-src/src/sampling.cpp":83, please report a bug to PyTorch. CPU not supported

Looking forward to your answer.
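For context, _ext.furthest_point_sampling in pointnet2_ops only has a CUDA kernel, which is what the "CPU not supported" assert is reporting: the model and every input tensor must be on the GPU before the forward pass. A minimal sketch of such a check, with model and batch as placeholder names:

import torch


def to_gpu(model, batch):
    # pointnet2_ops' furthest_point_sampling has no CPU kernel, so both the
    # network and each input tensor must live on a CUDA device before forward().
    assert torch.cuda.is_available(), "a CUDA-capable GPU is required"
    model = model.cuda()
    batch = {k: v.cuda() for k, v in batch.items()}
    return model, batch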

yizhiboyan commented 1 year ago


Hi @huyanbi, I haven't encountered this problem before, because I use https://github.com/AllenXiangX/SPD_jittor (SPD_jittor). This version uses Jittor, which is similar to PyTorch and is also written in Python; you could try it. Here is my environment configuration information, which I hope is helpful to you. I use Ubuntu 20.04 LTS.

packages in environment at /home/usrname/anaconda3/envs/spd_jittor:

# Name                     Version          Build    Channel

_libgcc_mutex 0.1 main https://mirrors.ustc.edu.cn/anaconda/pkgs/main _openmp_mutex 5.1 1_gnu https://mirrors.ustc.edu.cn/anaconda/pkgs/main absl-py 1.4.0 pypi_0 pypi anyio 3.6.2 pypi_0 pypi argon2-cffi 21.3.0 pypi_0 pypi argon2-cffi-bindings 21.2.0 pypi_0 pypi argparse 1.4.0 pypi_0 pypi astunparse 1.6.3 pypi_0 pypi attrs 23.1.0 pypi_0 pypi backcall 0.2.0 pypi_0 pypi beautifulsoup4 4.12.2 pypi_0 pypi bleach 6.0.0 pypi_0 pypi ca-certificates 2023.01.10 h06a4308_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main cachetools 5.3.0 pypi_0 pypi certifi 2022.12.7 py37h06a4308_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main cffi 1.15.1 pypi_0 pypi charset-normalizer 3.1.0 pypi_0 pypi cycler 0.11.0 pypi_0 pypi debugpy 1.6.7 pypi_0 pypi decorator 5.1.1 pypi_0 pypi defusedxml 0.7.1 pypi_0 pypi easydict 1.10 pypi_0 pypi einops 0.6.1 pypi_0 pypi entrypoints 0.4 pypi_0 pypi fastjsonschema 2.16.3 pypi_0 pypi fonttools 4.38.0 pypi_0 pypi google-auth 2.17.3 pypi_0 pypi google-auth-oauthlib 0.4.6 pypi_0 pypi grpcio 1.54.0 pypi_0 pypi h5py 3.8.0 pypi_0 pypi idna 3.4 pypi_0 pypi importlib-metadata 6.6.0 pypi_0 pypi importlib-resources 5.12.0 pypi_0 pypi ipykernel 6.16.2 pypi_0 pypi ipython 7.34.0 pypi_0 pypi ipython-genutils 0.2.0 pypi_0 pypi ipywidgets 8.0.6 pypi_0 pypi jedi 0.18.2 pypi_0 pypi jinja2 3.1.2 pypi_0 pypi jittor 1.3.7.13 pypi_0 pypi jsonschema 4.17.3 pypi_0 pypi jupyter-client 7.4.9 pypi_0 pypi jupyter-core 4.12.0 pypi_0 pypi jupyter-server 1.24.0 pypi_0 pypi jupyterlab-pygments 0.2.2 pypi_0 pypi jupyterlab-widgets 3.0.7 pypi_0 pypi kiwisolver 1.4.4 pypi_0 pypi ld_impl_linux-64 2.38 h1181459_1 https://mirrors.ustc.edu.cn/anaconda/pkgs/main libffi 3.4.2 h6a678d5_6 https://mirrors.ustc.edu.cn/anaconda/pkgs/main libgcc-ng 11.2.0 h1234567_1 https://mirrors.ustc.edu.cn/anaconda/pkgs/main libgomp 11.2.0 h1234567_1 https://mirrors.ustc.edu.cn/anaconda/pkgs/main libstdcxx-ng 11.2.0 h1234567_1 https://mirrors.ustc.edu.cn/anaconda/pkgs/main markdown 3.4.3 pypi_0 pypi markupsafe 2.1.2 pypi_0 pypi matplotlib 3.5.3 pypi_0 pypi
matplotlib-inline 0.1.6 pypi_0 pypi mistune 2.0.5 pypi_0 pypi munch 2.5.0 pypi_0 pypi nbclassic 0.5.5 pypi_0 pypi nbclient 0.7.3 pypi_0 pypi nbconvert 7.3.1 pypi_0 pypi nbformat 5.8.0 pypi_0 pypi ncurses 6.4 h6a678d5_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main nest-asyncio 1.5.6 pypi_0 pypi notebook 6.5.4 pypi_0 pypi notebook-shim 0.2.2 pypi_0 pypi numpy 1.21.6 pypi_0 pypi oauthlib 3.2.2 pypi_0 pypi open3d 0.9.0.0 pypi_0 pypi opencv-python 4.7.0.72 pypi_0 pypi openssl 1.1.1t h7f8727e_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main packaging 23.1 pypi_0 pypi pandocfilters 1.5.0 pypi_0 pypi parso 0.8.3 pypi_0 pypi pexpect 4.8.0 pypi_0 pypi pickleshare 0.7.5 pypi_0 pypi pillow 9.5.0 pypi_0 pypi pip 22.3.1 py37h06a4308_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main pkgutil-resolve-name 1.3.10 pypi_0 pypi prometheus-client 0.16.0 pypi_0 pypi prompt-toolkit 3.0.38 pypi_0 pypi protobuf 3.20.3 pypi_0 pypi psutil 5.9.5 pypi_0 pypi ptyprocess 0.7.0 pypi_0 pypi pyasn1 0.5.0 pypi_0 pypi pyasn1-modules 0.3.0 pypi_0 pypi pycparser 2.21 pypi_0 pypi pygments 2.15.1 pypi_0 pypi pymesh 1.0.2 pypi_0 pypi pyparsing 3.0.9 pypi_0 pypi pyrsistent 0.19.3 pypi_0 pypi python 3.7.16 h7a1cb2a_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main python-dateutil 2.8.2 pypi_0 pypi pyyaml 6.0 pypi_0 pypi pyzmq 25.0.2 pypi_0 pypi readline 8.2 h5eee18b_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main requests 2.28.2 pypi_0 pypi requests-oauthlib 1.3.1 pypi_0 pypi rsa 4.9 pypi_0 pypi scipy 1.7.3 pypi_0 pypi send2trash 1.8.0 pypi_0 pypi setuptools 65.6.3 py37h06a4308_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main six 1.16.0 pypi_0 pypi sniffio 1.3.0 pypi_0 pypi soupsieve 2.4.1 pypi_0 pypi sqlite 3.41.2 h5eee18b_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main tensorboard 2.11.2 pypi_0 pypi tensorboard-data-server 0.6.1 pypi_0 pypi tensorboard-plugin-wit 1.8.1 pypi_0 pypi tensorboardx 1.2 pypi_0 pypi termcolor 2.2.0 pypi_0 pypi terminado 0.17.1 pypi_0 pypi tinycss2 1.2.1 pypi_0 pypi tk 8.6.12 h1ccaba5_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main torch 1.11.0+cu113 pypi_0 pypi torchaudio 0.11.0+cu113 pypi_0 pypi torchvision 0.12.0+cu113 pypi_0 pypi tornado 6.2 pypi_0 pypi tqdm 4.65.0 pypi_0 pypi traitlets 5.9.0 pypi_0 pypi transforms3d 0.4.1 pypi_0 pypi typing-extensions 4.5.0 pypi_0 pypi urllib3 1.26.15 pypi_0 pypi wcwidth 0.2.6 pypi_0 pypi webencodings 0.5.1 pypi_0 pypi websocket-client 1.5.1 pypi_0 pypi werkzeug 2.2.3 pypi_0 pypi wheel 0.38.4 py37h06a4308_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main widgetsnbextension 4.0.7 pypi_0 pypi xz 5.2.10 h5eee18b_1 https://mirrors.ustc.edu.cn/anaconda/pkgs/main zipp 3.15.0 pypi_0 pypi zlib 1.2.13 h5eee18b_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
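On the Jittor suggestion above: the main practical difference from PyTorch is that the device is selected through a global flag rather than per-tensor .cuda() calls. A minimal sketch, assuming the standard Jittor API:

import jittor as jt

jt.flags.use_cuda = 1              # run on the GPU when one is available
x = jt.random([2, 2048, 3])        # a toy batch of 2 point clouds with 2048 points each
print(x.shape)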

yizhiboyan commented 1 year ago


Hi, I have encountered a new problem.

When I train the completion task with SPD_jittor, I get optimizer: None. I debugged around line 128 of train.py:

print('epoch: ', epoch_idx, 'optimizer: ', optimizer.param_groups[0].get('lr'))

Is the problem that the 'lr' key is not present in param_groups[0]?
Looking forward to your reply.
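In Jittor, None here most likely just means that no per-group learning rate was set, so param_groups[0] has no 'lr' key and the default stays on the optimizer itself. A sketch of a safer way to print it, under that assumption (the Linear model is a stand-in for the completion network):

import jittor as jt

model = jt.nn.Linear(3, 3)                              # stand-in model
optimizer = jt.optim.Adam(model.parameters(), lr=1e-4)

# fall back to the optimizer-level lr when the group dict has no 'lr' entry
lr = optimizer.param_groups[0].get('lr', optimizer.lr)
print('lr:', lr)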

by0619 commented 8 months ago


@yizhiboyan Hello, I have also obtained the checkpoint and log files, and I tried to save the test output point cloud as the author advised, but I am having difficulty storing the completed point cloud (.pcd) generated during testing. Would you please share your code for obtaining the output point cloud? Thanks a lot, and great appreciation!

yizhiboyan commented 8 months ago

Hello, you could try this in test.py.

If you do not have an output folder yet:

if not config.test.out_path:
    config.test.out_path = "./outpcdfiles"
output_dir = os.path.join(config.test.out_path, datetime.now().isoformat())

with tqdm(test_dataloader) as t:
    for model_idx, (taxonomy_id, model_id, data) in enumerate(t):
        taxonomy_id = taxonomy_id[0] if isinstance(taxonomy_id[0], str) else taxonomy_id[0].item()
        model_id = model_id[0]

        if config.dataset.name in ['PCN', 'Completion3D']:
            with torch.no_grad():
                for k, v in data.items():
                    data[k] = helpers.var_or_cuda(v)

                partial = data['partial_cloud']
                gt = data['gtcloud']

                b, n, _ = partial.shape

                pcds_pred = model(partial.contiguous())

                pcds_out = pcds_pred[-1]                       # final (3rd-stage) completed cloud
                pcds_out_num = pcds_out.cpu().numpy()
                pcds_out_num = pcds_out_num.reshape(-1, 3)     # flatten to an (N, 3) numpy array
                complete_output_folder = os.path.join(output_dir, taxonomy_id)
                os.makedirs(complete_output_folder, exist_ok=True)

                if config.dataset.id == "pcn":
                    pcd = o3d.geometry.PointCloud()
                    pcd.points = o3d.utility.Vector3dVector(pcds_out_num)

                    # write_point_cloud picks the format from the extension, so
                    # use '%s.pcd' % model_id here if you want .pcd instead of .ply
                    o3d.io.write_point_cloud(os.path.join(complete_output_folder, '%s.ply' % model_id), pcd)

                    loss_total, losses = completion_loss.get_loss(pcds_pred, partial, gt)

                    partial_matching = losses[0].item() * multiplier
                    loss_c = losses[1].item() * multiplier
                    loss_1 = losses[2].item() * multiplier
                    loss_2 = losses[3].item() * multiplier
                    loss_3 = losses[4].item() * multiplier
                    ……………………
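A quick way to check the exported files afterwards, using Open3D (the file name below is a placeholder for one produced by the loop above):

import open3d as o3d

path = "completed.pcd"                       # replace with a file written by the loop above
pcd = o3d.io.read_point_cloud(path)          # works for both .pcd and .ply
print(pcd)                                   # reports the number of points
# o3d.visualization.draw_geometries([pcd])   # optional interactive view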