YuehChuan opened 4 days ago
Hi, thank you for your interest. I have uploaded an example file, case0000070.nii_ds.ply. Here, the point cloud is downsampled to a size of 20000×3.
As for Public Dataset #4, you can use numpy.where()
to extract the point cloud (XYZ coordinates) from the segmented .nii files.
With these points, use Open3D's farthest_point_down_sample
and open3d.io.write_point_cloud
to produce the downsampled *.ply file.
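The steps above can be sketched roughly as follows (the file names, the non-zero foreground threshold, and the use of nibabel for loading the .nii volume are my assumptions, not something stated in the thread):

```python
# Sketch: extract segmented voxel coordinates with numpy.where(),
# then farthest-point downsample and save as .ply with Open3D.
import numpy as np


def extract_foreground_points(volume):
    """Return (N, 3) XYZ voxel coordinates of all non-zero (segmented) voxels."""
    xs, ys, zs = np.where(volume > 0)  # assumed: segmentation labels are non-zero
    return np.stack([xs, ys, zs], axis=1).astype(np.float64)


def downsample_to_ply(points, out_path, n_samples=20000):
    """Farthest-point downsample to n_samples points and write a .ply file."""
    import open3d as o3d  # imported lazily; only this step needs Open3D
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    pcd = pcd.farthest_point_down_sample(n_samples)
    o3d.io.write_point_cloud(out_path, pcd)


# Usage (assumed file names):
# import nibabel as nib
# vol = nib.load("case0000070.nii").get_fdata()
# pts = extract_foreground_points(vol)
# downsample_to_ply(pts, "case0000070.nii_ds.ply")
```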
Thank you for your support! It works like a charm! :)
Environment: Ubuntu 18.04, CUDA 11.4, 1080 Ti graphics card, MeshLab 2020.02-linux, Python 3.8. For the Python environment I use venv:

```
python -m venv venv
source venv/bin/activate
```

note: PyMCubes==0.1.4

```
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
pip install open3d
pip install tqdm pyhocon==0.3.57 trimesh PyMCubes==0.1.4 scipy matplotlib
```
Thank you for your feedback!
Hi chenhbo, thanks for sharing this project. Is the input the pose of the ultrasound probe plus the US images, without segmentation? Is there example input data? I found https://zenodo.org/records/8004388, where I can use the .ply files. Thank you. e.g.

```
python run_normalizedSpace.py --gpu 0 --conf confs/conf.conf --dataname case000007.nii --dir case000070.nii
```