chenhbo / FUNSR

MIT License

About the input #1

Open YuehChuan opened 4 days ago

YuehChuan commented 4 days ago

Hi chenhbo, thanks for sharing this project. Is the input the pose of the ultrasound probe plus US images without segmentation? Is there any example input data? I found this: https://zenodo.org/records/8004388. Where can I get the .ply files to use? Thank you. E.g. python run_normalizedSpace.py --gpu 0 --conf confs/conf.conf --dataname case000007.nii --dir case000070.nii

chenhbo commented 3 days ago

Hi, thank you for your interest. I uploaded an example file, case0000070.nii_ds.ply. Here, the point cloud is downsampled to a size of 20000*3 (points x XYZ).

As for Public Dataset #4, you can use numpy.where() to extract the point cloud (XYZ coordinates) from the segmented .nii files. Then use Open3D's farthest_point_down_sample and open3d.io.write_point_cloud to produce the downsampled *.ply file.
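The steps above could be sketched like this (a minimal, untested sketch; the function and file names are my own, nibabel is assumed as the .nii reader, and farthest_point_down_sample requires a recent Open3D release):

```python
import numpy as np


def mask_to_points(vol):
    """Return (N, 3) XYZ voxel coordinates of all nonzero voxels in a volume."""
    # np.where gives one index array per axis; stack them into (N, 3) rows.
    return np.stack(np.where(vol > 0), axis=1).astype(np.float64)


def nii_to_ply(nii_path, ply_path, n_points=20000):
    """Extract a segmented structure from a .nii file and save a downsampled .ply."""
    import nibabel as nib  # assumed reader for .nii files
    import open3d as o3d

    vol = nib.load(nii_path).get_fdata()
    xyz = mask_to_points(vol)

    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)
    # Farthest-point sampling keeps the cloud's coverage while reducing its size.
    pcd = pcd.farthest_point_down_sample(n_points)
    o3d.io.write_point_cloud(ply_path, pcd)
```

Note that this uses raw voxel indices as coordinates; if physical spacing matters, the volume's affine (nib.load(...).affine) would need to be applied to the points first.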

YuehChuan commented 2 days ago

Thank you for your support! It works like a charm! :)

YuehChuan commented 2 days ago

[screenshot: reconstructed mesh]

Environment:
Ubuntu 18.04
CUDA 11.4
1080 Ti graphics card
MeshLab 2020.02-linux
Python 3.8

For the Python environment I use venv:

python -m venv venv
source venv/bin/activate

note: PyMCubes==0.1.4

pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
pip install open3d
pip install tqdm pyhocon==0.3.57 trimesh PyMCubes==0.1.4 scipy matplotlib

chenhbo commented 2 days ago

Thank you for your feedback!