Closed kcabhishek closed 3 years ago
You need just a few modifications:

- partition/partition.py, line 42, if you don't want to use a train/test split;
- change read_las to read_ply in provider.py, if your cloud is a ply, so that it reads your data format.

Then run:

python partition/partition.py --dataset custom_dataset --ROOT_PATH data_location --voxel_width 0.05 --reg_strength 0.5
The value of voxel_width should be adapted so that your cloud has a more or less uniform sampling precision, and reg_strength should be chosen depending on the desired granularity of the partition.
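As a rough illustration of what voxel_width controls, here is a hypothetical voxel-grid subsampling sketch (this is not the repo's actual pruning code): every point falling in the same cube of side voxel_width is merged into the centroid of its voxel, so a larger value gives a sparser, more uniform cloud.

```python
from collections import defaultdict

def voxel_subsample(points, voxel_width):
    """Merge all points sharing a voxel of side `voxel_width` into their centroid."""
    bins = defaultdict(list)
    for x, y, z in points:
        # Integer voxel index along each axis (floor division handles negatives).
        key = (int(x // voxel_width), int(y // voxel_width), int(z // voxel_width))
        bins[key].append((x, y, z))
    # One averaged point per occupied voxel.
    return [tuple(sum(c) / len(pts) for c in zip(*pts)) for pts in bins.values()]

# A cloud sampled at ~1 cm, pruned with a 5 cm voxel: the two nearby
# points collapse into one, the distant one survives on its own.
cloud = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0), (0.2, 0.0, 0.0)]
print(voxel_subsample(cloud, 0.05))
```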
Then use the function visualize with --output_type ip to see the results.
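For reference, a minimal stdlib-only reader for ASCII .ply clouds could look like the sketch below. The repo's own read_ply in provider.py is based on the plyfile package and returns more fields, so treat this purely as an illustration of what a custom reader needs to produce (the xyz coordinates):

```python
def read_ascii_ply(path):
    """Return the xyz coordinates of an ASCII PLY file as a list of tuples."""
    with open(path) as f:
        assert f.readline().strip() == "ply", "not a PLY file"
        n_vertices, props, in_vertex = 0, [], False
        while True:
            line = f.readline().strip()
            if line.startswith("element"):
                # Only collect the properties of the vertex element.
                in_vertex = line.split()[1] == "vertex"
                if in_vertex:
                    n_vertices = int(line.split()[2])
            elif line.startswith("property") and in_vertex:
                props.append(line.split()[-1])  # property names, in order
            elif line == "end_header":
                break
        ix, iy, iz = props.index("x"), props.index("y"), props.index("z")
        xyz = []
        for _ in range(n_vertices):
            vals = f.readline().split()
            xyz.append((float(vals[ix]), float(vals[iy]), float(vals[iz])))
    return xyz
```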
I am running with this:
python partition/partition.py --dataset custom_dataset --ROOT_PATH partition/data/ --voxal_width 0.05 --reg_strength 0.5
Let me explain what I am trying to achieve here so that I won't waste your precious time asking for something this code isn't supposed to do. I have a 3D model of an office, and my goal is to partition or segment the objects in that office: chairs, tables, etc. Am I in the right place? Thank you for your help though. I really appreciate it.
It's voxel, not voxal.
> Am I in the right place?
Sure, provided that your office is similar to the ones in S3DIS, or that you have enough training data to train a model from scratch. This repo won't give you instance/object segmentation, however, just the class of each point.
What kind of data do I need to train my model from scratch? Also, what do you mean by just the class of each point?
You need at least a dozen scenes with points annotated by semantic class.
The output of SPG is a class for each point. If you want object detection or instance segmentation, you should use another network. Check out torch-points3d, for example; they have most tasks implemented.
I am trying to run the partition on a .ply file; will I be able to do that with this? If so, how?