drprojects / superpoint_transformer

Official PyTorch implementation of Superpoint Transformer introduced in [ICCV'23] "Efficient 3D Semantic Segmentation with Superpoint Transformer" and SuperCluster introduced in [3DV'24 Oral] "Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering"

Test with custom dataset, and running error with existing dataset #31

Closed: chqzeng closed this issue 1 year ago

chqzeng commented 1 year ago

Hi @drprojects, great project! I'm quite new to this type of model; a few questions I hope you can help with:

Thanks, Chui

drprojects commented 1 year ago

Hi @chqzeng, thanks for your interest in the project!

> I have some point clouds in (x, y, z, intensity) format. Can I use one of the existing models (e.g. S3DIS) to do the training and apply it to my custom dataset? Which script should I use or modify?

This will depend on your dataset:

> My dataset is quite special (cup-like shapes). Do I necessarily need custom training? If so, what format do I need to prepare the training dataset in?

For creating your own dataset, please refer to the Dataset documentation. If you have (x, y, z, intensity) attributes, you might want to take inspiration from DALES too. I draw your attention to the fact that SPT is primarily designed for large 3D scenes containing multiple objects of complex geometry. If by "cup shapes" you mean your dataset is composed of isolated objects (à la ModelNet), then this project may be overkill for your needs... I don't know what your task is, but have you tried a simple PointNet or PointNet++ first?
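If your files are plain text clouds, a minimal reader might look like the sketch below. This is not the repo's actual reader: the file layout (one `x y z intensity` record per line), the function name, and the normalization step are illustrative assumptions.

```python
import numpy as np
import torch

def read_xyzi_cloud(path):
    """Read a whitespace-separated text file with one 'x y z intensity'
    record per line, and return position and intensity tensors."""
    raw = np.loadtxt(path, dtype=np.float32)      # shape: [N, 4]
    pos = torch.from_numpy(raw[:, :3])            # xyz coordinates
    intensity = torch.from_numpy(raw[:, 3:4])     # per-point intensity
    # Scaling intensity to [0, 1] is a common preprocessing choice
    intensity = intensity / intensity.max().clamp(min=1e-6)
    return pos, intensity
```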

> run command: `python src/train.py experiment=s3dis_11g datamodule.fold=5`
> run command: `python src/eval.py experiment=s3dis datamodule.fold=5 ckpt_path=/path/to/your/checkpoint.ckpt`

Both errors seem connected: there seems to be a problem when reading the raw files. This process is done in parallel to save time (reading the S3DIS file format is quite laborious). From the tracebacks you shared, I cannot tell what is happening. Two suggestions:

- Double-check that your raw S3DIS data follows the expected folder structure.
- Try running the preprocessing with a single process (processes=1) to get a clearer traceback; a sanity-check sketch follows this list.
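For instance, a minimal sketch for parsing a single raw annotation file directly, outside the parallel pipeline, to see whether the file itself is at fault. The path below is a placeholder; S3DIS raw files are plain text with one `x y z r g b` record per line.

```python
import numpy as np

# Placeholder path to one raw S3DIS annotation file
path = "Area_5/office_1/Annotations/chair_1.txt"

try:
    pts = np.loadtxt(path, dtype=np.float32)
    print(f"OK: {pts.shape[0]} points, {pts.shape[1]} columns")
except ValueError as err:
    # S3DIS is known to contain a few malformed lines (e.g. a stray
    # character in one Area_5 file), which crash strict parsers
    print(f"Parse error in {path}: {err}")
```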

Best, Damien

chqzeng commented 1 year ago

Thank you Damien for the quick and detailed feedback.

I am not very familiar with your current data processing flow, so yes, I will take some time to look into the dataset preparation info you provided. My data is closer to DALES: not a single cup, but a relatively complicated scene with many culvert pipes for road construction, with pebbles in the background. I would like to try your model mostly for its light weight and smaller parameter count.

For the running error, I confirm the data structure is correct, and I tried setting processes=1 but still have the issue. I am running in Windows Subsystem for Linux (WSL); I am not sure whether that is the reason. The /dev/shm is 16 GB. I attached two new runs on two different machines, both under WSL, with the failing logs. If you have any idea what causes the error, it would be greatly appreciated. WSL_run_log.txt WSL_run_log2.txt
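A quick way to inspect the shared-memory mount that PyTorch DataLoader workers rely on, assuming the standard /dev/shm mount point:

```python
import shutil

# Query the shared-memory mount used by DataLoader workers for
# inter-process tensor transfer (assumes the standard /dev/shm mount)
total, used, free = shutil.disk_usage("/dev/shm")
print(f"/dev/shm: {total / 2**30:.1f} GiB total, {free / 2**30:.1f} GiB free")
```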

Thanks, Chui

drprojects commented 1 year ago

I can't provide much support for Windows-based machines, Docker environments, etc. The code was only tested on Linux machines.

Looking at your second log file:

```
RuntimeError: DataLoader worker (pid 10844) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.
```

It seems you have CPU RAM issues, not just GPU memory issues. Check out this issue for directions.
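For reference, the generic PyTorch workarounds for this error are to raise the shared-memory limit or to run the DataLoader without worker processes. Below is a minimal sketch of the latter on a toy dataset; whether and where this repo's configs expose such a knob is not confirmed here.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset standing in for the real one
dataset = TensorDataset(torch.randn(8, 3))

# With num_workers=0, loading runs in the main process, so no shared
# memory is used for inter-process tensor transfer
loader = DataLoader(dataset, batch_size=2, num_workers=0)

for (batch,) in loader:
    pass  # training / evaluation step goes here
```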

PS: if you are interested in this project, don't forget to give it a ⭐, it matters to us!