filaPro / oneformer3d

[CVPR2024] OneFormer3D: One Transformer for Unified Point Cloud Segmentation

Using my own dataset #64

Closed Revol123 closed 2 months ago

Revol123 commented 4 months ago

Thank you for your great work on this project. I am currently working on my own indoor dataset in USDZ format, and I'd like to test it with your pre-trained model. However, I'm not sure what the next step should be.

Should I convert my dataset into a format like ScanNet or S3DIS? If so, which one is better? Alternatively, would it be feasible to modify test.py or even write a new tester to test my dataset? I'd appreciate any recommendations you might have.

Thank you for your assistance.

oneformer3d-contributor commented 4 months ago

Yes, converting to S3DIS format should be fine. We have already received comments that it works fine. For ScanNet we use unsupervised superpoint clustering, and you probably don't want to figure that out for your dataset.
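For reference, a minimal sketch of writing a point cloud into the S3DIS directory layout. This is illustrative only, not part of the oneformer3d repo: it assumes an `(N, 6)` array of `x y z r g b` values, and since there are no real instance labels it dumps all points into a single dummy `clutter_1.txt` annotation so the expected folder structure is complete.

```python
import numpy as np
from pathlib import Path

def export_s3dis_room(points, out_dir, room_name="office_1"):
    """Write an S3DIS-style room from an (N, 6) array of x y z r g b.

    Illustrative sketch: with no real instance labels, every point goes
    into one dummy 'clutter_1.txt' annotation file.
    """
    room = Path(out_dir) / room_name
    ann = room / "Annotations"
    ann.mkdir(parents=True, exist_ok=True)
    fmt = "%.3f %.3f %.3f %d %d %d"  # coordinates as floats, colors as ints
    np.savetxt(room / f"{room_name}.txt", points, fmt=fmt)
    np.savetxt(ann / "clutter_1.txt", points, fmt=fmt)

# Example: 1000 random points in a 5 m x 5 m x 3 m room with random colors.
pts = np.hstack([np.random.rand(1000, 3) * [5, 5, 3],
                 np.random.randint(0, 256, (1000, 3))])
export_s3dis_room(pts, "Area_1")
```

After this, the usual S3DIS preprocessing scripts can be pointed at `Area_1`.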

Revol123 commented 4 months ago

Thanks for your response. I have another question now. For example, in the S3DIS dataset structure:

Area_1
└── conferenceRoom_2
    ├── Annotations
    │   ├── Icon
    │   ├── beam_1.txt
    │   ├── board_1.txt
    │   └── the other txt files
    ├── Icon
    └── conferenceRoom_2.txt

I've obtained all the points with colors, like in conferenceRoom_2.txt above. But it seems there are no labels for each point in my dataset, so I don't have files in the Annotations folder. Is there another way I can test my dataset without using annotation files in your code? Or if you have any other recommendations, I would greatly appreciate it!

oneformer3d-contributor commented 3 months ago

Actually, the data loading pipeline in test.py should not require GT annotations. Just comment out the evaluation call at some point and it should be fine.
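A toy sketch of the idea (not the actual test.py code, and `fake_model` is a hypothetical stand-in for the real network): run inference over the scans and dump the predictions, with the metric computation commented out since it is the only step that needs ground truth.

```python
import pickle

def fake_model(points):
    # Hypothetical stand-in for the real forward pass:
    # predicts semantic class 0 for every input point.
    return {"sem_pred": [0] * len(points)}

# Two tiny "scans" standing in for the test dataloader.
dataloader = [[(0.0, 0.0, 0.0)], [(1.0, 1.0, 1.0), (2.0, 2.0, 2.0)]]

results = [fake_model(scan) for scan in dataloader]
# evaluator.evaluate(results)  # <- the call to comment out: it needs GT labels
with open("predictions.pkl", "wb") as f:
    pickle.dump(results, f)
```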

D3xter1922 commented 2 months ago

I was trying to train OneFormer3D with my own dataset, which I have converted into the S3DIS format. Training goes fine on a 24 GB card; however, it runs out of VRAM on the validation step. The batch size for both is 1. Is there a config parameter I am missing here?

oneformer3d-contributor commented 2 months ago

It is strange to run out of GPU memory on validation. If it is actually CPU memory, we recommend increasing inst_score_thr or decreasing topk_insts. For GPU memory, you can try decreasing num_points.
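A hypothetical excerpt of how these knobs might look in an mmdetection3d-style config; the exact keys and nesting may differ in your config file. `inst_score_thr` and `topk_insts` mainly bound the number of instance proposals kept after the forward pass (CPU memory), while subsampling with `PointSample` reduces the number of input points (GPU memory).

```python
# Illustrative config fragment only; check your own config for the real keys.
model = dict(
    test_cfg=dict(
        topk_insts=100,      # keep fewer instance proposals (CPU memory)
        inst_score_thr=0.1,  # drop low-confidence instances earlier
    ))

test_pipeline = [
    # ... loading transforms ...
    dict(type='PointSample', num_points=100000),  # subsample before the model
    # ... formatting transforms ...
]
```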

D3xter1922 commented 2 months ago

Thank you very much for the help. That worked, and I added a PointSample step to my test_pipeline. However, evaluation does not take the sampled points into account, and I get an error that gt_labels and seg_labels have different lengths. Is there a configuration option to make evaluation use the same sampled points? Thanks!
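A small sketch of the mismatch being described, with synthetic data: after PointSample, predictions cover only the sampled subset, while GT labels still cover the full cloud. One workaround (outside the repo's own evaluator, shown here with made-up array names) is to keep the sampling indices and index the GT with them before comparing.

```python
import numpy as np

rng = np.random.default_rng(0)
num_points, num_samples = 10000, 2500

gt_labels = rng.integers(0, 13, num_points)               # full-cloud GT
idx = rng.choice(num_points, num_samples, replace=False)  # PointSample indices
seg_pred = gt_labels[idx].copy()                          # stand-in predictions

# Naive comparison fails: len(gt_labels) != len(seg_pred), as in the error.
# Aligning GT to the same sampled subset makes the lengths match again.
gt_sampled = gt_labels[idx]
assert len(gt_sampled) == len(seg_pred)
acc = (seg_pred == gt_sampled).mean()
```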