astra-vision / PaSCo

[CVPR 2024 Oral, Best Paper Award Candidate] Official repository of "PaSCo: Urban 3D Panoptic Scene Completion with Uncertainty Awareness"
https://astra-vision.github.io/PaSCo/
Apache License 2.0

Issue with Extract point features #17

Open the-king-of-yhq opened 3 days ago

the-king-of-yhq commented 3 days ago

```
flatten = torch.sparse_coo_tensor(indices_non_zeros, weight_per_point, matrix_shape)
RuntimeError: size is inconsistent with indices: for dim 0, size is 2 but found index 1100320154
```
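
This error means that at least one of the COO indices along dim 0 is out of range for the declared size (2). A pure-Python sketch of the invariant that `torch.sparse_coo_tensor` enforces (`check_coo_indices` is a hypothetical helper for illustration, not part of PaSCo or PyTorch):

```python
def check_coo_indices(indices, shape):
    """Validate COO indices against the declared dense shape.

    indices[d] holds the coordinates of all nonzero entries along dim d,
    mirroring the (ndim, nnz) layout torch.sparse_coo_tensor expects.
    """
    for d, row in enumerate(indices):
        for idx in row:
            if not 0 <= idx < shape[d]:
                raise RuntimeError(
                    f"size is inconsistent with indices: for dim {d}, "
                    f"size is {shape[d]} but found index {idx}"
                )

# Valid indices pass silently; an index like 1100320154 along a dim of size 2
# triggers the same RuntimeError message seen above.
check_coo_indices([[0, 1], [2, 0]], (2, 3))
```

An absurdly large index like this usually points to corrupted or uninitialized index data upstream rather than an off-by-one.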

anhquancao commented 3 days ago

Which command did you run?

the-king-of-yhq commented 3 days ago

I ran this command:

```shell
cd PaSCo/WaffleIron_mod
python extract_point_features.py \
    --path_dataset /gpfsdswork/dataset/SemanticKITTI \
    --ckpt pretrained_models/WaffleIron-48-256__kitti/ckpt_last.pth \
    --config configs/WaffleIron-48-256__kitti.yaml \
    --result_folder /lustre/fsn1/projects/rech/kvd/uyl37fq/pasco_preprocess/kitti/waffleiron_v2 \
    --phase val \
    --num_workers 3 \
    --num_votes 10 \
    --batch_size 2
```

anhquancao commented 3 days ago

Are you using PyTorch 1.11.0? And can you check that `self.im_idx` is not empty?

the-king-of-yhq commented 3 days ago

I get the same error with both PyTorch 1.13.0 and 1.11.0.

anhquancao commented 3 days ago

I'm not sure why; I'm asking the author of WaffleIron.

the-king-of-yhq commented 3 days ago

OK! The error occurs shortly after the run starts:

```
0%|          | 2/23245 [00:03<12:14:45, 1.90s/it]
flatten = torch.sparse_coo_tensor(indices_non_zeros, weight_per_point, matrix_shape)
RuntimeError: size is inconsistent with indices: for dim 0, size is 2 but found index 140137744059904
```

the-king-of-yhq commented 3 days ago

Alternatively, could you provide the generated waffleiron_v2 data?

anhquancao commented 3 days ago

Sorry, I didn't keep the files, as our cluster has a policy of deleting data that has been unused for 30 days. Still, you will need this data for the later experiments with Robo3D.

the-king-of-yhq commented 3 days ago

Thank you for your reply. I'll look into the error first.

anhquancao commented 3 days ago

Another solution is to use the official code of WaffleIron and modify the file `eval_kitti.py` following `extract_point_features.py`.

Also, modify the KITTI loader following here to keep only every fifth scan, and here to also extract features for the training set.
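
The "every fifth scan" filter could look like the sketch below, assuming SemanticKITTI-style filenames like `000000.bin` (the helper name is hypothetical, not WaffleIron code):

```python
import os

def keep_every_fifth(scan_files):
    """Keep only scans whose frame id is a multiple of 5 (0, 5, 10, ...)."""
    return [
        f for f in scan_files
        if int(os.path.splitext(os.path.basename(f))[0]) % 5 == 0
    ]

print(keep_every_fifth(["000000.bin", "000001.bin", "000005.bin", "000012.bin"]))
# -> ['000000.bin', '000005.bin']
```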

This might take 1 or 2 days of work.

Then you can use the same command:

```shell
python extract_point_features.py \
    --path_dataset /gpfsdswork/dataset/SemanticKITTI \
    --ckpt pretrained_models/WaffleIron-48-256__kitti/ckpt_last.pth \
    --config configs/WaffleIron-48-256__kitti.yaml \
    --result_folder /lustre/fsn1/projects/rech/kvd/uyl37fq/pasco_preprocess/kitti/waffleiron_v2 \
    --phase val \
    --num_workers 3 \
    --num_votes 10 \
    --batch_size 2
```

the-king-of-yhq commented 3 days ago

Hi! How much storage does waffleiron_v2 require?

anhquancao commented 3 days ago

Sorry, I never measured it.

the-king-of-yhq commented 3 days ago

Is 1 TB enough?

```
fq/pasco_preprocess/kitti/waffleiron_v2/sequences/00/seg_feats_tta/003615.pkl
16%|███████▊ | 3624/23245 [28:14<2:32:52, 2.14it/s]
```

It needs about 500 GB.

anhquancao commented 3 days ago

You should only save ~4000 files. Did you add this filter? We only need to extract features for scans 0, 5, 10, 15, ....

Additionally, you can optimize the data types used here. For example, `vote` -> `uint8` and `embedding` -> `torch.float16`.
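
A NumPy sketch of what that downcasting buys (the array shapes are illustrative assumptions, not taken from the PaSCo code):

```python
import numpy as np

# Hypothetical per-scan arrays: 10 voting rounds over N points, 256-dim embeddings.
N = 120_000
votes = np.random.randint(0, 20, size=(10, N)).astype(np.int64)
embedding = np.random.randn(10, N, 256).astype(np.float32)

# Downcast: small vote counts fit in uint8 (8x smaller than int64),
# and embeddings tolerate float16 (2x smaller than float32).
votes_small = votes.astype(np.uint8)
embedding_small = embedding.astype(np.float16)

before = votes.nbytes + embedding.nbytes
after = votes_small.nbytes + embedding_small.nbytes
print(f"{before / 1e6:.0f} MB -> {after / 1e6:.0f} MB")
```

Since the embeddings dominate the storage, the overall saving is close to the 2x from `float16` rather than the 8x from `uint8`.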

the-king-of-yhq commented 3 days ago

I mean, shouldn't it save 23245/5 = 4649 files? But I already need about 500 GB after running to 3624, so would it need about 4 TB in total?
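
A rough extrapolation from the numbers in this thread, assuming file size is roughly constant per saved scan:

```python
# Back-of-the-envelope, using the numbers reported in this thread.
gb_observed = 500        # storage used after saving 3624 scans
scans_observed = 3624
gb_per_scan = gb_observed / scans_observed

all_scans = 23245              # every scan of the split (no filter)
filtered_scans = 23245 // 5    # only scans 0, 5, 10, ...

print(f"all scans: ~{gb_per_scan * all_scans / 1000:.1f} TB")   # ~3.2 TB
print(f"every 5th: ~{gb_per_scan * filtered_scans:.0f} GB")     # ~641 GB
```

So the multi-TB figure only arises if every scan is saved; with the every-fifth-scan filter the total stays in the hundreds of GB.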

anhquancao commented 3 days ago

I didn't know it was that heavy. I suggest optimizing the data types here.

Also, I save the embeddings of 10 runs, so each has size (10, N, 256). During training, I randomly take one of them. Thus, maybe you can keep only 5 of them to save space.
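
A NumPy sketch of this idea (shapes follow the (10, N, 256) layout described above; the variable names and sizes are illustrative, not PaSCo code):

```python
import numpy as np

# Saved per-scan embeddings from 10 TTA runs: shape (10, N, 256).
N = 1000
embedding = np.random.randn(10, N, 256).astype(np.float16)

# To halve the disk footprint, keep only 5 of the 10 runs before pickling.
embedding_kept = embedding[:5]

# During training, one run is drawn at random from whatever was kept.
rng = np.random.default_rng(0)
run = embedding_kept[rng.integers(len(embedding_kept))]
print(run.shape)  # (1000, 256)
```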

anhquancao commented 3 days ago

By the way, could you tell me how you solved the issue?

the-king-of-yhq commented 2 days ago

Sorry, I could not solve it. I optimized the data types (`vote` -> `uint8`, `embedding` -> `torch.float16`), but a single pkl, e.g., 000000.pkl, still needs 600 MB.

anhquancao commented 2 days ago

I suggest changing `"embedding": embedding` to `"embedding": embedding[:1]`. This will probably decrease the storage by ~10x.
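
A quick NumPy sketch of why the slice gives roughly a 10x reduction (the array size here is illustrative, not from the actual pkl files):

```python
import numpy as np

# (10, N, 256) float16 embeddings, as saved per scan.
N = 50_000
embedding = np.random.randn(10, N, 256).astype(np.float16)

# Keeping only the first of the 10 runs cuts the stored array by exactly 10x;
# the rest of the pkl (votes, metadata) is comparatively small.
small = embedding[:1]
print(embedding.nbytes // 2**20, "MB ->", small.nbytes // 2**20, "MB")  # 244 MB -> 24 MB
```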

the-king-of-yhq commented 2 days ago

Does this affect the accuracy of the results?

anhquancao commented 2 days ago

If you retrain the model, I would say no. As for a pretrained model, I'm not certain -- you would need to evaluate it. In my view, this would primarily impact the variance of the predictions, which could influence uncertainty estimation. However, since I am using test-time augmentation, I don't expect the effect to be significant.