Sorry, my fault. The code was released in a hurry and contains some bugs. I will fix it soon.
Hi, I fixed this issue. Now you can use ray_metrics.py to calculate RayIoU.
First, you need to save the predictions (200x200x16, np.uint8) in compressed npz format, using the sample token as the filename. For example (a minimal saving sketch follows the listing below):
prediction/fbocc
├── 000681a060c04755a1537cf83b53ba57.npz
├── 000868a72138448191b4092f75ed7776.npz
├── 0017c2623c914571a1ff2a37f034ffd7.npz
├── ...
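For reference, here is a minimal saving sketch, assuming your per-sample predictions already sit in a Python dict keyed by sample token (the dict name is made up for illustration, and the array key stored inside the .npz may need to match whatever ray_metrics.py actually loads):

import os
import numpy as np

pred_dir = 'prediction/fbocc'
os.makedirs(pred_dir, exist_ok=True)

# hypothetical container: {sample_token: (200, 200, 16) array of semantic labels}
predictions_by_token = {
    '000681a060c04755a1537cf83b53ba57': np.zeros((200, 200, 16), dtype=np.uint8),
}

for sample_token, occ_pred in predictions_by_token.items():
    occ_pred = occ_pred.astype(np.uint8)        # must be uint8
    assert occ_pred.shape == (200, 200, 16)     # must be 200x200x16
    # one compressed .npz per sample, named by the sample token
    np.savez_compressed(os.path.join(pred_dir, sample_token + '.npz'), occ_pred)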
Then, you can use ray_metrics.py to evaluate on our new metric:
python ray_metrics.py --pred-dir prediction/sparseocc
BTW, you can also use old_metrics.py to evaluate on the old voxel-based mIoU. The usage is the same.
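Since the usage is the same, the invocation would presumably just swap the script name, for example:

python old_metrics.py --pred-dir prediction/sparseocc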
We will add this guide to the README.
Thank you for the fast response! I have one more question: how can I generate 'nuscenes_infos_val.pkl'? The script 'gen_sweep_info.py' also needs 'nuscenes_infos_val.pkl' and 'nuscenes_infos_train.pkl', but I can't find how to generate them. The gdrive link only provides 'nuscenes_infos_val_sweep.pkl'. Is it the same as 'nuscenes_infos_val.pkl'?
I tried to run 'ray_metrics.py' as you guided, changing 'nuscenes_infos_val.pkl' to 'nuscenes_infos_val_sweep.pkl' in line 13, but the following error comes out:
Using /home/user/.cache/torch_extensions/py38_cu118 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/user/.cache/torch_extensions/py38_cu118/dvr/build.ninja...
Building extension module dvr...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module dvr...
Traceback (most recent call last):
  File "./ray_metrics_new.py", line 59, in ...
  ...
RuntimeError: Too many open files. Communication with the workers is no longer possible. Please increase the limit using ulimit -n in the shell or change the sharing strategy by calling torch.multiprocessing.set_sharing_strategy('file_system') at the beginning of your code
You need to preprocess the dataset using mmdet3d; then you will get nuscenes_infos_val.pkl. Using nuscenes_infos_val_sweep.pkl also works. Your error is due to your OS settings: you can set ulimit -n 10240 to fix it.
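If it helps, the standard mmdetection3d nuScenes preprocessing command looks roughly like this (the paths and the --extra-tag value are assumptions; check the mmdet3d data preparation docs and this repo's README for the exact invocation):

python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes

This should produce nuscenes_infos_train.pkl and nuscenes_infos_val.pkl under the output directory.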
Thanks a lot! I solved my problem by adding
import torch.multiprocessing
torch.multiprocessing.set_sharing_strategy('file_system')
and it works well. Thank you again for the super rapid response :)
Thank you for sharing your awesome work.
I'm trying to run "ray_metrics.py", but there is no module named nusc and no class nuSceneDataset (referenced in line 13) anywhere in the directory. Can you help me?