tianweiy / CenterPoint


How can I get the "demo/nuScenes/demo_infos.pkl" file in the demo_config.py! #200

Closed s1mpleee closed 2 years ago

s1mpleee commented 3 years ago

I tried to replace demo_infos.pkl with the anno file below:

train_anno = "demo/nuScenes/demo_infos.pkl"
val_anno = "demo/nuScenes/demo_infos.pkl"
# val_anno = "/media/adas/ubuntu3/nuscenes/infos_val_10sweeps_withvelo_filter_True.pkl"
test_anno = None

but I got an error like this:

Traceback (most recent call last):
  File "tools/demo.py", line 130, in <module>
    main()
  File "tools/demo.py", line 82, in main
    for i, data_batch in enumerate(data_loader):
  File "/home/adas/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/home/adas/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1182, in _next_data
    idx, data = self._get_data()
  File "/home/adas/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1148, in _get_data
    success, data = self._try_get_data()
  File "/home/adas/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1019, in _try_get_data
    " at the beginning of your code") from None
RuntimeError: Too many open files. Communication with the workers is no longer possible. Please increase the limit using `ulimit -n` in the shell or change the sharing strategy by calling `torch.multiprocessing.set_sharing_strategy('file_system')` at the beginning of your code

I assumed that infos_val_10sweeps_withvelo_filter_True.pkl was too big to use in demo.py.
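
As a side note, the RuntimeError above names its own workarounds: raising the open-file limit with `ulimit -n`, or changing PyTorch's sharing strategy. A minimal sketch of the latter, assuming it is placed before any DataLoader is created (e.g. near the top of tools/demo.py):

import torch.multiprocessing

# Avoid "Too many open files" from DataLoader workers by sharing tensors
# through the file system instead of per-tensor file descriptors.
torch.multiprocessing.set_sharing_strategy('file_system')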

tianweiy commented 3 years ago

This demo file is no longer supported. You can see the instructions in the earlier branch if you want to use it: https://github.com/tianweiy/CenterPoint/tree/607f01a46a447d83c12d56eb9f699d69422ebd0f

It has a link to the demo folder.

s1mpleee commented 3 years ago

Thanks for your reply. Actually, I really want to know how to test the model's speed during inference. Which file should I run to get the FPS result?

tianweiy commented 3 years ago

This is a bit complicated, and most previously reported numbers are a bit noisy. The less rigorous approach (the one we used in the original paper) is just to run dist_test.py with the flag "--speed_test" and a batch size of 1. But this result is not that accurate, as we hide the latency of voxelization in the dataloader (through multiprocessing).

To be more rigorous, the way Waymo benchmarks latency is through a script like this: https://github.com/tianweiy/CenterPoint/blob/1acc72ac7f1e9e21d0c38a2c4353e8b97f343336/tools/simple_inference_waymo.py#L139

You would add a function test_time:

import time

import torch


def test_time(func):
    # Time a single call to `func`, synchronizing with the GPU so the
    # measurement includes all CUDA work launched by the call.
    def inner(*args, **kwargs):
        torch.cuda.synchronize()
        tic = time.perf_counter()
        data_dict = func(*args, **kwargs)
        torch.cuda.synchronize()
        print(time.perf_counter() - tic)
        return data_dict

    return inner

and do

test_time(process_example)(...)

This way, we don't count the I/O time, but all the other latency should be accurate.
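
If a single averaged number is wanted instead of one print per call, a small variation of the decorator can collect the timings and skip a few warm-up iterations. This is only a sketch; make_timer, the warmup count, and the process_example usage below are assumptions for illustration, not part of the repo:

import time

import torch


def make_timer(func, warmup=10):
    # Wrap `func`, recording the latency of each call after `warmup` warm-up calls.
    times = []
    state = {'calls': 0}

    def inner(*args, **kwargs):
        torch.cuda.synchronize()
        tic = time.perf_counter()
        out = func(*args, **kwargs)
        torch.cuda.synchronize()
        state['calls'] += 1
        if state['calls'] > warmup:  # ignore CUDA warm-up / autotuning iterations
            times.append(time.perf_counter() - tic)
        return out

    return inner, times


# hypothetical usage: timed_fn, times = make_timer(process_example)
# run the inference loop with timed_fn, then report sum(times) / len(times)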

s1mpleee commented 3 years ago

Thanks a lot. Another issue I ran into is that when I run single_inference.py, I couldn't install the Python libraries it needs:

import rospy
import ros_numpy

and

from std_msgs.msg import Header
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2, PointField
from jsk_recognition_msgs.msg import BoundingBox, BoundingBoxArray

Is there an install guide or something that can help me through it?

tianweiy commented 3 years ago

Hi, actually I am not really sure you want to use this file. It is for inference with ROS. dist_test.py is the default file for inference using standard PyTorch.