lu-isza opened this issue 3 years ago
Hello, I have also encountered this problem. I downloaded the metadata, but it displayed "KeyError: 'bb867e2064014279863c71a29b1eb381'" when I ran convert_nuScenes.py. So is it OK to download the corresponding test set and dataset, or must I download the same dataset as you? Could you give me some advice? Thank you :)
Hi, I downloaded the nuScenes dataset, and when I ran convert_nuScenes.py it told me a file doesn't exist in the dataset (from what I remember the file is a frame from one of the RADAR folders, and its timestamp is '1531883530444336' in the sample_data JSON file from v1.0-trainval). I checked the dataset and this file is indeed not there. I have downloaded several different archives from the nuScenes website, but none of them contains this file. So I found which entry in the sample_data JSON file references this file and deleted that specific record. But when I ran convert_nuScenes.py again, the process was killed. I am not sure what happened. Can someone give me some suggestions? Thank you!
Same issue here. The reason is that the script keeps all annotations in memory, so memory runs out.
Hi @zye1996, do you have any idea how to solve this problem? Thanks
I solved the problem by using a server with more memory (at least 64 GB). Otherwise, the repo's data flow must be changed.
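For reference, one way the data flow could be changed is to write each scene's records out as soon as they are produced (e.g. as JSON Lines) instead of accumulating everything in one big structure. This is only a minimal sketch, not the repo's actual code: `convert_scene` and the record layout here are placeholders.

```python
import json

def convert_scene(scene_name):
    # Placeholder for the real per-scene conversion logic.
    return [{"scene": scene_name, "ann_id": i} for i in range(3)]

def convert_all(scene_names, out_path):
    # Append each scene's records to a JSON Lines file as soon as the
    # scene is processed, so only one scene is ever held in memory.
    with open(out_path, "w") as f:
        for name in scene_names:
            for record in convert_scene(name):
                f.write(json.dumps(record) + "\n")

convert_all(["scene-0001", "scene-0002"], "annotations.jsonl")
```

The output can later be read back line by line, again without loading everything at once.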
@zye1996 could you be more specific on how you solved this problem?
I have the same problem as you can see below.
Done loading in 24.039 seconds.
======
Reverse indexing ...
Done reverse indexing in 5.8 seconds.
======
scene_name scene-0001
Traceback (most recent call last):
File "convert_nuScenes.py", line 340, in <module>
main()
File "convert_nuScenes.py", line 154, in main
radar_pcs, _ = RadarPointCloud.from_file_multisweep(nusc,
File "/home/fabrizioschiano/repositories/CenterFusion/src/tools/../lib/utils/pointcloud.py", line 116, in from_file_multisweep
current_pc = cls.from_file(osp.join(nusc.dataroot, current_sd_rec['filename']))
File "/home/fabrizioschiano/.virtualenvs/centerfusion/lib/python3.8/site-packages/nuscenes/utils/data_classes.py", line 386, in from_file
with open(file_name, 'rb') as f:
FileNotFoundError: [Errno 2] No such file or directory: '../../data/nuscenes/samples/RADAR_FRONT_RIGHT/n015-2018-07-18-11-07-57+0800__RADAR_FRONT_RIGHT__1531883530444336.pcd'
I don't think we have the same problem. Simply check whether you can access the file the error reports from the directory where you are running the script.
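For anyone who wants to locate such missing files up front rather than hitting them one at a time, a small helper could scan the metadata and report entries whose file is absent. This is a hedged sketch: the path and the `"filename"` key are assumptions based on the nuScenes sample_data layout, not the repo's API.

```python
import json
import os

def find_missing(dataroot, sample_data_json):
    # Return the filenames listed in sample_data.json that do not
    # exist on disk under dataroot.
    with open(sample_data_json) as f:
        records = json.load(f)
    return [r["filename"] for r in records
            if not os.path.isfile(os.path.join(dataroot, r["filename"]))]
```

Running it once over v1.0-trainval would show every missing `.pcd` before starting the conversion.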
Hi @zye1996, thanks for your reply! I found a solution to my problem and reported it here
@zye1996, is this instead your problem? I am now experiencing the same, I think.
======
Loading NuScenes tables for version v1.0-trainval...
23 category,
8 attribute,
4 visibility,
64386 instance,
12 sensor,
10200 calibrated_sensor,
2631083 ego_pose,
68 log,
850 scene,
34149 sample,
2631083 sample_data,
1166187 sample_annotation,
4 map,
Done loading in 28.582 seconds.
======
Reverse indexing ...
Done reverse indexing in 6.2 seconds.
======
scene_name scene-0003
scene_name scene-0012
scene_name scene-0013
scene_name scene-0014
scene_name scene-0015
scene_name scene-0016
scene_name scene-0017
scene_name scene-0018
scene_name scene-0035
scene_name scene-0036
scene_name scene-0038
scene_name scene-0039
scene_name scene-0092
Killed
Yes, in this case you need to use a computer with more RAM.
Yeah, in the end I was able to run convert_nuScenes.py by making sure that my RAM was as free as possible before running it.
I was wondering: is it correct that all the annotations are kept in RAM?
Here
I solved the problem by using a server with more memory (at least 64GB). Otherwise the data flow must be changed for the repo
you said that the "data flow" must be changed, and I think I agree, since it does not seem right to me to keep all the annotations in RAM. However, I would be glad to hear the repo author's opinion on this matter.
@zye1996, do you have any idea how you would do that? Thanks for your help!
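For what it's worth, one way to confirm the out-of-memory hypothesis before the kernel kills the process is to print the script's peak resident set size at a few checkpoints. This is a Unix-only sketch using the standard `resource` module, not something the repo provides.

```python
import resource
import sys

def peak_rss_mb():
    # ru_maxrss is reported in kilobytes on Linux and bytes on macOS.
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == "darwin":
        rss //= 1024
    return rss // 1024

print(f"peak RSS so far: {peak_rss_mb()} MB")
```

Calling this after loading the tables and again after each scene would show how fast memory grows.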
Hello, have you solved this problem by now? If so, could you please share how you dealt with it? Thank you