**Open** · MangoDragon opened this issue 7 months ago
Hi @MangoDragon
Thank you for your interest in SeaBird.
> Is it possible to test HoP without using the full (very large) dataset from nuscenes or training it? I would just like to give it data from a smaller dataset such as the mini subset (or my own recordings etc).
The `mini` subset should be possible. Please check the converter file. The current converter file supports the `train`, `val` and `test` sets. To support the `mini` split, extend the code after this line:
```python
elif nuscenes_version == 'v1.0-mini':
    set = 'mini'
    dataset = pickle.load(
        open('./data/nuscenes/%s_infos_%s.pkl' % (extra_tag, set), 'rb'))
    for id in range(len(dataset['infos'])):
        if id % 10 == 0:
            print('%d/%d' % (id, len(dataset['infos'])))
        info = dataset['infos'][id]
        # get sweep adjacent frame info
        sample = nuscenes.get('sample', info['token'])
        dataset['infos'][id]['scene_token'] = sample['scene_token']
    with open('./data/nuscenes/%s_infos_%s.pkl' % (extra_tag, set),
              'wb') as fid:
        pickle.dump(dataset, fid)
```
> see the 3D bounding boxes.

The HoP baseline builds on the mmdetection3d codebase, so you can follow the mmdetection3d visualization guide to visualize the 3D boxes.
PS: It would be great if you could support the SeaBird repo by starring it.
Thank you for your reply! I starred the repo too. What should I use for the pkl files? Are they created from the large dataset, or do I need to download them from somewhere?
> What should I use for the pkl?
The `nuscenes_data_prep()` function should get the pkl files. The `add_ann_adj_info()` function in the converter file works on the pkl files created by `nuscenes_data_prep()`.
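For readers unfamiliar with these files: judging from the converter snippet in this thread, the infos pkl is a dict with an `infos` list of per-sample dicts (each holding at least a `token`). A minimal, self-contained sketch with a dummy file — the filename and keys here are assumptions for illustration, not the real nuScenes schema:

```python
import os
import pickle
import tempfile

# Hypothetical miniature of the infos pkl layout that add_ann_adj_info()
# appears to read: a dict with an 'infos' list of per-sample dicts.
dummy = {'infos': [{'token': 'sample_0'}, {'token': 'sample_1'}]}

# Made-up path; the real files live under ./data/nuscenes/.
path = os.path.join(tempfile.gettempdir(), 'bevdetv2-nuscenes_infos_mini.pkl')
with open(path, 'wb') as fid:
    pickle.dump(dummy, fid)

# Inspect what a converter step would see after loading.
with open(path, 'rb') as fid:
    dataset = pickle.load(fid)

print(len(dataset['infos']))        # number of samples in the split
print(sorted(dataset['infos'][0]))  # keys available per sample
```

Loading a pkl like this and printing its top-level keys is also a quick way to check what `nuscenes_data_prep()` actually produced before running the converter on it.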
> I starred the repo too.
Thank you for your support :smile:
Using the following code:

```python
elif nuscenes_version == 'v1.0-mini':
    # Allow for the mini dataset -------------------------------
    set = 'mini'
    dataset = pickle.load(
        open('./data/nuscenes/%s_infos_%s.pkl' % (extra_tag, set), 'rb'))
    for id in range(len(dataset['infos'])):
        if id % 10 == 0:
            print('%d/%d' % (id, len(dataset['infos'])))
        info = dataset['infos'][id]
        # get sweep adjacent frame info
        sample = nuscenes.get('sample', info['token'])
        dataset['infos'][id]['scene_token'] = sample['scene_token']
    with open('./data/nuscenes/%s_infos_%s.pkl' % (extra_tag, set),
              'wb') as fid:
        pickle.dump(dataset, fid)
else:
    raise NotImplementedError(f'{nuscenes_version} not supported')
```
and

```python
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Data converter arg parser')
    parser.add_argument('--split', default='trainval', help='split of the dataset')
    args = parser.parse_args()

    dataset = 'nuscenes'
    version = 'v1.0'
    assert args.split in ['trainval', 'test', 'mini']  # added 'mini'
```
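The `--split` flag ultimately has to select a full nuScenes version string such as `v1.0-mini` (the `'v1.0'` prefix is visible in the snippet above). A self-contained sketch of that mapping — the helper name `split_to_version` is made up for illustration; the actual script assembles the string inline:

```python
import argparse

def split_to_version(split: str) -> str:
    # Hypothetical helper: map a --split value to a nuScenes version string.
    mapping = {'trainval': 'v1.0-trainval',
               'test': 'v1.0-test',
               'mini': 'v1.0-mini'}
    if split not in mapping:
        raise NotImplementedError(f'{split} not supported')
    return mapping[split]

parser = argparse.ArgumentParser(description='Data converter arg parser')
parser.add_argument('--split', default='trainval', help='split of the dataset')
# Parse an explicit argv list so the sketch runs anywhere.
args = parser.parse_args(['--split', 'mini'])
print(split_to_version(args.split))
```

Centralizing the mapping in one place keeps the assertion on valid splits and the version string from drifting apart when a new split is added.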
When using the command `python tools/create_data_bevdet.py --split mini`, I get the following error:
```
Traceback (most recent call last):
  File "tools/create_data_bevdet.py", line 182, in <module>
    add_ann_adj_info(extra_tag,
  File "tools/create_data_bevdet.py", line 147, in add_ann_adj_info
    open('./data/nuscenes/%s_infos_%s.pkl' % (extra_tag, set), 'rb'))
FileNotFoundError: [Errno 2] No such file or directory: './data/nuscenes/bevdetv2-nuscenes_infos_mini.pkl'
```
Only 2 pkl files are created.
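One plausible explanation (an assumption, going by the two files that do appear) is that the prep step writes `train`/`val` infos even for the mini split, so a `%s_infos_mini.pkl` file never exists. A self-contained sketch of a sanity check that lists what was actually produced before loading — the file names here are made up to mirror the traceback:

```python
import glob
import os
import tempfile

# Simulate the situation in the traceback: the prep step wrote train/val
# infos files, but no *_infos_mini.pkl (names are assumptions).
data_root = tempfile.mkdtemp()
for name in ('bevdetv2-nuscenes_infos_train.pkl',
             'bevdetv2-nuscenes_infos_val.pkl'):
    open(os.path.join(data_root, name), 'wb').close()

# List every infos pkl that actually exists.
found = sorted(os.path.basename(p)
               for p in glob.glob(os.path.join(data_root, '*_infos_*.pkl')))
print(found)

# Guard before loading: report a readable message instead of hitting a
# bare FileNotFoundError deep inside add_ann_adj_info().
wanted = os.path.join(data_root, 'bevdetv2-nuscenes_infos_mini.pkl')
if not os.path.exists(wanted):
    print('missing:', os.path.basename(wanted))
```

If the mini prep really does emit only train/val infos, the converter's `mini` branch would need to load those file names rather than a `_mini` one — worth confirming against the prep script's output.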
If I use `python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes --version v1.0-mini`, I get the following output:
```
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 81/81, 3.7 task/s, elapsed: 22s, ETA: 0s
Create GT Database of NuScenesDataset
[                ] 0/323, elapsed: 0s, ETA:Traceback (most recent call last):
  File "tools/create_data.py", line 267, in <module>
    nuscenes_data_prep(
  File "tools/create_data.py", line 89, in nuscenes_data_prep
    create_groundtruth_database(dataset_name, root_path, info_prefix,
  File "c:\users\user\anaconda3\seabird\hop\tools\data_converter\create_gt_database.py", line 240, in create_groundtruth_database
    example = dataset.pipeline(input_dict)
  File "c:\users\user\anaconda3\seabird\hop\mmdet3d\datasets\pipelines\compose.py", line 49, in __call__
    data = t(data)
  File "c:\users\user\anaconda3\seabird\hop\mmdet3d\datasets\pipelines\loading.py", line 682, in __call__
    results = self._load_bboxes_3d(results)
  File "c:\users\user\anaconda3\seabird\hop\mmdet3d\datasets\pipelines\loading.py", line 577, in _load_bboxes_3d
    results['gt_bboxes_3d'] = results['ann_info']['gt_bboxes_3d']
KeyError: 'ann_info'
```
Edit: One more thing,
For the tutorial you referenced, I found the code:
```python
import mmcv
import numpy as np
from mmengine import load
from mmdet3d.visualization import Det3DLocalVisualizer
from mmdet3d.structures import CameraInstance3DBoxes

info_file = load('demo/data/kitti/000008.pkl')
cam2img = np.array(info_file['data_list'][0]['images']['CAM2']['cam2img'],
                   dtype=np.float32)
bboxes_3d = []
for instance in info_file['data_list'][0]['instances']:
    bboxes_3d.append(instance['bbox_3d'])
gt_bboxes_3d = np.array(bboxes_3d, dtype=np.float32)
gt_bboxes_3d = CameraInstance3DBoxes(gt_bboxes_3d)
input_meta = {'cam2img': cam2img}

visualizer = Det3DLocalVisualizer()
img = mmcv.imread('demo/data/kitti/000008.png')
img = mmcv.imconvert(img, 'bgr', 'rgb')
visualizer.set_image(img)
# project 3D bboxes to image
visualizer.draw_proj_bboxes_3d(gt_bboxes_3d, input_meta)
visualizer.show()
```
The pkl file it loads seems to be part of the KITTI database. How would the image and pkl loading work for nuScenes?
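Whatever the nuScenes info layout turns out to be, the `cam2img` step in the KITTI snippet above is the same idea for any camera: multiply a point in camera coordinates by the 3x3 intrinsic matrix and divide by depth. A minimal sketch in isolation — the intrinsic values here are made up for illustration, not real nuScenes calibration:

```python
import numpy as np

# Made-up 3x3 camera intrinsics (focal lengths on the diagonal,
# principal point in the last column).
cam2img = np.array([[1266.0,    0.0, 800.0],
                    [   0.0, 1266.0, 450.0],
                    [   0.0,    0.0,   1.0]], dtype=np.float32)

# A 3D point in camera coordinates: x right, y down, z forward (metres).
pt_cam = np.array([2.0, 1.0, 10.0])

# Project onto the image plane and apply the perspective divide.
uvw = cam2img @ pt_cam
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
print(round(float(u), 1), round(float(v), 1))
```

For nuScenes the per-camera intrinsics would come out of the infos pkl (or the nuScenes devkit's calibrated sensor records) instead of the KITTI `CAM2` entry, and the same projection feeds `draw_proj_bboxes_3d` via `input_meta`.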
Hi,
I wanted to write this in a discussion, but I couldn't find the section for it.
Is it possible to test HoP without using the full (very large) dataset from nuscenes or training it? I would just like to give it data from a smaller dataset such as the mini subset (or my own recordings etc) and see the 3D bounding boxes.
Kind regards