lyft / nuscenes-devkit

Devkit for the public 2019 Lyft Level 5 AV Dataset (fork of https://github.com/nutonomy/nuscenes-devkit)

Are "BETA_PLUS_PLUS LiDARS" and "BETA_PLUS_PLUS Cameras" sensor used in the dataset? #79

Open Ckerrr opened 4 years ago

Ckerrr commented 4 years ago

It's mentioned that two kinds of LiDAR and camera sensors were used to collect the dataset (https://level5.lyft.com/dataset/).

However, there seems to be no way to tell the two apart in the released data.

  1. Are they actually included in the dataset?
  2. If yes for Q1, how can one identify the sensor source of a specific frame? (See the sketch below.)

Thanks!
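
For concreteness, here is a minimal sketch (assuming the standard lyft_dataset_sdk layout; paths are placeholders for a local copy of the dataset) of how one can list the sensor channels recorded in the metadata, to check whether the two configurations appear as distinct entries:

```python
# Minimal sketch: data_path/json_path are placeholders for a local copy
# of the dataset and its metadata JSON folder.
from lyft_dataset_sdk.lyftdataset import LyftDataset

level5data = LyftDataset(data_path=".", json_path="train_data", verbose=False)

# List every sensor channel in the metadata, to see whether the two
# hardware configurations show up as distinct entries.
for sensor in level5data.sensor:
    print(sensor["channel"], sensor["modality"])
```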

gledsonmelotti commented 4 years ago

BETA_V0 LiDARs:

  * One 40-beam roof LiDAR and two 40-beam bumper LiDARs.
  * Each LiDAR has an azimuth resolution of 0.2 degrees.
  * All three LiDARs jointly produce ~216,000 points at 10 Hz.
  * The firing directions of all three LiDARs are synchronized to be the same at any given time.

BETA_V0 Cameras:

  * Six wide-field-of-view (WFOV) cameras uniformly cover a 360-degree field of view (FOV). Each camera has a resolution of 1224x1024 and a FOV of 70°x60°.
  * One long-focal-length camera points slightly upward, primarily for detecting traffic lights. It has a resolution of 2048x864 and a FOV of 35°x15°.
  * Every camera is synchronized with the LiDAR such that the LiDAR beam is at the center of the camera's field of view when the camera captures an image.

BETA_PLUS_PLUS LiDARs:

  * The only LiDAR difference between Beta-V0 and Beta++ is the roof LiDAR, which is 64-beam for Beta++.
  * The synchronization of the LiDARs is the same as in Beta-V0.

BETA_PLUS_PLUS Cameras:

  * Six wide-field-of-view (WFOV) high-dynamic-range cameras uniformly cover a 360-degree field of view (FOV). Each camera has a resolution of 1920x1080 and a FOV of 82°x52°.
  * One long-focal-length camera points slightly upward, primarily for detecting traffic lights. It has a resolution of 1920x1080 and a FOV of 27°x17°.
  * Every camera is synchronized with the LiDAR such that the LiDAR beam is at the center of the camera's field of view when the camera captures an image.
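
If both configurations are present, one unofficial heuristic (an assumption on my part, not something Lyft has confirmed) would be to use the camera resolutions listed above, since no image size is shared between Beta-V0 (1224x1024 WFOV, 2048x864 long-focal) and Beta++ (1920x1080 for both camera types):

```python
# Hypothetical helper (not part of the devkit): classify a camera frame by
# the image size stored in its sample_data record. Assumes width/height are
# populated for camera frames, as in the nuScenes schema this fork follows.
def guess_config(lyftd, sample_data_token):
    sd = lyftd.get("sample_data", sample_data_token)
    if sd["sensor_modality"] != "camera":
        return "unknown"  # the resolution heuristic only applies to cameras
    size = (sd["width"], sd["height"])
    if size in ((1224, 1024), (2048, 864)):  # Beta-V0 WFOV / long-focal
        return "BETA_V0"
    if size == (1920, 1080):  # both Beta++ camera types
        return "BETA_PLUS_PLUS"
    return "unknown"
```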

patrickeala commented 3 months ago

I assume that both sensor configurations (BETA++ and BETA_V0) were used in the dataset. I'm interested in the roof LiDAR, which was changed from 40-beam to 64-beam. I have two questions:

  1. What is the data split between Beta++ and Beta-V0? (A rough way to estimate this is sketched after this list.)
  2. For the 64-beam roof LiDAR, what LiDAR model are they using? (Pandar64, Velodyne, etc.)
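
Lacking an official answer, a rough way to estimate the split, reusing the hypothetical guess_config helper sketched above (and inheriting all of its assumptions), is to classify one camera frame per scene:

```python
# Rough estimate of the Beta-V0 / Beta++ scene split; level5data is the
# LyftDataset instance from the earlier sketch, guess_config the helper above.
from collections import Counter

counts = Counter()
for scene in level5data.scene:
    first_sample = level5data.get("sample", scene["first_sample_token"])
    # Classify the scene by any one camera frame from its first sample.
    for channel, sd_token in first_sample["data"].items():
        if channel.startswith("CAM"):
            counts[guess_config(level5data, sd_token)] += 1
            break

print(counts)
```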
gledsonmelotti commented 3 months ago

> I assume that both sensor configurations (BETA++ and BETA_V0) were used in the dataset. I'm interested in the roof LiDAR, which was changed from 40-beam to 64-beam. I have two questions:
>
>   1. What is the data split between Beta++ and Beta-V0?
>   2. For the 64-beam roof LiDAR, what LiDAR model are they using? (Pandar64, Velodyne, etc.)

I'm sorry. I don't know.