I am trying to get NeuRAD training on a custom self-driving dataset. My setup consists of surround cameras and a VLS-128 LiDAR. I have a few questions regarding the missing-points calculation params. I am looking at the `zod_dataparser` as a reference, since I have the same LiDAR.

Questions:
Regarding `VELODYNE_128_ELEVATION_MAPPING`, it's taken straight from the VLS-128 docs. As a side-note: if you do not care about rendering lidar point clouds, you can typically achieve good image quality without missing points. For some datasets, adding missing points can even reduce image quality (we suspect poor calibration to be the root cause). nuScenes is one of those, if I remember correctly.
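If it helps intuition, the missing-points computation essentially boils down to finding the (channel, azimuth) bins with no return and casting rays there using the elevation table. A minimal sketch, not the actual dataparser code: it assumes a uniform azimuth grid, and the `linspace` below is just a runnable stand-in for the real (non-uniform, unsorted) 128-entry table:

```python
import numpy as np

# Dummy stand-in for VELODYNE_128_ELEVATION_MAPPING: the real table lists the
# per-channel elevation angles (degrees) from the VLS-128 manual and is NOT
# a simple linspace -- this just keeps the sketch runnable.
elevation_deg = np.linspace(-25.0, 15.0, 128)

def missing_ray_dirs(channel_ids, azimuth_deg, azimuth_res_deg=0.2):
    """Unit directions for (channel, azimuth) bins that got no lidar return.

    channel_ids: (N,) int array, laser id of each observed point.
    azimuth_deg: (N,) float array, azimuth of each observed point in degrees.
    """
    n_az = int(round(360.0 / azimuth_res_deg))
    observed = np.zeros((len(elevation_deg), n_az), dtype=bool)
    az_bins = np.round(azimuth_deg / azimuth_res_deg).astype(int) % n_az
    observed[channel_ids, az_bins] = True

    # Every expected firing without a return becomes a "missing point" ray.
    ch, az = np.nonzero(~observed)
    elev = np.deg2rad(elevation_deg[ch])
    azim = np.deg2rad(az * azimuth_res_deg)
    return np.stack(
        [np.cos(elev) * np.cos(azim), np.cos(elev) * np.sin(azim), np.sin(elev)],
        axis=-1,
    )
```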
I haven't found anything useful regarding ring numbers in the official docs, but here's a forum answer with some information on it: https://answers.ros.org/question/59743/organizing-point-cloud-from-hdl-32e/. As far as I can see, the ID doesn't simply go from 0 to 127 as elevation increases; it follows a different sequence, which seems to be what your elevation map is also showing, so I think it's the same thing.
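So to organize a VLS-128 cloud by elevation, you'd map ring ids through the table instead of using them directly. A minimal sketch, with a dummy permuted table standing in for the real per-channel angles:

```python
import numpy as np

# Dummy per-channel elevation table indexed by ring/laser id. A permuted
# linspace mimics the fact that ring id is NOT sorted by elevation; the real
# values come from the VLS-128 manual.
laser_elevation_deg = np.random.default_rng(0).permutation(
    np.linspace(-25.0, 15.0, 128)
)

# ring id -> row index in a range image ordered by elevation (0 = lowest beam).
ring_to_row = np.argsort(np.argsort(laser_elevation_deg))

ring_ids = np.array([5, 77, 12])  # hypothetical ring ids of three points
rows = ring_to_row[ring_ids]      # where those points land in elevation order
```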
Regarding your side-note: I'm not currently focused on rendering point clouds, but I might be in the future. I'll probably just try with and without missing points and see what gives the best performance.
Also, which config parameters were changed for the training runs across datasets? It seems like the NeuRAD model/training config was the same for all datasets and only the dataparser itself was changed for each dataset. Is this correct? I am using NeuRAD as a baseline for a benchmark I'm constructing, so I want to make sure I use the same methodology as you did.
Yeah, I'd assume we follow the same convention then (as ours is based on Velodyne's user manual).
We used the same config for simplicity, but did not spend any time optimizing it to squeeze out maximum average performance. We mainly ran our experiments on PandaSet, and just used those parameters for the other datasets as well. However, for better performance we have larger models with longer training cycles, so you could try running `neurader`, `neuradest`, or `neurad-2x-paper`; see the method configs for their differences. This gives quite a boost for datasets with longer sequences (PandaSet has 8s of data, while nuScenes and ZOD have 20s), or for data with 360° camera coverage. You can see the neurad vs neurad-2x performance in our arXiv paper.
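Launching one of the larger configs should just be a matter of swapping the method name, e.g. something like `python nerfstudio/scripts/train.py neurader zod-data --data <path-to-your-dataset>` (assuming you use the repo's standard entrypoint; the `zod-data` dataparser name and data path here are illustrative, so swap in whatever you use for your custom dataset).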
Yeah, I ended up doing a training run without missing points and it works well. I also just finished running `neurader` and `neuradest` and got a pretty substantial performance boost. Everything seems to be in order, so I think I'll close the issue. Thanks!