JOP-Lee / READ

AAAI 2023. Implementation of "READ: Large-Scale Neural Scene Rendering for Autonomous Driving"; the experimental results are significantly better than those of NeRF-based methods.
https://github.com/JOP-Lee/READ-Large-Scale-Neural-Scene-Rendering-for-Autonomous-Driving
GNU General Public License v2.0

The training mode of "READ" #50

Open booker-max opened 1 year ago

booker-max commented 1 year ago

You used two datasets: KITTI and Brno Urban. The KITTI dataset contains three scenes (KITTI Residential, KITTI Road, and KITTI City), and the Brno Urban dataset also contains three scenes (Left side view, Left front side view, and Right side view). I want to ask about your training mode on the two datasets:

  1. On KITTI, did you train a single model on the data from all three scenes, or did you train one model per scene and end up with three models? I'm confused.
  2. Likewise for the Brno Urban dataset: did you train one model or three models?
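To make the two alternatives concrete, here is a minimal, purely illustrative sketch of the distinction being asked about. The `train` function and scene names are placeholders, not READ's actual training code or API; the sketch only shows "one joint model over all scenes" versus "one model per scene".

```python
# Hypothetical sketch of the two training modes in question.
# `train` is a placeholder, NOT READ's real training routine.

scenes = ["KITTI_Residential", "KITTI_Road", "KITTI_City"]

def train(scene_list):
    # Stand-in for training: just record which scenes this model saw.
    return {"trained_on": list(scene_list)}

# Mode A: a single model trained jointly on all three scenes.
joint_model = train(scenes)

# Mode B: one model per scene, yielding three separate models.
per_scene_models = {scene: train([scene]) for scene in scenes}

print(len(per_scene_models))  # 3 models in Mode B, 1 in Mode A
```

The question is which of these two modes the authors used for the reported KITTI and Brno Urban results.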