JOP-Lee / READ

AAAI 2023, implementation of "READ: Large-Scale Neural Scene Rendering for Autonomous Driving"; the experimental results are significantly better than those of NeRF-based methods
https://github.com/JOP-Lee/READ-Large-Scale-Neural-Scene-Rendering-for-Autonomous-Driving
GNU General Public License v2.0

Procedure for custom dataset training #22

Closed: vinodrajendran001 closed this issue 1 year ago

vinodrajendran001 commented 1 year ago

Hi,

I would like to train from scratch using my own dataset.

Can you please provide details on what inputs are required and how to start the training?

Thanks.

JOP-Lee commented 1 year ago

It's very simple: you just need a sequence of pictures.

  1. Use Metashape to obtain `camera.xml` and `pointcloud.ply`.
  2. Place those files and the photos in the `Data` folder, for example `Data/image/xx.png`, `Data/camera.xml`, `Data/pointcloud.ply` (a quick layout check is sketched after this list).
  3. Update the folder paths in `configs/paths_example.yaml` and, if you have pretrained checkpoints, set the `net_ckpt`/`texture_ckpt` model paths in `configs/train_example.yaml`.
  4. Run: `python train.py --config configs/train_example.yaml --pipeline READ.pipelines.ogl.TexturePipeline --crop_size 256x256`
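A minimal sketch of a layout check for steps 1-2, assuming the `Data/image`, `Data/camera.xml`, `Data/pointcloud.ply` structure described above. The script itself is a hypothetical helper, not part of the READ repo:

```python
# check_data_layout.py -- hypothetical helper, not part of the READ repo.
# Verifies that a dataset folder matches the layout described in steps 1-2.
import sys
from pathlib import Path

def check_layout(data_dir="Data"):
    root = Path(data_dir)
    problems = []
    if not (root / "camera.xml").is_file():
        problems.append("missing Data/camera.xml (exported from Metashape)")
    if not (root / "pointcloud.ply").is_file():
        problems.append("missing Data/pointcloud.ply (exported from Metashape)")
    images = sorted((root / "image").glob("*.png"))
    if not images:
        problems.append("no .png files found under Data/image/")
    else:
        print(f"found {len(images)} images, e.g. {images[0].name}")
    return problems

if __name__ == "__main__":
    issues = check_layout(sys.argv[1] if len(sys.argv) > 1 else "Data")
    for msg in issues:
        print("ERROR:", msg)
    sys.exit(1 if issues else 0)
```

If the check passes, launch training with the command from step 4.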
vinodrajendran001 commented 1 year ago

Thanks @JOP-Lee for the info. Do you have any small dataset (maybe a couple of images) with the corresponding camera.xml and pointcloud.ply? If so, could you please share it? I would like to visualize those files and generate large-scale data for my use case without using Metashape.

JOP-Lee commented 1 year ago

jiuzhai.zip

@vinodrajendran001
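For inspecting the shared files, a minimal sketch using Open3D, assuming Metashape's usual `camera.xml` schema (`<chunk><cameras><camera><transform>` holding a row-major 4x4 matrix); READ's own loader may parse it differently:

```python
# inspect_dataset.py -- illustrative sketch, not part of the READ repo.
# Loads pointcloud.ply with Open3D and parses camera poses from camera.xml,
# assuming Metashape's usual XML layout.
import xml.etree.ElementTree as ET
import numpy as np
import open3d as o3d

def load_metashape_poses(xml_path):
    """Return {label: 4x4 matrix} parsed from a Metashape camera.xml."""
    poses = {}
    root = ET.parse(xml_path).getroot()
    for cam in root.iter("camera"):
        transform = cam.find("transform")
        if transform is None:  # unaligned cameras have no transform element
            continue
        mat = np.array(transform.text.split(), dtype=float).reshape(4, 4)
        poses[cam.get("label", cam.get("id"))] = mat
    return poses

if __name__ == "__main__":
    pcd = o3d.io.read_point_cloud("Data/pointcloud.ply")
    print(pcd)  # prints point count
    poses = load_metashape_poses("Data/camera.xml")
    print(f"parsed {len(poses)} camera poses")
    o3d.visualization.draw_geometries([pcd])
```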

h8c2 commented 1 year ago

Hi, I would like to ask about the coordinate definition of the view matrix, since I don't want to use Metashape to get the poses. Could you give me some suggestions? @JOP-Lee @vinodrajendran001
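For context on what this question is asking: a 4x4 view matrix can be stored either world-to-camera or camera-to-world, and the camera axis convention also varies (e.g. OpenCV's x-right/y-down/z-forward vs OpenGL's x-right/y-up/z-backward). A sketch of the two forms and the axis flip follows; which convention READ actually expects is not confirmed in this thread, so treat the choices below as assumptions:

```python
# Common view-matrix conventions; which one READ expects is NOT confirmed
# in this thread -- the conventions below are assumptions for illustration.
import numpy as np

def world_to_camera(R, t):
    """Build a 4x4 world-to-camera (view) matrix from rotation R and translation t."""
    V = np.eye(4)
    V[:3, :3] = R
    V[:3, 3] = t
    return V

def camera_to_world(V):
    """Invert a rigid view matrix to get the camera-to-world pose
    (Metashape's <transform> is typically camera-to-world)."""
    R, t = V[:3, :3], V[:3, 3]
    P = np.eye(4)
    P[:3, :3] = R.T
    P[:3, 3] = -R.T @ t
    return P

# Flip between OpenCV (y down, z forward) and OpenGL (y up, z backward) axes.
CV_TO_GL = np.diag([1.0, -1.0, -1.0, 1.0])

if __name__ == "__main__":
    R, t = np.eye(3), np.array([0.0, 0.0, 1.0])
    V = world_to_camera(R, t)              # view matrix, world -> camera
    P = camera_to_world(V)                 # pose matrix, camera -> world
    P_gl = P @ CV_TO_GL                    # same pose with OpenGL camera axes
    print(np.allclose(V @ P, np.eye(4)))   # True: V and P are inverses
```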