TRI-ML / packnet-sfm

TRI-ML Monocular Depth Estimation Repository
https://tri-ml.github.io/packnet-sfm/
MIT License

Please provide Instructions for training on custom dataset #173

Open porwalnaman01 opened 3 years ago

porwalnaman01 commented 3 years ago

Hello there ~ I am trying to train your proposed model on my own dataset, which is just a folder of images. It reads the validation images fine, but it is not able to read any training images. Could you please provide training instructions for a dataset containing only images? Thanks in advance!

My config file for a dataset consisting of a folder with just images:

```yaml
model:
    name: 'SelfSupModel'
    optimizer:
        name: 'Adam'
        depth:
            lr: 0.0002
        pose:
            lr: 0.0002
    scheduler:
        name: 'StepLR'
        step_size: 30
        gamma: 0.5
    depth_net:
        name: 'DepthResNet'
        version: '50pt'
    pose_net:
        name: 'PoseResNet'
        version: '50pt'
    params:
        crop: 'garg'
        min_depth: 0.0
datasets:
    augmentation:
        image_shape: (192, 640)
    train:
        batch_size: 4
        dataset: ['Image']
        path: ['/disk1/dan/datasets/vgg-faces/train']
        split: ['train_split.txt']
        repeat: [2]
    validation:
        dataset: ['Image']
        path: ['/disk1/dan/datasets/vgg-faces/val']
        split: ['val_split.txt']
checkpoint:
    filepath: '/disk1/dan/Naman/packnet-sfm-0.1.2/experiments1'
    monitor: 'abs_rel_pp_gt'
    monitor_index: 0
    mode: 'min'
```
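In case the `'Image'` dataset does expect a plain-text split file listing one image filename per line (an assumption on my part, not confirmed by the repo), a minimal sketch for generating `train_split.txt` from a folder of images might look like this (the helper name and extensions are hypothetical):

```python
import os

def write_split_file(image_dir, split_path, exts=(".png", ".jpg")):
    """Write the image filenames found in `image_dir` to a split file,
    one filename per line, sorted for deterministic ordering.
    NOTE: assumes the dataset reader expects bare filenames; adjust if
    it expects paths relative to some other root."""
    names = sorted(f for f in os.listdir(image_dir)
                   if f.lower().endswith(exts))
    with open(split_path, "w") as fh:
        fh.write("\n".join(names))
```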

ivasiljevic commented 2 years ago

Hi @porwalnaman01, you can try a config similar to the one for the omnicam dataset (train_omnicam.yaml): rename the images with leading zeros and edit the config accordingly (e.g. for omnicam this was split: ['{:09}']).
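The renaming step above could be sketched as follows, assuming images should become zero-padded sequential names (e.g. 000000000.png) to match a '{:09}'-style split pattern; the helper name, extension, and padding width are assumptions, not part of the repo:

```python
import os

def rename_with_leading_zeros(folder, ext=".png", width=9):
    """Rename every image with the given extension in `folder` to a
    zero-padded sequential index (sorted order), e.g. 000000000.png.
    WARNING: renames files in place; back up the folder first."""
    files = sorted(f for f in os.listdir(folder) if f.endswith(ext))
    for idx, name in enumerate(files):
        new_name = f"{idx:0{width}d}{ext}"
        os.rename(os.path.join(folder, name),
                  os.path.join(folder, new_name))
```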