TRI-ML / packnet-sfm

TRI-ML Monocular Depth Estimation Repository
https://tri-ml.github.io/packnet-sfm/
MIT License

What does context mean? #171

Closed vogoriachko closed 3 years ago

vogoriachko commented 3 years ago

I am trying to fine-tune the model on my own dataset and am using ImageDataset as the dataset class. Could you please explain what the split file referenced in the .yaml config is? For example, I downloaded KITTI_tiny, and in its split file each image_02 entry is paired with the corresponding image_03 entry. Those look like real stereo pairs, since the two images are not identical. So the question is: what is the correct way to make a split file? With a monocular camera, is it correct to build the same kind of file as in KITTI_tiny, pairing consecutive frames like this:

image_1 image_2
image_2 image_3
...
image_n image_{n+1}
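
To make the question concrete, here is a minimal sketch of how such a pairing file could be generated for a monocular sequence. It assumes the split file is simply a text file with one pair of frame filenames per line, mirroring the two-column layout of the KITTI_tiny split described above; the exact format expected by ImageDataset may differ, and the directory and filenames are hypothetical.

# Sketch: build a two-column split file pairing each frame with its successor.
# Assumes frames are plain image files in a single folder and that the split
# format is "frame_t frame_t+1" per line, as in the KITTI_tiny example above.
import os

def write_pairwise_split(image_dir, split_path, extensions=('.png', '.jpg')):
    # Sort filenames so consecutive lines correspond to consecutive frames.
    frames = sorted(f for f in os.listdir(image_dir)
                    if f.lower().endswith(extensions))
    with open(split_path, 'w') as split_file:
        for current_frame, next_frame in zip(frames, frames[1:]):
            split_file.write('{} {}\n'.format(current_frame, next_frame))

# Hypothetical usage:
# write_pairwise_split('/path/to/my/sequence', 'train_split.txt')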

porwalnaman01 commented 3 years ago

Hello there, did you solve this issue? I am also using ImageDataset, but when I run the script it reads the validation files and does not read the training files. My dataset is a folder containing .jpg images. Thanks in advance!

Config file that I am using:

model:
    name: 'SelfSupModel'
    optimizer:
        name: 'Adam'
        depth:
            lr: 0.0002
        pose:
            lr: 0.0002
    scheduler:
        name: 'StepLR'
        step_size: 30
        gamma: 0.5
    depth_net:
        name: 'DepthResNet'
        version: '50pt'
    pose_net:
        name: 'PoseNet'
        version: ''
    params:
        crop: 'garg'
        min_depth: 0.0
datasets:
    augmentation:
        image_shape: (192, 640)
    train:
        batch_size: 4
        dataset: ['Image']
        path: ['/disk1/dan/datasets/vgg-faces/train']
        split: ['train_split.txt']

    validation:
        dataset: ['Image']
        path: ['/disk1/dan/datasets/vgg-faces/val']
        split: ['val_split.txt']

checkpoint:
    filepath: '/disk1/dan/Naman/packnet-sfm-0.1.2/experiments1'
    monitor: 'abs_rel_pp_gt'
    monitor_index: 0
    mode: 'min'
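
As a quick sanity check (not part of the repository, just a hypothetical helper), one could verify that the split files named in the config exist and that every entry they list resolves to an actual image under the corresponding dataset path. The paths below are copied from the config above; the assumption that the split file sits inside the dataset path and lists one relative filename per line may not match how ImageDataset actually resolves entries.

# Hypothetical sanity check: confirm each split file exists and that every
# entry it lists resolves to a real image under its dataset path.
# Assumes the split file contains one image filename per line, relative to
# the dataset path; adjust if ImageDataset expects a different layout.
import os

splits = {
    '/disk1/dan/datasets/vgg-faces/train': 'train_split.txt',
    '/disk1/dan/datasets/vgg-faces/val': 'val_split.txt',
}

for dataset_path, split_name in splits.items():
    split_path = os.path.join(dataset_path, split_name)
    if not os.path.isfile(split_path):
        print('Missing split file:', split_path)
        continue
    with open(split_path) as split_file:
        entries = [line.strip() for line in split_file if line.strip()]
    missing = [e for e in entries
               if not os.path.isfile(os.path.join(dataset_path, e))]
    print('{}: {} entries, {} missing files'.format(
        split_name, len(entries), len(missing)))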