chenjx1005 opened 2 years ago
I have created the following config, but I haven't validated it yet. @VitorGuizilini-TRI @RaresAmbrus, is this config correct?
```yaml
wrapper:
    recipe: wrapper|default
    max_epochs: 1
arch:
    model:
        file: depth/DepthFormerModel
        warp_context: [-1,1]
        match_context: [-1]
        motion_masking: True
        matching_augmentation: False
        freeze_teacher_and_pose: 45
    networks:
        transformer:
            recipe: networks/transformer|depthformer
        mono_depth:
            recipe: networks/mono_depth_res_net|default
            depth_range: [0.1,100.0]
        multi_depth:
            recipe: networks/multi_depth_res_net|depthformer
        pose:
            recipe: networks/pose_net|default
    losses:
        reprojection:
            recipe: losses/reprojection|default
        smoothness:
            recipe: losses/smoothness|default
        supervision:
            recipe: losses/supervised_depth|l1
evaluation:
    depth:
        recipe: evaluation/depth|kitti_resize
optimizers:
    multi_depth:
        recipe: optimizers|adam_20_05
    mono_depth:
        recipe: optimizers|adam_20_05
    pose:
        recipe: optimizers|adam_20_05
    transformer:
        recipe: optimizers|adam_20_05
datasets:
    train:
        recipe: datasets/kitti_tiny|train_selfsup_mr
        labels: [pose]
        context: [-1,1]
    validation:
        recipe: datasets/kitti_tiny|validation_mr
        labels: [depth,pose]
        context: [-1,1]
save:
    recipe: save|depth_splitname
```
Seems correct, but I'll upload a training config file for DepthFormer soon.
One minor comment: you don't need pose in the training split, since it is also learned.
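Based on that note, the train entry could be trimmed as below. This is only a sketch against the config above, not a verified change; whether an empty list (`labels: []`) or omitting the `labels` key entirely is the right way to express "no labels" in this config system is an assumption.

```yaml
datasets:
    train:
        recipe: datasets/kitti_tiny|train_selfsup_mr
        # Sketch: pose label dropped, since pose is learned by the
        # pose network and not needed as supervision during training.
        labels: []
        context: [-1,1]
```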
The model fails if there is no pose in the batch, as in #15.
That's not an issue with the config file; it's in the model. You'll have to comment out the parts relating to stereo, so it doesn't look for multiple cameras and their relative pose.
Hi @VitorGuizilini-TRI, it would definitely be helpful if the training config file for DepthFormer (on KITTI) were released. I tried @Houssem-25's config but failed to reproduce the results in the paper.
Thanks for your great work!
Thank you for this work. @VitorGuizilini-TRI, could you upload the self-calibration inference config file?
Do you have any updates on the release of the depthformer training config @VitorGuizilini-TRI ? Thanks for sharing this repo!
I am interested in the training configs for DepthFormer and PackNet too.
There are only inference configs for PackNet and DepthFormer. Could you please provide training configs for them, so we can reproduce the results in your paper?
Thank you.