Preprocessing
Added a data.py/process_kitti_seg() function to parse and merge labels from the RGB PNG images of the segmentation maps
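The label-merging step above can be sketched as a color-to-class lookup. This is a minimal illustration, not the actual process_kitti_seg() implementation: the palette entries and the -1 "unknown" convention are placeholder assumptions, since the real color/class mapping lives in vkitti2's label definitions.

```python
import numpy as np

# Hypothetical color -> class-id palette; the two entries below are
# illustrative placeholders, not the real vkitti2 label colors.
KITTI_PALETTE = {
    (210, 0, 200): 0,  # e.g. "terrain"
    (90, 200, 255): 1,  # e.g. "sky"
}

def process_kitti_seg(rgb_seg: np.ndarray, palette=KITTI_PALETTE) -> np.ndarray:
    """Turn an HxWx3 RGB segmentation map into an HxW array of class ids."""
    labels = np.full(rgb_seg.shape[:2], -1, dtype=np.int64)  # -1 = unlabeled
    for color, class_id in palette.items():
        # Pixels matching this palette color on all 3 channels get the class id
        mask = np.all(rgb_seg == np.array(color, dtype=rgb_seg.dtype), axis=-1)
        labels[mask] = class_id
    return labels
```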
Depth is encoded as 1 px = 1 cm, so I just normalize the inverse depth in tutils.py/get_normalized_depth_t()
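The inverse-depth normalization can be sketched as follows. This is a NumPy illustration of the idea only; the real get_normalized_depth_t() operates on torch tensors and its exact clipping/scaling choices are assumptions here.

```python
import numpy as np

def get_normalized_depth(depth_png: np.ndarray) -> np.ndarray:
    """Sketch: vkitti2 encodes depth as 1 px = 1 cm, so convert to meters,
    invert, and rescale the inverse depth to [0, 1]."""
    depth_m = depth_png.astype(np.float64) / 100.0  # cm -> m
    inv = 1.0 / np.clip(depth_m, 1e-3, None)        # inverse depth, avoid /0
    inv = (inv - inv.min()) / (inv.max() - inv.min() + 1e-8)
    return inv
```

Near pixels (small depth) end up close to 1 and far pixels close to 0, which is the usual convention for inverse-depth targets.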
The input image is loaded as a regular JPG image
Training
I changed the trainer.py/setup() interface just a little, with self.all_loaders, self.kitty_display_images and self.base_display_images
Added a simple utility in the trainer, switch_data(self, to="kitti") or switch_data(self, to="base"), that changes self.loaders and self.display_images to point to either the kitti data or the base data we are used to
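The switching utility described above can be sketched like this. The attribute layout (a dict in self.all_loaders keyed by dataset name) is an assumption for illustration; only the pointer-swapping logic matters.

```python
class Trainer:
    """Minimal sketch of the switch_data utility, assuming all_loaders is a
    dict keyed by dataset name; not the actual trainer.py implementation."""

    def __init__(self, all_loaders, base_display_images, kitty_display_images):
        self.all_loaders = all_loaders  # e.g. {"base": {...}, "kitti": {...}}
        self.base_display_images = base_display_images
        self.kitty_display_images = kitty_display_images
        self.switch_data(to="base")  # start on the base data

    def switch_data(self, to="kitti"):
        assert to in ("kitti", "base"), f"unknown dataset: {to}"
        # Re-point the attributes the rest of the code reads from
        self.loaders = self.all_loaders[to]
        self.display_images = (
            self.kitty_display_images if to == "kitti" else self.base_display_images
        )
```

Because the rest of the code only ever reads self.loaders and self.display_images, swapping what they point to is enough to retarget training.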
NOTHING changes in the rest of the code :)
Except that we need to catch potential mismatches between tasks and data keys here and there (when pre-training with kitti, tasks are msd but there is no m data in the pre-training phase)
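One way to picture that guard is a small filter that drops tasks whose data key is absent from the batch. This helper is hypothetical, named here only for illustration:

```python
def filter_tasks(tasks, batch):
    """Hypothetical guard: keep only tasks whose data key is in the batch.

    When pre-training on kitti the task string is "msd", but kitti batches
    carry no "m" (mask) data, so "m" gets dropped there.
    """
    return [t for t in tasks if t in batch]
```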
Experiments
I stopped this run early, but it shows that the pre-training phase works
Add pretraining of the seg and depth heads on vkitti 2
Data
The vkitti2 data is in /miniscratch/_groups/ccai/data/vkitti2 (with an about.txt of course!). I put train_kitti.json and val_kitti.json in /miniscratch/_groups/ccai/data/jsons, with Scene02 as validation data, using the rain, sunset, fog and morning variations.
omnigan/test_trainer.py