Currently, the FastDepthDataset used to load data during training is a class derived from DepthDataset in the bayesian-visual-odometry repo. This was done early on because, at the time, there was no quick way to create my own data loader without modifying Guo's repo or writing repetitive code. It has now gotten to the point where it is holding back progress, and it will make it especially hard to integrate real-world data into the training pipeline. The refactor will perform the following:
[ ] Create a base RoshiDataset that will load raw files from either the simulation or the real world
[ ] Create a derived FastdepthDataset that will perform FastDepth-specific preprocessing
With this structure, any separate repo that trains a model on simulation or real-world data only has to create its own class derived from RoshiDataset and implement its preprocessing. This will be particularly useful when we expand to other neural nets, such as segmentation, loop closure, optical flow, etc.