TRI-ML / packnet-sfm

TRI-ML Monocular Depth Estimation Repository
https://tri-ml.github.io/packnet-sfm/
MIT License

Training on NuScenes #53

Closed Ale0311 closed 3 years ago

Ale0311 commented 4 years ago

Hello!

First and foremost I would like to congratulate you on your great work!

I read in your paper that you tested on the nuScenes dataset, and I was wondering whether you have since also done some training on it. This is something I am trying to do right now.

I read about the hierarchy of the files and folders in the dataset.proto file, but I could not find anything about how the .json files are generated.

What did you use to generate them?

I am asking because the nuScenes dataset also ships .json files, but they are structured very differently, and I don't know how to generate .json files with the same structure as yours from the nuScenes ones.

Thank you in advance for your response!

soheilAppear commented 4 years ago

Have you checked the DDAD dataset? Since you asked about the .json files, that information might be available there.

Ale0311 commented 4 years ago

Doesn't the DDAD repository only contain the .json files for the DDAD dataset itself? I need them for the nuScenes dataset.

Anyway, thanks for the suggestion, I'll look there as well.

soheilAppear commented 4 years ago

https://github.com/TRI-ML/DDAD

Check the link above and the information it provides.

iariav commented 4 years ago

I haven't tried it myself yet, but the nuScenes SDK includes a script to convert the nuScenes data to the KITTI dataset format, and then I suppose you'll be able to just use the KITTI dataloader.

iariav commented 4 years ago

@Ale0311 Just an update: using the nuScenes SDK, I successfully exported NuScenes to a format similar to KITTI, and I'm now able to train on NuScenes. It was not very straightforward, but it can be done. After you export the data, the dataset class is very similar to that of KITTI. Good luck.
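For reference, the nuScenes devkit bundles an export script (`nuscenes/scripts/export_kitti.py`). A typical invocation looks roughly like this; the exact flags vary between devkit versions and the paths below are only illustrative, so check `--help` for your install:

```shell
# Hedged sketch: export nuScenes keyframes into a KITTI-style folder
# layout using the converter shipped with the nuscenes-devkit.
# Requires the full nuScenes dataset on disk.
pip install nuscenes-devkit

python -m nuscenes.scripts.export_kitti nuscenes_gt_to_kitti \
    --nusc_kitti_dir ~/nusc_kitti \
    --split val
```

After the export, pointing a KITTI-style dataloader at `~/nusc_kitti` is the part iariav describes as "very similar to that of KITTI".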

pjckoch commented 3 years ago

@iariav Did you train the self-supervised model on NuScenes? Were you able to achieve reasonable results?

iariav commented 3 years ago

@pjckoch Hi, yes I did. I trained the semi-supervised model (with both the supervised and unsupervised losses). Actually, I only pretrained on NuScenes and tested on my own dataset, so I can't really say how the model performed on the NuScenes test set (but training seemed to be OK).

pjckoch commented 3 years ago

Hi @iariav

Thanks for getting back to me so quickly! I was asking because I tried training in self-supervised-only mode and found that the network was unable to learn anything. I think I found what was causing the problem, though: the nuScenes sample frame rate of 2 Hz seems to be unsuitable, at least with the default hyperparameters. Loading intermediate sweeps at the 12 Hz frame rate produces more reasonable results.
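To make the 2 Hz vs. 12 Hz distinction concrete: nuScenes keyframes (samples) arrive at 2 Hz, while every camera frame (sweep) is linked into a 12 Hz chain via the `next` pointers in the sample_data records. A minimal sketch of collecting the intermediate sweeps between two keyframes, using small synthetic records that only mirror the sample_data.json schema (not the real dataset), could look like this:

```python
# Hedged sketch: walk the `next` chain of sample_data records from one
# keyframe up to (and including) the following keyframe, collecting the
# image filenames. `records` is synthetic stand-in data whose fields
# (filename / is_key_frame / next) mirror the nuScenes schema.

def collect_sweeps(records, start_token):
    """Return filenames from `start_token` through the next keyframe."""
    sweeps = []
    token = start_token
    while token:
        rec = records[token]
        sweeps.append(rec["filename"])
        # Stop once we reach the following keyframe (the next 2 Hz anchor).
        if rec["is_key_frame"] and token != start_token:
            break
        token = rec["next"]
    return sweeps

# Synthetic chain: keyframe, two intermediate 12 Hz sweeps, next keyframe.
records = {
    "a": {"filename": "cam_front_0.jpg", "is_key_frame": True,  "next": "b"},
    "b": {"filename": "cam_front_1.jpg", "is_key_frame": False, "next": "c"},
    "c": {"filename": "cam_front_2.jpg", "is_key_frame": False, "next": "d"},
    "d": {"filename": "cam_front_3.jpg", "is_key_frame": True,  "next": ""},
}
print(collect_sweeps(records, "a"))
# ['cam_front_0.jpg', 'cam_front_1.jpg', 'cam_front_2.jpg', 'cam_front_3.jpg']
```

In the real devkit you would resolve these records through the `NuScenes` class rather than raw dicts, but the chain-walking logic is the same.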

ruili3 commented 3 years ago

Hi, may I know how you tested your model on the NuScenes dataset? I can't find the evaluation script or the validation image IDs for NuScenes.

pjckoch commented 3 years ago

Hi @ruili3 ,

they stated in their paper that they evaluated "on the official NuScenes validation dataset of 6019 front-facing images with ground-truth depthmaps generated by LiDAR reprojection."

You can get the samples belonging to the validation set with the NuScenes-sdk. Have a look at this function: https://github.com/nutonomy/nuscenes-devkit/blob/57889ff20678577025326cfc24e57424a829be0a/python-sdk/nuscenes/utils/splits.py#L189

This gives you the scenes belonging to the validation split. Then you can get the samples belonging to those scenes.
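The two steps above (scene names from the split, then samples per scene) can be sketched in plain Python. In the real devkit, `nuscenes.utils.splits.create_splits_scenes()` supplies the validation scene names; here `scenes` and `samples` are small synthetic stand-ins that only mirror the scene.json / sample.json fields used:

```python
# Hedged sketch: map validation scene names to scene tokens, then keep
# every sample whose scene_token falls in the validation split. The
# record fields (token / name / scene_token) mirror the nuScenes schema;
# the data itself is synthetic, not the real dataset.

def val_sample_tokens(scenes, samples, val_scene_names):
    # Scene names in the split -> scene tokens.
    val_tokens = {s["token"] for s in scenes if s["name"] in val_scene_names}
    # Samples belonging to those scenes.
    return [s["token"] for s in samples if s["scene_token"] in val_tokens]

scenes = [
    {"token": "s1", "name": "scene-0003"},
    {"token": "s2", "name": "scene-0004"},
]
samples = [
    {"token": "t1", "scene_token": "s1"},
    {"token": "t2", "scene_token": "s2"},
    {"token": "t3", "scene_token": "s1"},
]
print(val_sample_tokens(scenes, samples, {"scene-0003"}))
# ['t1', 't3']
```

From each selected sample you can then look up the CAM_FRONT sample_data record to get the image and the LiDAR reprojection for ground-truth depth.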

issouker97 commented 3 months ago


Hey, could you please share how you managed to generate .json files from your dataset that match the nuScenes format? Thank you in advance.