Hello,
We split the images from the training set into train/val and used the images from folders 003, 008, and 010 as the test set. The images from folders 011-014 are special: they were recorded by the original authors to test the performance of DNNs in extreme scenarios like left and right turns (see the info.txt file in each of those folders for more information).
As for the accuracy: as mentioned in our paper/slides, accuracy alone is not a very good indicator of how a DNN will perform for navigation purposes in the real world. For example, the ResNet-18 CE model (i.e. ResNet-18 with standard cross-entropy loss) achieved around 92% accuracy, which was the highest among all the models we trained, but it did not do very well on the test trail - its autonomy score was only around 88% (see slide 21 of our GTC talk). Unfortunately, we could not find a good, reproducible, and generic way to measure the autonomy score; it is all pretty subjective.
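For reference, here is a minimal sketch of how the plain classification accuracy quoted above could be computed over a held-out folder. This is a generic PyTorch-style evaluation loop, not the repo's actual scripts; the `model`, `loader`, and three-way orientation labels are assumptions for illustration:

```python
import torch

@torch.no_grad()
def accuracy(model, loader, device="cuda"):
    # Fraction of frames where the argmax over the orientation head's
    # three classes (left / straight / right) matches the ground truth.
    model.eval()
    correct = total = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total
```

A metric like this is what the 92% figure refers to; the autonomy score, by contrast, was measured on the physical trail and has no equivalent one-liner.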
Thanks for answering my question! So you mean you used the images from 003, 008, and 010 as the test set to measure accuracy. However, in the wiki you set 003, 008, and 010 as the validation set, so I am confused: are 003, 008, and 010 the test set or the validation set? If they are the validation set, which dataset is the test set? By the way, I also want to train TrailNet from scratch, but it is difficult for me to create a dataset, so could you provide the lateral offset dataset? Thank you for your time!
We started with 003, 008, and 010 as our test set and an 85%/15% train/val split of our training set. In an ideal world, you train and fine-tune your model (hyperparameter search etc.) on the train/val split only and then run the "final" model on the test set exactly once. In the real world, you keep changing and improving the models, so your test set eventually becomes a validation set - that is why we decided to call it validation in our scripts. For every model we did hyperparameter tuning only on the train/val split and tested on the test set once, but we had a lot of models.
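If it helps, a minimal sketch of producing a similar 85%/15% train/val split is below. The folder layout (one subfolder per orientation class under a training root, with 003, 008, and 010 held out entirely) and the list-file format are assumptions, not the repo's actual tooling:

```python
import random
from pathlib import Path

# Hypothetical layout: each orientation class (left/straight/right) is a
# subfolder of the training root; folders 003, 008 and 010 are excluded.
DATA_ROOT = Path("trail_dataset/train")
VAL_FRACTION = 0.15
random.seed(42)  # fixed seed so the split is reproducible

train_list, val_list = [], []
for class_dir in sorted(p for p in DATA_ROOT.iterdir() if p.is_dir()):
    images = sorted(class_dir.glob("*.jpg"))
    random.shuffle(images)
    n_val = int(len(images) * VAL_FRACTION)
    val_list += [(img, class_dir.name) for img in images[:n_val]]
    train_list += [(img, class_dir.name) for img in images[n_val:]]

# Write one "path label" line per image, the usual format for list files.
for name, items in (("train", train_list), ("val", val_list)):
    with open(f"{name}.txt", "w") as f:
        for img, label in items:
            f.write(f"{img} {label}\n")
```

Shuffling per class keeps the class balance roughly equal in both splits.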
As for the lateral translation dataset: we have not released the data, mostly due to our internal review process, which is more involved for data than for code. However, we released complete instructions and scripts which should allow you to collect your own dataset and train the translation head of the model. There is nothing special about our dataset - you can collect similar data in your nearest park or forest.
You can collect data with a rig similar to the one described here: https://github.com/NVIDIA-Jetson/redtail/wiki/Datasets, which uses GoPro 4/5 cameras. Please note: for better results, and to be less camera-dependent at runtime, you need to calibrate your rig cameras and undistort the collected footage before training (see the link). This way your robot's camera does not need to be the same model as the camera used for dataset collection.
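For anyone unfamiliar with the calibrate-then-undistort step, here is a minimal OpenCV sketch of it. The paths and checkerboard size are placeholders, and this uses the standard pinhole model for brevity; GoPro lenses are strongly fisheye, so the cv2.fisheye module may be a better fit in practice:

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)  # inner corners of the calibration checkerboard (assumed)

# 3D reference points of the checkerboard corners in its own plane.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_frames/*.jpg"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate intrinsics and distortion coefficients once per camera...
_, mtx, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# ...then undistort every collected frame before it goes into training.
for path in glob.glob("footage/*.jpg"):
    img = cv2.imread(path)
    out = cv2.undistort(img, mtx, dist)
    cv2.imwrite(path.replace("footage", "undistorted"), out)
```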
Hello! This paper has helped my research a lot - thanks for it! I have some questions about the process of training and testing TrailNet. I followed the wiki and trained the orientation head; the accuracy on the validation dataset is about 86%. I used folder 011 of the Forest Trail dataset as the test dataset, and the accuracy is only 77%. That seems too low compared to the paper, and I don't know why. Can you tell me which dataset you chose as your test dataset? Could you give me any suggestions to improve the accuracy? That would help me a lot. Thank you so much!