zhaoweicai / mscnn

Caffe implementation of our multi-scale object detection framework

Evaluation question #20

Closed hk003 closed 8 years ago

hk003 commented 8 years ago

Judging from the author's name, the author seems to be Chinese. Their level is so high that I'm too embarrassed to bother them, so I hope any fellow Chinese readers who see this can help me: the paper mentions that the 7,518 test images have no labels and that a separate validation split is used instead. So if I use the author's trained model to generate annotations on the test set, how are those results evaluated? I also looked at the evaluation code in the KITTI development kit; what is the email-sending in the main function about? Can labels for the test set be generated?

GBJim commented 8 years ago

Hi @hk003, Section 5 of the original paper says:

"In total,7,481 images are available for training/validation, and 7,518 for testing. Since no ground truth is available for the test set, we followed [5], splitting the trainval set into training and validation sets. In all ablation experiments, the training set was used for learning and the validation set for evaluation"

In other words, since KITTI provides no ground truth for the 7,518 test images, the authors follow the approach of a related work and split the 7,481 training images into two sets: one for training the model and one for evaluating it.
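For illustration, here is a minimal Python sketch of such a split (the file names are hypothetical; this is not the authors' script). Note that a purely random split like this ignores the video-sequence constraint discussed further down in this thread:

```python
import random

# Hypothetical sketch: split the 7,481 KITTI trainval image indices
# roughly in half into a training list and a validation list.
NUM_TRAINVAL = 7481
random.seed(0)  # fixed seed so the split is reproducible

indices = list(range(NUM_TRAINVAL))
random.shuffle(indices)

train_ids = sorted(indices[:NUM_TRAINVAL // 2])
val_ids = sorted(indices[NUM_TRAINVAL // 2:])

# KITTI image IDs are zero-padded to six digits, e.g. 003721
with open("train.txt", "w") as f:
    f.writelines("%06d\n" % i for i in train_ids)
with open("val.txt", "w") as f:
    f.writelines("%06d\n" % i for i in val_ids)
```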

Hope this helps.

baiyancheng20 commented 8 years ago

@GBJim Hi, I divided the training dataset into training and val sets and used the training set to train my detector. How do I evaluate the detection results on my own machine? Should I use 'evaluate_object.cpp' to measure my detector's performance on the val set? I want to find good parameters and then submit my detector's results. Thank you!

GBJim commented 8 years ago

@baiyancheng20 It looks like you got your answer in issue https://github.com/zhaoweicai/mscnn/issues/22

GBJim commented 8 years ago

@hk003

The following paragraph is the splitting method from the paper 3D Object Proposals for Accurate Object Class Detection. MSCNN adopts the same approach.

Since the test ground-truth labels are not available, we split the KITTI training set into train and validation sets (each containing half of the images). We ensure that our training and validation set do not come from the same video sequences, and evaluate the performance of our bounding box proposals on the validation set.
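To make the "not from the same video sequences" part concrete, here is a hedged Python sketch that refines the simple random split shown earlier in the thread. It assumes the mapping folder from the KITTI object development kit, where (as I understand the layout, so please verify) train_rand.txt is a single comma-separated list of 1-based indices into train_mapping.txt, and each line of train_mapping.txt reads "<date> <drive> <frame>", e.g. "2011_09_26 2011_09_26_drive_0005_sync 0000000109". Whole drives are assigned to one side of the split, so no sequence is shared:

```python
import random
from collections import defaultdict

# Assumed inputs from the KITTI object devkit "mapping" folder (not part of mscnn):
#   train_rand.txt    - one comma-separated list; the i-th entry is the 1-based
#                       line in train_mapping.txt that object image i-1 came from
#   train_mapping.txt - one "<date> <drive> <frame>" line per entry
with open("mapping/train_rand.txt") as f:
    rand = [int(x) for x in f.read().strip().split(",")]
with open("mapping/train_mapping.txt") as f:
    mapping = [line.split() for line in f if line.strip()]

# Group object-training image indices by the raw drive (video) they were taken from.
drive_to_images = defaultdict(list)
for img_idx, line_no in enumerate(rand):
    drive = mapping[line_no - 1][1]  # e.g. "2011_09_26_drive_0005_sync"
    drive_to_images[drive].append(img_idx)

# Shuffle the drives and assign whole drives to the training split until it holds
# roughly half of the images; the remaining drives become the validation split,
# so no video sequence appears on both sides.
drives = list(drive_to_images)
random.seed(0)
random.shuffle(drives)

train_ids, val_ids = [], []
half = len(rand) // 2
for drive in drives:
    bucket = train_ids if len(train_ids) < half else val_ids
    bucket.extend(drive_to_images[drive])

with open("train.txt", "w") as f:
    f.writelines("%06d\n" % i for i in sorted(train_ids))
with open("val.txt", "w") as f:
    f.writelines("%06d\n" % i for i in sorted(val_ids))
```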

Once you have your new training data, follow the training steps in the README and replace the training window files. The window files are the annotation files that contain the bounding-box information.
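On the window-file side, one possible approach is to filter an existing trainval window file down to the images in each split instead of regenerating everything. The sketch below rests on an assumption about the layout: it presumes each record starts with a line beginning with '#' and that the next line is the image path whose basename is the six-digit image ID, as in Caffe-style window files. Check the files the data scripts in this repo actually produce before relying on it:

```python
import os

def filter_window_file(window_file, keep_ids, out_file):
    """Keep only the records whose image ID is in keep_ids.

    Assumed layout (Caffe-style window file; verify against the files the
    mscnn data scripts generate): every record starts with a line beginning
    with '#', and the following line is the image path whose basename is the
    six-digit KITTI image ID.
    """
    with open(window_file) as f:
        lines = f.readlines()

    # Split the file into records, one per '#' header line.
    records, current = [], None
    for line in lines:
        if line.startswith("#"):
            current = []
            records.append(current)
        if current is not None:
            current.append(line)

    # Keep the records whose image ID belongs to the requested split.
    kept = []
    for rec in records:
        img_id = os.path.splitext(os.path.basename(rec[1].strip()))[0]
        if img_id in keep_ids:
            kept.append(rec)

    # Renumber the '#' headers so the indices stay consecutive.
    with open(out_file, "w") as f:
        for new_idx, rec in enumerate(kept):
            f.write("# %d\n" % new_idx)
            f.writelines(rec[1:])

# Example usage with hypothetical file names:
# val_ids = set(line.strip() for line in open("val.txt"))
# filter_window_file("window_file_trainval.txt", val_ids, "window_file_val.txt")
```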

twangnh commented 6 years ago

Hi @GBJim! Thanks for sharing your work. I'm wondering how to ensure that the training and validation sets do not come from the same video sequences.

GBJim commented 6 years ago

@MrWanter You can simply pre-select some videos as the training set and use the rest for validation.