Closed — Kyla06 closed this issue 7 years ago
Hi,
The model is trained on the Caltech dataset, and its properties may differ from your data: e.g., the pedestrians in Caltech are small in scale and often blurry.
The results might improve after training a model on your own data.
I tried to increase the number of outputs in the following way, and also by decreasing opts.nms_overlap_thres, but the number of bounding boxes I got did not increase significantly. Is there any way to increase the number of outputs so as to improve recall? Thanks.
Hi,
It might help to increase the number of proposals by changing opts.nms_overlap_thres and opts.after_nms_topN.
Then I ran into another error. Could you please have a look?
```
Preparing training data...total (2) = nonempty (2) + empty (0)
Starting parallel pool (parpool) using the 'local' profile ... connected to 12 workers.
Done.
Preparing validation data...total (2) = nonempty (2) + empty (0)
Done.
Error using arrayfun
Sparse Arrays are not supported. See SPFUN.

Error in proposal_train_caltech>generate_random_minibatch (line 195)
empty_image_inds = arrayfun(@(x) sum(x.bbox_targets{1}(:, 1)==1) == 0, image_roidb, 'UniformOutput', true);
```
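In case it helps anyone hitting the same error: `arrayfun` cannot operate on sparse data here, and the `bbox_targets` matrices appear to be stored sparse. A minimal sketch of one possible workaround — converting the sparse column to full before the comparison — assuming the variable names from the stack trace above:

```matlab
% Sketch of a workaround for "Sparse Arrays are not supported":
% materialize the sparse first column with full() before summing.
% (Variable names follow the stack trace; not the repo's official fix.)
empty_image_inds = arrayfun(@(x) ...
    sum(full(x.bbox_targets{1}(:, 1)) == 1) == 0, ...
    image_roidb, 'UniformOutput', true);
```

Whether this is the right fix depends on why `bbox_targets` ended up sparse in the first place; with only two images in the imdb, the data-preparation step is the more likely culprit.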
Thanks very much! @zhangliliang
@gy1874 You might check the data-preparation part. It seems that the code detected only two images in the imdb.
I changed opts.nms_overlap_thres to 0.3 and opts.after_nms_topN to 300, but the number of output proposals did not increase much. Maybe I should train the model using my own dataset. I have another question about the BF training. Reading the code, I found that only negative samples are added in the next training stage, and the forest trained in the previous stage seems not to be used any more. So what is the effect of multi-stage training? Thanks a lot! @zhangliliang
Actually, nms_overlap_thres should be increased to 0.9 or 1 to retain more proposals.
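To make the direction of the change explicit — a higher NMS overlap threshold suppresses fewer boxes, so more proposals survive. A minimal sketch of the relevant settings; the field names follow this thread, but the exact values are only illustrative assumptions:

```matlab
% Hypothetical settings for retaining more proposals at test time.
% A box is suppressed only if it overlaps a higher-scored box by MORE
% than nms_overlap_thres, so raising the threshold keeps more boxes.
opts.nms_overlap_thres = 0.9;   % looser NMS: fewer boxes suppressed
opts.after_nms_topN    = 1000;  % keep more boxes after NMS
```

Note that lowering the threshold to 0.3, as tried above, has the opposite effect: it makes NMS stricter and discards more proposals.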
Multi-stage training is used to mine hard negatives step by step, which helps the robustness of the classifier in the final stage. This technique is widely used and is also called bootstrapping.
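The loop can be sketched roughly as follows; the function names (train_forest, apply_forest) are placeholders for illustration, not the repo's actual API:

```matlab
% Sketch of multi-stage hard-negative mining (bootstrapping).
% Each stage retrains the forest from scratch on an enlarged negative
% set, which is why the previous stage's forest is not reused directly:
% its only role is to find the false positives that become new negatives.
neg_set = initial_negatives;
for stage = 1:num_stages
    model  = train_forest(pos_set, neg_set);          % retrain from scratch
    scores = apply_forest(model, candidate_negatives); % score background boxes
    hard   = candidate_negatives(scores > score_thresh, :); % false positives
    neg_set = [neg_set; hard];                        % feed them to the next stage
end
```

So the earlier forests are discarded, but their mistakes are not: each stage's false positives shape the training set of the next.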
I followed the instructions in "Testing Demo" to see the detection results on my own dataset, but I got very low recall, which disagrees with the results in the paper. Did I get something wrong? Thanks a lot. Here is the ROC I got: