Closed pradheepram closed 1 year ago
Has the problem been resolved? My predictions were also wrong.
@pradheepram @LTNdeep Thanks for your interest in this repo! The results should be correct. For the crash dataset, the positive and negative videos are collected from different sources, so their visual quality, lighting, background, etc. differ. This can lead the model to overfit to the scene (i.e., learn spurious correlations from accident-irrelevant cues). Solving this issue from a technical perspective would be a promising direction for future work :)
I used the run_train_test.sh file to test, and it only shows 20 results. Where can I set this parameter?
What do you mean by 20 results?
If you mean the number of output scores for each video, you may need to carefully double-check `self.n_frames` and `self.fps` in your dataloader; see src/DataLoader.py#L22 for reference.
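As a rough sketch of how those two parameters govern the number of per-video scores: the model emits one accident probability per retained frame, so the score count is bounded by the decoded frame count and the sequence cap. The function below is illustrative only; the attribute names come from the repo, but the logic is an assumption about the dataloader's behavior, not its exact code.

```python
# Hypothetical sketch: how n_frames and fps could bound the number of
# per-video output scores. Only the parameter names mirror src/DataLoader.py.
def num_output_scores(video_seconds: float, fps: int, n_frames: int) -> int:
    """One score is produced per retained frame: the video is resampled
    at `fps` and the sequence is capped at `n_frames`."""
    decoded = int(video_seconds * fps)  # frames after temporal resampling
    return min(decoded, n_frames)       # dataloader caps the sequence length

# e.g. a 5-second clip resampled at 10 fps with a 100-frame cap -> 50 scores
print(num_output_scores(5.0, 10, 100))  # 50
```

So if you are seeing exactly 20 scores, check whether your effective `fps` times the clip duration, or `n_frames` itself, works out to 20.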
Thanks! I have solved the problem.
I did a test run on the crash dataset with the given pretrained models (R-CNN and the UString VGG-16 model) and got the same output as reported in the paper related to this project. My question is about the test visualizations: all positive examples sit almost always at a probability of 1.0 or 0.0, and all negative samples behave the same way. That shouldn't be the case, since the probability should be a fluctuating value over time. I don't think I ran the model incorrectly, since I got the same results as in the paper.
video-level AP = 0.99542; Average Precision = 0.9699; mean Time-to-Accident = 4.7382; Time-to-Accident at Recall@80% = 4.25; mean aleatoric uncertainty = 0.013649; mean epistemic uncertainty = 0.000099
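For anyone reproducing these numbers, Time-to-Accident (TTA) is conventionally the gap between the first frame where the predicted probability crosses a threshold and the annotated accident frame, converted to seconds. Here is a minimal sketch of that convention; the function, the 0.5 threshold, and the zero-TTA fallback are illustrative assumptions, not the repo's exact evaluation code.

```python
# Illustrative sketch of per-video Time-to-Accident (TTA), assuming a
# fixed probability threshold. Not the repo's actual evaluation routine.
def time_to_accident(scores, toa_frame, fps, threshold=0.5):
    """Return seconds between the first frame whose accident score
    reaches `threshold` (before the accident) and the accident frame
    `toa_frame`; return 0.0 if the model never fires in time."""
    for t, s in enumerate(scores):
        if t >= toa_frame:       # too late: accident already happened
            break
        if s >= threshold:       # earliest confident anticipation
            return (toa_frame - t) / fps
    return 0.0

# e.g. scores cross 0.5 at frame 2, accident at frame 3, 10 fps -> 0.1 s
print(time_to_accident([0.1, 0.2, 0.9, 0.95], 3, 10.0))  # 0.1
```

Note that with saturated scores (near 0.0 or 1.0 from the first frame, as described above), a threshold-based TTA can look artificially large, which is consistent with the scene-overfitting explanation given earlier in this thread.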
The 20 sample test visualizations I got are: