fjchange / object_centric_VAD

A TensorFlow re-implementation of the CVPR 2019 paper "Object-centric Auto-Encoders and Dummy Anomalies for Abnormal Event Detection in Video"

tensorflow svm does not work well #12

Open jiangdizhao opened 4 years ago

jiangdizhao commented 4 years ago

For about a month I have been trying to reproduce your results, but I always fail to get them. The following points confuse me:

  1. The paper says: "For each object, we obtain two image gradients, one representing the change in motion from frame t−3 to frame t and one representing the change in motion from frame t to frame t+3." In my opinion this means a temporal gradient: motion_1 = abs(frame[t] - frame[t-3]); motion_2 = abs(frame[t+3] - frame[t]);

    In your implementation it is instead:
    motion_1 = [ sobel_x(frame[t-3]), sobel_y(frame[t-3]) ]
    motion_2 = [ sobel_x(frame[t+3]), sobel_y(frame[t+3]) ]
    which is the spatial gradient of frame[t-3] and frame[t+3], not the change in motion between frames (see the gradient sketch after this list).

  2. I implemented the SVM with a hinge loss and the Adam optimizer in TensorFlow (a sketch of this setup is given after the list). After training, each of the 10 SVMs correctly classifies the features produced by the 3 auto-encoders: for example, for a feature labelled 3, the 10 SVMs give scores like [-2.3, -1.6, -3.5, 1.2, -1.7, -5.4, -2.5, -0.9, -2.7, -1.1]. However, when I feed test frames from the ShanghaiTech dataset, patches such as people riding a bike, people jumping, or people fighting still get positive scores, so -max(scores) is negative and every patch is regarded as normal.
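
For reference, here is a minimal sketch contrasting the two readings of "image gradient" from point 1. It is only an illustration of the two interpretations, not code from this repo; the frame variables, dtype handling, and Sobel kernel size are my assumptions.

```python
import cv2
import numpy as np

def temporal_gradient(frame_prev, frame_curr):
    # Reading (a): "change in motion" as an absolute temporal difference.
    return np.abs(frame_curr.astype(np.float32) - frame_prev.astype(np.float32))

def spatial_gradient(frame):
    # Reading (b): what the repo computes - Sobel x/y gradients of a single frame.
    gx = cv2.Sobel(frame, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(frame, cv2.CV_32F, 0, 1, ksize=3)
    return np.stack([gx, gy], axis=-1)

# For an object crop around frame t (frames assumed to be grayscale arrays):
# motion_1 = temporal_gradient(frames[t - 3], frames[t])   # reading (a), my interpretation
# motion_1 = spatial_gradient(frames[t - 3])               # reading (b), as in this repo
```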
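
And this is roughly the one-vs-rest linear SVM setup from point 2, written as a TensorFlow 2 sketch. The number of clusters, feature dimension, learning rate, and regularization weight here are placeholders rather than the exact values I used; the relevant parts are the hinge loss, the Adam optimizer, and the -max(scores) abnormality score.

```python
import tensorflow as tf

NUM_CLUSTERS = 10   # number of k-means clusters, i.e. one linear SVM per cluster
FEAT_DIM = 3072     # placeholder for the concatenated auto-encoder feature size

# One weight matrix holds all 10 linear SVMs (one column per cluster).
W = tf.Variable(tf.zeros([FEAT_DIM, NUM_CLUSTERS]))
b = tf.Variable(tf.zeros([NUM_CLUSTERS]))
optimizer = tf.keras.optimizers.Adam(1e-3)

def decision_values(x):
    # x: [batch, FEAT_DIM] -> [batch, NUM_CLUSTERS] signed margins.
    return tf.matmul(x, W) + b

@tf.function
def train_step(x, cluster_ids):
    # cluster_ids: [batch] integer pseudo-labels from k-means.
    # One-vs-rest targets in {-1, +1} for each SVM.
    y = 2.0 * tf.one_hot(cluster_ids, NUM_CLUSTERS) - 1.0
    with tf.GradientTape() as tape:
        scores = decision_values(x)
        hinge = tf.reduce_mean(tf.maximum(0.0, 1.0 - y * scores))
        loss = hinge + 1e-4 * tf.reduce_sum(tf.square(W))  # placeholder L2 weight
    grads = tape.gradient(loss, [W, b])
    optimizer.apply_gradients(zip(grads, [W, b]))
    return loss

def abnormality_score(x):
    # A normal object should be accepted (positive margin) by at least one SVM,
    # so the anomaly score is the negated maximum decision value.
    return -tf.reduce_max(decision_values(x), axis=-1)
```

With this scoring, a patch only looks abnormal when every one of the 10 SVMs rejects it, which is why the positive test scores above make -max(scores) negative for those patches.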