smallcorgi / Anticipating-Accidents

Anticipating Accidents in Dashcam Videos (ACCV 2016)

Running the demo #5


CindyHXH commented 6 years ago

Hi. I was trying to run the demo:

python accident.py --model ./demo_model/demo_model

It outputs errors like:

NotFoundError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Key model/LSTM/lstm_cell/bias not found in checkpoint [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
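
To compare the variable names the graph expects against what the checkpoint actually contains, the checkpoint's keys can be listed directly — a minimal sketch, assuming TensorFlow 1.x and the path from the command above:

# Sketch: list the variables stored in a TF 1.x checkpoint so they can be
# compared against the names the restore op is looking for.
import tensorflow as tf

ckpt_path = "./demo_model/demo_model"  # same path passed to accident.py
for name, shape in tf.train.list_variables(ckpt_path):
    print(name, shape)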

May I ask what the exact file and path for your model are? Thank you.

smallcorgi commented 6 years ago

Hi @CindyHXH,

This was a bug caused by a version difference.

I have uploaded a new model; please try it again.

Thanks

CindyHXH commented 6 years ago

Hi @smallcorgi,

Thanks for sharing the new model. It works fine. I am working on some related topics with different video data.

I simply want to test some new videos. May I ask how you extracted these features, and could you share that part of the code? Or could you kindly guide me on how to test a new video with your current code? Thank you very much.

smallcorgi commented 6 years ago

Hi @CindyHXH,

My feature-extractor model is missing, so you can directly use Faster-RCNN with an MS COCO pre-trained model to extract the features.

But the accident model will need to be retrained, and its performance will not be better than the demo model's.
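
For reference, a minimal sketch of getting per-frame detections from an off-the-shelf COCO-pretrained Faster-RCNN — using torchvision's implementation here as a stand-in, not the original Caffe-based py-faster-rcnn:

# Sketch: torchvision's COCO-pretrained Faster-RCNN as a stand-in detector.
import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

img = to_tensor(Image.open("frame_0001.jpg").convert("RGB"))  # hypothetical frame path
with torch.no_grad():
    out = model([img])[0]  # dict with 'boxes', 'labels', 'scores'

keep = out["scores"] > 0.5  # confidence threshold, tune as needed
print(out["boxes"][keep], out["labels"][keep])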

CindyHXH commented 6 years ago

Hi @smallcorgi,

Thank you very much for your reply. Based on your paper, you extract several features, e.g. appearance features and improved dense trajectory (IDT) features. On my side, I used a YOLO detector to extract appearance features. I am interested in how you compute the label information in your feature files (batch_*.npz), such as:

labels = [[1. 0.] [1. 0.] [1. 0.] [1. 0.] [0. 1.] [1. 0.] [0. 1.] [0. 1.] [0. 1.] [1. 0.]]

I want to leverage part of your work to make progress towards a potential paper. Would it be possible for you to share your feature extraction code via email (hxhcindy@gmail.com)? Thank you so much.

smallcorgi commented 6 years ago

Hi @CindyHXH, I use one-hot encoding for the labels, where 0/1 mean no accident / has accident. Sorry, my feature extraction code is missing, so you may have to write it yourself. For example, I used Faster-RCNN based on VGG16, which gives the final detection results and the feature map of the "fc7" layer. You can then dump these in the same format as my example.
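
For anyone reproducing this, a minimal sketch of dumping features into a batch_*.npz with the keys used in this repo (data, det, labels, ID); the array shapes and the [x1, y1, x2, y2, score, class] detection layout are illustrative assumptions, not the repo's exact spec:

import numpy as np

# Illustrative dimensions only: 10 clips, 100 frames each, up to 19 detected
# objects per frame, 4096-dim fc7 features from a VGG16-based Faster-RCNN.
n_videos, n_frames, n_objects, feat_dim = 10, 100, 19, 4096

data = np.zeros((n_videos, n_frames, n_objects + 1, feat_dim), np.float32)  # fc7 features; slot 0 assumed to hold the full frame
det = np.zeros((n_videos, n_frames, n_objects, 6), np.float32)  # assumed layout: [x1, y1, x2, y2, score, class]
labels = np.zeros((n_videos, 2), np.float32)  # one-hot per the note above: class 0 = no accident, class 1 = accident
ID = np.array(["video_%04d" % i for i in range(n_videos)])  # clip identifiers

# ... fill data/det/labels from the detector outputs here ...

np.savez("batch_0001.npz", data=data, det=det, labels=labels, ID=ID)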

CindyHXH commented 6 years ago

Hi @smallcorgi,

Sounds awesome. Thank you so much.

VivekMaran27 commented 6 years ago

@CindyHXH / @smallcorgi

If possible, could you please share the feature extraction code (vivekmaran27@gmail.com)? I could try to write one myself, but it'll be very helpful and save me a lot of time if I can use the existing one.

dmitryshendryk commented 5 years ago

Hi @smallcorgi,

I have a question regarding the feature format in the npz files:

all_data['det'] - the objects detected in the frames, i.e. the bounding boxes? So you just put them into the vector, correct? For example, you collect 20 cars from Faster-RCNN and store them here?
all_data['ID'] - the improved dense trajectory?
all_data['data'] - the appearance features?
all_data['labels'] - as discussed above, just accident / no accident?

Also, regarding the pipeline at inference time: first collect the features (appearance features, improved dense trajectories, bounding boxes of objects) and store them in a vector, then pass it to the LSTM. Correct?
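
For my part, a quick way to sanity-check those roles is to load one of the provided feature files and print each stored array — a minimal sketch (file name assumed):

import numpy as np

# Load one feature file (name assumed) and inspect the arrays discussed above,
# checking each shape against its presumed role.
all_data = np.load("batch_0001.npz")
for key in ("data", "det", "labels", "ID"):
    print(key, all_data[key].shape, all_data[key].dtype)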