MoonBlvd / tad-IROS2019

PyTorch code for the paper "Unsupervised Traffic Accident Detection in First-Person Videos" (IROS 2019).
MIT License

Model was trained, how to run anomaly detection on custom video file? #41

Open mwy001 opened 1 year ago

mwy001 commented 1 year ago

I have followed the steps to train the model on the HEV-I features and ego-motion files provided by the author. Now I have:

  • fol_epoch_091_loss_0.0113.pt
  • ego_pred_epoch_091_loss_0.0180.pt

Next I need to run anomaly detection on a custom video file downloaded from the internet. How do I do this?

My current understanding is to process the video with the following steps:

  • Detection: Mask R-CNN
  • Tracking: DeepSort
  • Dense optical flow: FlowNet2.0
  • Ego motion: ORB-SLAM2

Then follow the code in run_fol_for_AD.py and run_AD.py for the anomaly detection?

Thanks.

Hardik7674 commented 1 year ago

> I have followed the steps to train the model on the HEV-I features and ego-motion files provided by the author. […] Then follow the code in run_fol_for_AD.py and run_AD.py for the anomaly detection?

Have you run anomaly detection on your custom video file? Could you please tell me how to generate the features (especially the ego motion using ORB-SLAM2) for your video file?

Thank you.

mwy001 commented 1 year ago

> Have you run anomaly detection on your custom video file? Could you please tell me how to generate the features (especially the ego motion using ORB-SLAM2) for your video file?

No, not yet

sbjshxbxijs commented 1 year ago

Hi, I am a student from China. Could you please share the HEV-I dataset with me? Thank you very much!

sbjshxbxijs commented 1 year ago

> I have followed the steps to train the model on the HEV-I features and ego-motion files provided by the author. […] Then follow the code in run_fol_for_AD.py and run_AD.py for the anomaly detection?

Hi, I am a student from China. Could you please share the HEV-I dataset with me? Thank you very much! My email address is 3020189856@qq.com

trThanhnguyen commented 1 year ago

Hi @sbjshxbxijs, you don't need the raw HEV-I dataset to train your model. Just download the feature files linked in the README and extract them (they become train and val folders), then edit 'data_root' in 'fol_ego_train.yaml' to point to where those train and val folders are stored. Run the training scripts and you will get a similar result. For more information, check out issue #1 :) Hope this helps.
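For reference, the relevant lines in fol_ego_train.yaml might look something like this; the data_root path below is a placeholder for illustration, not the repo's actual default, so check your own checkout:

```yaml
# fol_ego_train.yaml (excerpt) — data_root must point to the folder
# that contains the extracted train/ and val/ feature directories
data_root: "/path/to/hevi_features"   # placeholder; contains train/ and val/
checkpoint_dir: "checkpoints/fol_ego_checkpoints"
```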

sbjshxbxijs commented 1 year ago

I don't have the RGB pictures (the frames); in other words, I don't have access to the HEV-I dataset. If I only download the features, I don't think I will get the result.

trThanhnguyen commented 1 year ago

@sbjshxbxijs Are you sure? Have you tried it? I trained successfully with just those files and got a similar loss value to this issue's starter.

sbjshxbxijs commented 1 year ago

Did you only use the feature files, and not the HEV-I dataset, during training?

awais019 commented 1 year ago

Hello, can you help me with the checkpoints directory? When I run the project, it gives the error of a missing checkpoint directory.

trThanhnguyen commented 1 year ago

Hi @Awais-019, you only need to make a new directory in the current working directory that matches the configuration in the fol_ego_train.yaml file; for me it is checkpoint_dir: "checkpoints/fol_ego_checkpoints". It is the directory where the checkpoints will be stored, so your job is to create it first.
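The step above can be sketched in a couple of lines; the paths assume the checkpoint directories mentioned in this thread, so adjust them to match your own .yaml settings:

```python
import os

# Create the checkpoint directories the training configs expect,
# before running training; otherwise saving a checkpoint will fail.
for d in ("checkpoints/fol_ego_checkpoints",
          "checkpoints/ego_pred_checkpoints"):
    os.makedirs(d, exist_ok=True)  # no-op if the directory already exists
```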

awais019 commented 1 year ago

Thanks, but what about the best ego model? It also gives an error.

awais019 commented 1 year ago

Kindly take a look at this error as well: ValueError: num_samples should be a positive integer value, but got num_samples=0

awais019 commented 1 year ago

Please check; it is giving this error: FileNotFoundError: [Errno 2] No such file or directory: 'checkpoints/ego_pred_checkpoints/epoch_080_loss_0.001.pt'

MoonBlvd commented 1 year ago

> hi, I am a student from china, can you please share the HEV-I dataset with me? Thank you very much!

Hi, I don't have the authority to share the raw videos with you. However, I have uploaded the train and validation features I extracted from the HEV-I paper: https://github.com/MoonBlvd/tad-IROS2019#hev-i-dataset

These should be enough to train a FOL model.

MoonBlvd commented 1 year ago

@Awais-019 I think you didn't put the data in the directory that your code is looking at. I saw @sbjshxbxijs posted a similar issue.
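For context, the num_samples error reported above comes from PyTorch's RandomSampler: when shuffle=True and the dataset reports length zero (e.g. because data_root points at an empty or wrong folder), DataLoader construction fails. A minimal standalone reproduction, not the repo's own code:

```python
from torch.utils.data import DataLoader, Dataset

class EmptyDataset(Dataset):
    """Simulates a data_root that resolved to zero samples."""
    def __len__(self):
        return 0
    def __getitem__(self, idx):
        raise IndexError(idx)

try:
    # shuffle=True makes DataLoader build a RandomSampler, which
    # rejects datasets with num_samples == 0 at construction time
    DataLoader(EmptyDataset(), batch_size=4, shuffle=True)
    err = None
except ValueError as e:
    err = str(e)

print(err)  # num_samples should be a positive integer value, but got num_samples=0
```

So the fix is not in the DataLoader itself: make sure the train and val feature folders actually contain files at the path your config points to.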

MoonBlvd commented 1 year ago

> Check it is giving this error FileNotFoundError: [Errno 2] No such file or directory: 'checkpoints/ego_pred_checkpoints/epoch_080_loss_0.001.pt'

Did you run train_pred_ego.py first? You won't have those checkpoints if you haven't run the ego-motion training.