DegardinBruno / human-self-learning-anomaly

Code for the paper "Human Activity Analysis: Iterative Weak/Self-Supervised Learning Frameworks for Detecting Abnormal Events", IJCB 2020

val_notes.csv test_notes.csv #9

Closed xrlaxowlsx closed 2 years ago

xrlaxowlsx commented 2 years ago

I tried to experiment with this project on a small dataset. However, during the normalization process, I could not obtain the val_notes.csv and test_notes.csv files. According to the Readme, they can be obtained by running the normalize_notes file, but all I got was a separate csv file for each video. Can you tell me how to obtain val_notes and test_notes, and how they are filled?

Are the annotations in normalize_notes the annotations in UBI_FIGHTS?

DegardinBruno commented 2 years ago

Hey @xrlaxowlsx, thank you for your interest!

We assume that each video has a corresponding .csv file, where one line corresponds to one frame. Obviously, each video will have a different length, right? Since we are working at video level (weakly-supervised learning) and with C3D, which has a frame length of 16, we normalize the videos and annotations to the same clip length, and the csv files are split into clips of 480 frames (16 frames x 30 fps, default settings).
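Conceptually, that clip splitting could be sketched as below (a minimal illustration only, not the repository's normalize_notes.py; the function name and structure are assumptions):

```python
CLIP_LEN = 16 * 30  # 16 frames x 30 fps = 480 frames per clip (default settings)

def split_annotations(per_frame_labels, clip_len=CLIP_LEN):
    """Split a per-frame label list into fixed-length clips.

    Each clip keeps its own slice of the frame-level labels, so a
    clip-level label can later be derived from it (e.g. max or mean).
    """
    clips = []
    for start in range(0, len(per_frame_labels), clip_len):
        clips.append(per_frame_labels[start:start + clip_len])
    return clips
```

A 1000-frame video would yield two full 480-frame clips plus a 40-frame remainder, which is why each video ends up with several per-clip csv files.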

Are the annotations in normalize_notes the annotations in UBI_FIGHTS?

Yes! The normalize_notes.py is ready for the UBI-Fights dataset.

xrlaxowlsx commented 2 years ago

Oh, thank you for confirming!

Now what I'm curious about is this. I wonder where the val_notes and test_notes csv files required for ws_ss are generated. I ran all the scripts in utils, and the data created is frames, sub_videos, dest_csv, root_features, and dest_SEG. (root_features was obtained by modifying input_C3D and running it directly. :)) The test_notes and val_notes needed to run ws_ss could not be created.

Do I still have more steps to do here?

DegardinBruno commented 2 years ago

For the val_notes.csv and test_notes.csv: after running normalize_notes.py for the validation set and the test set (where each folder has a csv file for each video), the val_notes.csv (for example, for the validation set) will be generated at the dest_csv path that you specify (an argument of normalize_notes.py).

Feel free to ask anything! It is my pleasure!

xrlaxowlsx commented 2 years ago

First, thank you for your kindness. I'm really sorry to keep asking the same question, but let me write down my situation. Below are the variables in each file as I understand them.

The numbers indicate the order of execution.

  1. video_frames.py

    • root_videos : the UBI_FIGHTS dataset (fight and normal videos)
    • root_conv_videos : converted (resized) videos
    • root_frames : all videos' frames
  2. normalize_videos.py

    • root_frames : all videos' frames (has F and N folders)
    • root_sub_videos : 30-frame videos
    • erase_frames : I did not set this flag
  3. normalize_notes.py

    • root_csv : the UBI_FIGHTS annotations (not separated into F and N)
    • dest_csv : destination of the preprocessed annotations (I couldn't get test_notes.csv and val_notes.csv! Do I have to create them myself?)
  4. input_C3D.py

    • root_video : sub_videos made in step 2 (normalize_videos.py)
    • root_features : the result of this processing (separated into F and N)
    • root_C3D_dir : the C3D model (feature extraction; I extracted features for every thousand videos)
    • root_frames : all videos' frames made in step 1 (video_frames.py)
    • csv_C3D : all C3D information (almost 530,000 lines, in a format like /var/lib/docker/exhd/root_features/normal/N_7_0_0_1_0/0/000000.fc6-1)
  5. normalize_C3D.py

    • root_C3D : C3D features; root_features from step 4
    • dest_SEG : destination of the temporal segments
    • sub_videos : I already created these in step 2

And here are my dataset and directories:

image

I wonder if the preprocessed data is supposed to look like this. And I still can't get val_notes.csv and test_notes.csv. When I run normalize_notes.py, it just creates a huge number of csv files, as shown in the picture below, not test_notes.csv and val_notes.csv. Looking at the code, there seems to be no code that generates test_notes.csv and val_notes.csv.

image

Have a nice day!!

DegardinBruno commented 2 years ago

No problem at all! My bad, sorry; there were some details I did not remember, so I wasn't understanding your question! After creating that bunch of csv files (which correspond to the clips of each video), you need to concatenate them into a single csv file. Format example for one line of the csv file for weak annotation (the label indicates whether the segment comes from a positive or a negative video):

path_to_temporal_segment_file, label

Format example for one line of the csv file for strong annotation (here the label corresponds to the mean over the 16 frames, of course):

path_to_raw_C3D_segment_file, label

Concatenation is necessary; I had forgotten that part!
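A minimal way to do that concatenation could look like the sketch below (an illustration only, not part of the repository; `concatenate_notes` and `label_from_path` are hypothetical names, and the label convention must match your own setup, e.g. UBI-Fights encodes the class in the file name prefix F_/N_):

```python
import csv
import glob
import os

def concatenate_notes(per_video_csv_dir, out_csv, label_from_path):
    """Concatenate the per-clip csv files produced by normalize_notes.py
    into one annotation file with a `path, label` line per segment file."""
    # Collect the segment files first, so the output file cannot match the glob.
    paths = sorted(glob.glob(os.path.join(per_video_csv_dir, "*.csv")))
    with open(out_csv, "w", newline="") as out:
        writer = csv.writer(out)
        for path in paths:
            writer.writerow([path, label_from_path(path)])
```

`label_from_path` is just a callback deciding the weak label from the file name (or from any other bookkeeping you keep), so the same helper works for both the validation and the test split.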

bashayerAlsalman commented 2 years ago

Hello @DegardinBruno Thank you for your usual support

I was able to generate multiple CSV files, one for each sub-video; the output is shown in the following image. Screenshot from 2021-12-07 10-21-36

1. Is this correct?
2. How can I concatenate the files into a single one?
3. How should I choose the validation and test values?
4. What are the expected label values for positive and negative? 0 and 1?

thank you a lot.

xrlaxowlsx commented 2 years ago

First, I was unable to continue this work due to graphics card issues, so please treat my answers as reference only.

  1. Yes. That's right!
  2. You have no choice but to do it yourself.
  3. It should be selected so that it does not overlap with the data used in train.
  4. Positive is labeled 0 and negative is labeled 1.

Importantly, val_notes and test_notes must have only labels, no paths. Have a nice day!
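Following that convention (labels only, no paths; positive = 0, negative = 1), a label-only notes file could be written with a sketch like this (the function name is hypothetical, not from the repository):

```python
import csv

def write_label_only_notes(labels, out_csv):
    """Write a val_notes/test_notes file with one label per line and no
    paths, using positive = 0 and negative = 1 as described above."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        for label in labels:
            writer.writerow([label])
```

The row order then has to match the order of the corresponding validation or test segments, since there are no paths to tie labels back to files.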

bashayerAlsalman commented 2 years ago

@xrlaxowlsx Thank you!! really appreciate your answer and support! Thank you a lot