Keshav-Kumar-Agrawal / Keshav.github.io


EFFICIENT HUMAN VIOLENCE DETECTION #1

Open Keshav-Kumar-Agrawal opened 1 week ago

Keshav-Kumar-Agrawal commented 1 week ago

Led the collection of a dataset of 2,000 videos, extracted frame-level features, and scaled them to MobileNet's input size. For higher classification accuracy, we used the Adam optimizer with binary cross-entropy loss, chose MobileNet deliberately for its efficiency, and added extra classification layers on top. Training and validation completed successfully, demonstrating a commitment to robust video classification.

Keshav-Kumar-Agrawal commented 1 week ago

c-17.pptx

Mini Project Final Presentation 2023 on "Efficient Human Violence Detection"

Keshav-Kumar-Agrawal commented 1 week ago

VideoCapturePath.txt

Video Capture Path

Keshav-Kumar-Agrawal commented 1 week ago

Mini project final report.pdf

Mini project final report

Keshav-Kumar-Agrawal commented 1 week ago

video sample.docx

video sample

Keshav-Kumar-Agrawal commented 1 week ago

Algorithm for key frame extraction (a sketch implementation follows the list):

  1. Load a pre-trained violence detection model.
  2. Set the key frame interval; in this case it is 10 frames.
  3. Set up output directories for violent and non-violent frames.
  4. Open the input video file using OpenCV.
  5. Initialize a frame counter to keep track of the number of frames processed.
  6. Set the previous key frame to None.
  7. While the video is still open, read the next frame.
  8. If there are no more frames to read, break out of the loop.
  9. Increment the frame counter.
  10. If this is the first frame, set the previous key frame and current key frame to this frame.
  11. If this is a key frame (i.e., frame_count is a multiple of key_frame_interval), calculate the mean absolute error (MAE) between the current frame and the previous key frame by averaging the per-pixel output of cv2.absdiff().
  12. If the MAE is greater than 0.05, set the previous key frame and current key frame to this frame.
  13. Otherwise, continue to the next frame.
  14. Classify the current key frame as violent or non-violent using the pre-trained model.
  15. Save the current key frame in the appropriate output directory based on its classification.
  16. Release the input video file and close all windows.
  17. Print out statistics, including the total number of frames processed, the number of violent frames saved, and the number of non-violent frames saved.
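
A minimal Python sketch of the steps above. The interval of 10 and the 0.05 threshold come from the list; the model filename violence_model.h5, the video path input.mp4, the 224x224 MobileNet input size, and scaling the MAE to a 0-1 range by dividing by 255 are assumptions made for illustration.

import os

import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Steps 1-3: load the model (hypothetical filename), set the interval,
# and create the output directories.
model = load_model('violence_model.h5')
key_frame_interval = 10
os.makedirs('violent', exist_ok=True)
os.makedirs('non_violent', exist_ok=True)

# Steps 4-6: open the video and initialize state.
cap = cv2.VideoCapture('input.mp4')
frame_count = 0
prev_key_frame = None
violent_saved = 0
non_violent_saved = 0

# Steps 7-9: read frames until the video is exhausted.
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    frame_count += 1

    # Steps 10-13: take the first frame as a key frame; afterwards, take a
    # frame at the interval only if its MAE against the previous key frame
    # exceeds the threshold (MAE scaled to 0-1 by dividing by 255, an assumption).
    key_frame = None
    if prev_key_frame is None:
        key_frame = frame
    elif frame_count % key_frame_interval == 0:
        mae = cv2.absdiff(frame, prev_key_frame).mean() / 255.0
        if mae > 0.05:
            key_frame = frame
    if key_frame is None:
        continue
    prev_key_frame = key_frame

    # Step 14: classify the key frame (224x224 input assumed for MobileNet).
    x = cv2.resize(key_frame, (224, 224)).astype('float32') / 255.0
    prob = model.predict(x[np.newaxis, ...])[0][0]

    # Step 15: save the key frame in the directory matching its predicted class.
    label = 'violent' if prob > 0.5 else 'non_violent'
    cv2.imwrite(os.path.join(label, f'{frame_count}.jpg'), key_frame)
    if label == 'violent':
        violent_saved += 1
    else:
        non_violent_saved += 1

# Steps 16-17: clean up and report statistics.
cap.release()
cv2.destroyAllWindows()
print(f'Frames processed: {frame_count}, '
      f'violent saved: {violent_saved}, non-violent saved: {non_violent_saved}')
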
Keshav-Kumar-Agrawal commented 1 week ago

Method used: transfer learning using the MobileNet architecture

Step 1. Libraries: Import the necessary libraries, including TensorFlow, Keras, and OpenCV.
Step 2. Converting to frames: Convert the videos in the dataset to frames for pre-processing and training.
Step 3. Pre-processing: Load the frames and pre-process the images to ensure uniformity in size and color.
Step 4. Splitting of data: Split the total dataset into training (80%) and testing (20%) sets.
Step 5. Load the MobileNet model: Load the MobileNet model and define additional layers for classification. The MobileNet architecture suits the project because it is lightweight and has a low computational cost.
Step 6. Model training: Train the model on the training set while validating it on the validation set.
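
A minimal Keras sketch of Steps 5 and 6, under assumptions not stated in the issue: 224x224 RGB frames, frames sorted into frames/train and frames/val folders (hypothetical paths) with one subfolder per class, and a 128-unit hidden layer standing in for the unspecified additional layers.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNet

# Step 5: load MobileNet pre-trained on ImageNet, without its top classifier,
# and freeze it so only the new layers are trained.
base = MobileNet(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# Additional layers for binary classification on top of the frozen base.
model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1,
                     input_shape=(224, 224, 3)),  # MobileNet expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation='relu'),   # hidden width is an assumption
    layers.Dropout(0.5),
    layers.Dense(1, activation='sigmoid'),  # binary violent / non-violent output
])

# Adam optimizer with binary cross-entropy, as stated in the method.
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Step 6: train while validating (hypothetical directory names).
train_ds = tf.keras.utils.image_dataset_from_directory(
    'frames/train', image_size=(224, 224), batch_size=32, label_mode='binary')
val_ds = tf.keras.utils.image_dataset_from_directory(
    'frames/val', image_size=(224, 224), batch_size=32, label_mode='binary')
model.fit(train_ds, validation_data=val_ds, epochs=10)
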

Keshav-Kumar-Agrawal commented 1 week ago

Code used to extract frames:

import cv2
import os

# Create the output folder for extracted frames if it doesn't exist.
if not os.path.exists('./frames'):
    os.mkdir('./frames')

count = 0
for filename in os.listdir('./videos/'):
    cap = cv2.VideoCapture(os.path.join('./videos/', filename))
    success, image = cap.read()
    while success:
        # Write the current frame with a running index across all videos,
        # then read the next one (the original re-read before writing,
        # which dropped the first frame of every video).
        cv2.imwrite('./frames/' + str(count) + '.jpg', image)
        count += 1
        success, image = cap.read()
    cap.release()
