Keshav-Kumar-Agrawal opened 1 week ago
Mini Project Final Presentation 2023 on “Efficient Human Violence Detection”
Video Capture Path
Mini project final report
video sample
Algorithm for key frame extraction
Method used: Transfer learning using the MobileNet architecture
Step 1. Import libraries: Import the necessary libraries, including TensorFlow, Keras, and OpenCV.
Step 2. Convert videos to frames: Convert the videos in the dataset into frames for pre-processing and training.
Step 3. Pre-processing: Load the frames and pre-process the images to ensure uniform size and color.
Step 4. Split the data: Split the full dataset into training (80%) and testing (20%) sets.
Step 5. Load the MobileNet model: Load the MobileNet model and define additional layers on top for classification. MobileNet suits this project because it is lightweight and has a low computational cost.
Step 6. Model training: Train the model on the training set while validating it on the validation set.
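As a rough sketch of Step 4, the 80/20 split can be applied to the list of extracted frame paths before training. The helper below is illustrative, not the project's exact code; the fixed seed and the dummy file names are assumptions for the example.

```python
import random

def train_test_split_paths(paths, test_fraction=0.2, seed=42):
    """Shuffle paths deterministically, then split into (train, test) lists."""
    paths = sorted(paths)          # make the split independent of listing order
    rng = random.Random(seed)      # fixed seed so the split is reproducible
    rng.shuffle(paths)
    n_test = int(len(paths) * test_fraction)
    return paths[n_test:], paths[:n_test]

# Example with dummy frame names; a real run would list the frames directory.
frames = [f'{i}.jpg' for i in range(100)]
train, test = train_test_split_paths(frames)
print(len(train), len(test))  # 80 20
```

Splitting file paths (rather than loaded images) keeps memory use low and lets the training and testing sets be loaded lazily.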
Code used to extract frames:

import cv2
import os

os.makedirs('./frames', exist_ok=True)

count = 0
for fname in os.listdir('./videos/'):
    cap = cv2.VideoCapture(os.path.join('./videos/', fname))
    while True:
        success, image = cap.read()
        if not success:  # end of this video
            break
        cv2.imwrite('./frames/' + str(count) + '.jpg', image)
        count += 1
    cap.release()
Led the collection of a dataset of 2,000 videos, extracted frame features, and scaled them to fit MobileNet's input. To improve classification, we used the Adam optimizer with binary cross-entropy loss, chose MobileNet deliberately for its efficiency, and added extra classification layers on top of it. Training and validation completed successfully, demonstrating a commitment to robust video classification.
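The model setup described above (MobileNet backbone, extra layers, Adam, binary cross-entropy) can be sketched roughly as follows. The layer sizes, dropout rate, and function name are illustrative assumptions, not the project's exact values, and `weights=None` is used here only to avoid the ImageNet download; a real transfer-learning run would pass `weights='imagenet'`.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_violence_classifier(input_shape=(224, 224, 3)):
    # MobileNet backbone without its ImageNet classification head.
    # weights='imagenet' would give true transfer learning; None skips the download.
    base = tf.keras.applications.MobileNet(
        include_top=False, weights=None, input_shape=input_shape)
    base.trainable = False  # freeze the backbone, train only the new layers

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation='relu'),   # illustrative extra layer
        layers.Dropout(0.5),
        layers.Dense(1, activation='sigmoid'),  # violence vs. non-violence
    ])
    model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

model = build_violence_classifier()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # with a tf.data pipeline
```

The single sigmoid output pairs with binary cross-entropy for the two-class (violent / non-violent) decision, which matches the loss named above.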