JustinShenk / fer

Facial Expression Recognition with a deep neural network as a PyPI package
MIT License
351 stars 80 forks

Improve Processing time per frame in video while testing #37

Open xanthan011 opened 3 years ago

xanthan011 commented 3 years ago

Firstly, amazing work by the contributors.

I have installed the fer library on Google Colab (via pip). I wanted to know if there is a way to improve the processing time per frame; my aim is to reduce the total processing time when testing, say, 4 videos at once.

I have already tried multithreading and multiprocessing, but neither seems to reduce the processing time. I understand that your model looks at every frame of the video that is sent, but is there a way to run it on more than one video in parallel so as to reduce the overall execution time?
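One way to cut the per-video cost itself (independent of parallelism) is to analyze only every nth frame instead of all of them, since adjacent frames usually carry near-identical expressions. This is a sketch, not fer's built-in video loop: it assumes OpenCV (`cv2`) for frame reading and uses fer's per-image `FER.detect_emotions()` API; `every_n` and `analyze_sampled` are hypothetical names introduced here.

```python
def frame_indices(total_frames, every_n):
    """Indices of the frames that would actually be analyzed
    when sampling every `every_n`-th frame."""
    return list(range(0, total_frames, every_n))

def analyze_sampled(videofile, every_n=5):
    """Sketch: run fer's detector on every nth frame of a video.
    Third-party imports are kept local to the function."""
    import cv2                # OpenCV frame reader (assumption: installed)
    from fer import FER       # fer's per-image detector

    detector = FER(mtcnn=True)
    cap = cv2.VideoCapture(videofile)
    results = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            # detect_emotions returns a list of dicts, one per detected face
            results.append(detector.detect_emotions(frame))
        idx += 1
    cap.release()
    return results
```

With `every_n=5` a 30 fps video drops from 30 to 6 analyzed frames per second, roughly a 5x reduction in detector calls, at the cost of coarser temporal resolution.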

xanthan011 commented 3 years ago

A sample of the threading code I tried is given below:

import threading

from fer import Video
from fer import FER
import matplotlib.pyplot as plt
import os
import sys

def funk(video_name):
  try:
    videofile = video_name 
    # Face detection
    detector = FER(mtcnn=True)
    # Video predictions
    video = Video(videofile)
    # Output list of dictionaries
    raw_data = video.analyze(detector, display=False) 
  except Exception as e:
    print(f"In video {video_name} there was an error: \n {e}")

videos = ["a", "b", "c", "d"]
threads = []
for each in videos:
  t = threading.Thread(target=funk, args=[each])
  t.start()
  threads.append(t)

for t in threads:
  t.join()

If anything in this code can be improved to reduce the execution time, please let me know. Other approaches are also welcome.
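One likely reason the threaded version shows no speedup is that the analysis is CPU-bound Python, where threads contend on the GIL. A process-based sketch with `multiprocessing.Pool` avoids that by giving each video its own interpreter. The fer calls are shown as comments (they mirror the snippet above and require the library at runtime); `analyze_one` is a hypothetical name, and the placeholder return lets the sketch run without fer installed.

```python
from multiprocessing import Pool

def analyze_one(videofile):
    """One worker process per video. Each process must build its own
    detector, since model objects generally cannot be shared across
    processes. Uncomment the fer lines to use this for real:"""
    # from fer import FER, Video
    # detector = FER(mtcnn=True)
    # return Video(videofile).analyze(detector, display=False)
    return f"analyzed {videofile}"  # placeholder result for the sketch

if __name__ == "__main__":
    videos = ["a.mp4", "b.mp4", "c.mp4", "d.mp4"]
    with Pool(processes=len(videos)) as pool:
        results = pool.map(analyze_one, videos)
    print(results)
```

Note that on a single GPU the processes may still serialize on the device, so the wall-clock gain depends on whether inference runs on CPU or GPU and on how many cores are available (Colab typically exposes only 2).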