JustinShenk / fer

Facial Expression Recognition with a deep neural network as a PyPI package

Feature: Analyse only the part of an image/video #24

Closed Owlwasrowk closed 3 years ago

Owlwasrowk commented 3 years ago

I'm currently trying to analyse Let's Play videos. It would be very useful if I could provide an additional parameter for the video analysis, something like a detection box, to ensure that I always analyse only the overlay with the streamer's face. I see two possible workarounds here:

  1. Perform the video analysis on the full images and only return the emotions whose face box lies inside a given box (see the sketch after this list).
  2. Perform the video analysis only on the given box to reduce unnecessary computation.
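As a rough illustration of option 1, here is a minimal sketch that filters detections after the fact; the helper name filter_detections_to_box and the box format are my own assumptions, not part of the fer API:

def filter_detections_to_box(detections, detection_box):
    """Keep only detections whose face box lies fully inside detection_box."""
    kept = []
    for det in detections:
        # fer returns face boxes as (x, y, width, height).
        x, y, w, h = det["box"]
        inside = (
            x >= detection_box["x_min"]
            and y >= detection_box["y_min"]
            and x + w <= detection_box["x_max"]
            and y + h <= detection_box["y_max"]
        )
        if inside:
            kept.append(det)
    return kept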

My current workaround for option 2 is the following snippet:

DETECTION_BOX = {"x_min": 0, "x_max": 150, "y_min": 100, "y_max": 275}

def analyse_emotions(self, detection_box, frequency=None, detector=None):
    # ...
    for fno in range(0, total_frames, frequency):
        # Jump to the requested frame and analyse only the detection box.
        self.cap.set(cv2.CAP_PROP_POS_FRAMES, fno)
        _, img = self.cap.read()
        detections = self._get_emotions_image(image=img, detection_box=detection_box)
    # ...

def _get_emotions_image(self, image, detection_box):
    # Restrict detection to the configured region (e.g. the streamer overlay).
    crop_img = image[
        detection_box.get("y_min"):detection_box.get("y_max"),
        detection_box.get("x_min"):detection_box.get("x_max"),
    ]
    emotions = self.detector.detect_emotions(crop_img)
    # Shift the returned face boxes back into full-image coordinates.
    for emotion in emotions:
        original_box = emotion.get("box")
        emotion["box"] = (
            original_box[0] + detection_box.get("x_min"),
            original_box[1] + detection_box.get("y_min"),
            original_box[2],
            original_box[3],
        )
    return emotions
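For completeness, here is a minimal, self-contained usage sketch of the same idea outside the class; the file name lets_play.mp4 and the direct use of cv2 and FER here are assumptions for illustration:

import cv2
from fer import FER

DETECTION_BOX = {"x_min": 0, "x_max": 150, "y_min": 100, "y_max": 275}

detector = FER(mtcnn=True)
cap = cv2.VideoCapture("lets_play.mp4")  # hypothetical input file

ok, frame = cap.read()
if ok:
    # Crop to the overlay region that contains the streamer's webcam.
    crop = frame[
        DETECTION_BOX["y_min"]:DETECTION_BOX["y_max"],
        DETECTION_BOX["x_min"]:DETECTION_BOX["x_max"],
    ]
    results = detector.detect_emotions(crop)
    # Shift the face boxes back into full-frame coordinates.
    for r in results:
        x, y, w, h = r["box"]
        r["box"] = (x + DETECTION_BOX["x_min"], y + DETECTION_BOX["y_min"], w, h)
    print(results)
cap.release()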
JustinShenk commented 3 years ago

Thanks for your suggestion. This is a good idea. Could you please send a PR with your proposed changes?