Tandon-A / emotic

PyTorch implementation of Emotic CNN methodology to recognize emotions in images using context information.
MIT License

loading model #20

Closed syedameersohail closed 2 years ago

syedameersohail commented 2 years ago

Hi Abhishek!

I have downloaded the pretrained model, but there are three model files in this case. When I try to load them I get the error below. Could you help me with this, and how do I predict once I have the model loaded?

[screenshot of the error]

Tandon-A commented 2 years ago

@syedameersohail

Hello Sohail,

You need to import the Emotic class. Since you are using Colab, you can add a cell before loading the model weights, paste the Emotic class code there, and execute it. This ensures the Emotic class definition is in memory so the weights can be loaded correctly.
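The reason the class must be defined first: `torch.load` unpickles the checkpoint, and unpickling looks up the class by name in the current namespace. A minimal stdlib demonstration of the same mechanism using `pickle` directly (the `Emotic` stand-in below is a simplified hypothetical, not the real model class):

```python
import pickle

# Hypothetical stand-in for the Emotic model class. torch.load relies on
# pickle, so the class definition must exist (e.g. pasted into a Colab cell
# and executed) before loading, or unpickling raises an AttributeError.
class Emotic:
    def __init__(self, num_context_features, num_body_features):
        self.num_context_features = num_context_features
        self.num_body_features = num_body_features

# Simulate saving a model object (what torch.save does under the hood).
blob = pickle.dumps(Emotic(2048, 2048))

# Loading succeeds only because the Emotic class is defined above.
model = pickle.loads(blob)
print(model.num_context_features)  # 2048
```

If the class were not defined in the session, the `pickle.loads` call (and likewise `torch.load` on the checkpoint) would fail with an attribute lookup error.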

To predict on test images, you would need to have the bounding box coordinates of the target person. You can use a person detection model (a pretrained Yolo could work) or self-annotate the person bounding box coordinates if you don't have them already.

Once you have the bounding box coordinates, you need to prepare a body image (the crop of the target person) and a context image (the original image), and feed these to the body and context models respectively. The resulting body and context features are then passed to the emotic model to get the categorical and continuous emotion predictions.
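The body/context preparation step can be sketched as follows. This is a minimal illustration using a nested list in place of an image tensor; in the real pipeline both inputs would additionally be resized and normalized for their respective networks, and the function name here is hypothetical:

```python
# Minimal sketch: derive the two model inputs from one image and one
# person bounding box. "image" is an H x W nested list for illustration.

def prepare_inputs(image, bbox):
    """Return (body_crop, context) for bbox = (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = bbox
    body = [row[x1:x2] for row in image[y1:y2]]  # crop of the target person
    context = image                               # the full original image
    return body, context

# A 4 x 6 dummy "image" whose pixels record their own (row, col) position.
image = [[(r, c) for c in range(6)] for r in range(4)]
body, context = prepare_inputs(image, (1, 1, 4, 3))
print(len(body), len(body[0]))        # 2 3  (the person crop)
print(len(context), len(context[0]))  # 4 6  (the untouched context image)
```

The body crop goes to the body feature extractor and the full image to the context feature extractor; the emotic head then fuses the two feature vectors.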

You can take a look at the inference mode provided in the code (see the inference code link); it performs all the necessary steps. It needs the input image path and the person box coordinates, which are supplied through the inference file. You can check the sample inference file.
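For illustration, a parser for one line of such an inference file, assuming a space-separated `<image_path> <x1> <y1> <x2> <y2>` layout; the exact format in the repository's sample file may differ, and the file path used below is made up:

```python
# Hedged sketch: parse one line of a hypothetical inference file of the
# assumed form "<image_path> <x1> <y1> <x2> <y2>" (space separated).

def parse_inference_line(line):
    parts = line.strip().split()
    image_path = parts[0]
    x1, y1, x2, y2 = (int(v) for v in parts[1:5])
    return image_path, (x1, y1, x2, y2)

path, bbox = parse_inference_line("images/sample.jpg 24 36 168 304")
print(path, bbox)  # images/sample.jpg (24, 36, 168, 304)
```

Each parsed `(path, bbox)` pair is what the inference mode needs to crop the body image and run the three models described above.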

If you don't have the bounding box coordinates, you can try using the Yolo model. I have prepared a script for the same.

Regards, Abhishek

Tandon-A commented 2 years ago

@syedameersohail

Please feel free to reopen the issue.