CVI-SZU / ME-GraphAU

[IJCAI 2022] Learning Multi-dimensional Edge Feature-based AU Relation Graph for Facial Action Unit Recognition, PyTorch code
MIT License
157 stars · 39 forks

Using the pre-trained models #11

Open ymohamed08 opened 1 year ago

ymohamed08 commented 1 year ago

Hello, I just wanted to ask if you can provide some instructions on how to use the pre-trained models on new videos to extract the action units. I would like to feed in videos and get the action units as output. Do I need to retrain the model? What format should the input be in? Is there already a script you used to take a video as input and output the action units?

akiratsuraii commented 1 year ago

Bump. I know there is a photo detection demo, but I also want to know how to feed in videos. I look forward to hearing any answer.

ymohamed08 commented 1 year ago

@akiratsuraii would you point me towards the demo, please? I can't seem to find it either.

akiratsuraii commented 1 year ago

The demo section of the ME-GraphAU/OpenGraphAU description has the command to run on photos.

zapan-669 commented 1 year ago

I'm interested in videos too.

Andreas-UI commented 5 months ago

You might want to check out my repo; I have implemented ME-GraphAU on video in my project. No changes were made beyond a minor refactor: it uses their model to predict each frame while reading the video.

https://github.com/Andreas-UI/ME-GraphAU-Video
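The per-frame approach described above can be sketched as follows. This is a minimal illustration, not code from either repo: `predict_frame` is a placeholder for whatever wrapper you write around the ME-GraphAU model (face crop, preprocessing, forward pass), and the frame loop simply uses OpenCV's `VideoCapture`.

```python
# Sketch of per-frame AU prediction on a video. `predict_frame` is a
# hypothetical callable wrapping the ME-GraphAU model; it is NOT part of
# the original repo's API.

def predict_aus_on_frames(frames, predict_frame, every_n=1):
    """Apply `predict_frame` to every n-th frame; return (frame_index, result) pairs."""
    results = []
    for idx, frame in enumerate(frames):
        if idx % every_n == 0:
            results.append((idx, predict_frame(frame)))
    return results


def iter_video_frames(video_path):
    """Yield BGR frames from a video file using OpenCV (requires `opencv-python`)."""
    import cv2  # imported here so the helper above stays dependency-free
    cap = cv2.VideoCapture(video_path)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:  # end of stream
                break
            yield frame
    finally:
        cap.release()
```

Usage would look like `predict_aus_on_frames(iter_video_frames("clip.mp4"), my_au_model, every_n=5)`, where `my_au_model` is your own model wrapper and `every_n` lets you skip frames to trade accuracy for speed.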

zapan-669 commented 5 months ago

Appreciate it, brother. I'm going to test it soon.
