Hi everyone, is there any way to train this model on lip movement alone, without conditioning on emotions? I'm trying to use my own dataset, but it doesn't have emotion labels. I was wondering whether there is a way to transfer the lip movement from a video to an image without having to label the emotion for every video, since the videos are taken from YouTube and are in different languages.