-
Could existing code / models be extended to work in a similar way?
https://chat.openai.com/share/5fa19af2-007d-43d7-a47c-367d8f7b33b7
The "EMO: Emote Portrait Alive" paper presents a novel fra…
-
Professor Pan, may I ask how to understand this sentence in the paper: "The initial weights of the STA-DRN are pretrained parameters on UCF101 for the temporal prior of videos, and on CK+ for the spatial prior of facial images."? Also, would it be possible to provide the pretrained weight files mentioned here for refer…
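My current understanding is roughly the sketch below, where the module structure, constructor, and checkpoint file names are placeholders of my own rather than anything from the released code; please correct me if this is not what the sentence means.

```python
import torch
import torch.nn as nn

# Placeholder stand-in for the STA-DRN: a spatial branch and a temporal
# branch, each of which gets its own pretrained initialization.
class STADRNSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.spatial_branch = nn.Sequential(nn.Conv2d(3, 64, 3), nn.ReLU())
        self.temporal_branch = nn.Sequential(nn.Conv3d(3, 64, 3), nn.ReLU())

model = STADRNSketch()

# Checkpoint file names are placeholders; my reading is that the temporal
# branch starts from UCF101-pretrained weights and the spatial branch from
# CK+-pretrained weights before the full model is fine-tuned.
ucf101_state = torch.load("ucf101_pretrained.pth", map_location="cpu")
ckplus_state = torch.load("ckplus_pretrained.pth", map_location="cpu")

# strict=False so only layers with matching names/shapes are overwritten
model.temporal_branch.load_state_dict(ucf101_state, strict=False)
model.spatial_branch.load_state_dict(ckplus_state, strict=False)
```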
-
Thank you for your great work. After reading the paper, I still have some questions, which I hope you or anyone else can answer if possible.
In the paper, the Residual Deformation Branch learns to …
-
I tested several, but it looks like the `pose` in the **animation region** doesn't work (I tried the `pose-friendly` option too).
-
Hello,
Thanks for developing this amazing toolbox. I have started using it with the aim of detecting facial motion units in a single person's face in video recordings. It looks like the processing t…
-
Thanks for your nice work! I ran into a problem while training on the TED dataset (two 32 GB GPUs).
```python
  File "Thin-Plate-Spline-Motion-Model/train.py", line 55, in train
    for x in dataloade…
```
-
Hi NumesSanguis
I hope it is OK to ask questions here? If not, I can go to Blender Artists or wherever is better.
Have you tried using FACSvatar for real-time lip-sync facial motion capture?
…
-
Hi,
First of all, good job on your work.
I want to integrate facial recognition with a PIR sensor to reduce CPU load.
So I want to start facial recognition only when a USER_PRESENCE …
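To make the idea concrete, the flow I have in mind is roughly the sketch below; the gpiozero library, the GPIO pin number, and the `run_face_recognition()` call are just placeholders for however the project actually starts a recognition pass.

```python
from gpiozero import MotionSensor  # assuming a Raspberry Pi PIR setup

# Placeholder for whatever actually starts one face-recognition pass
def run_face_recognition():
    print("running face recognition ...")

pir = MotionSensor(4)  # PIR data pin on GPIO 4 (placeholder pin)

while True:
    pir.wait_for_motion()      # idle (no recognition CPU cost) until presence
    run_face_recognition()     # recognize only while someone is actually there
    pir.wait_for_no_motion()   # wait for the event to clear before re-arming
```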
-
I wonder if it's possible to integrate Ring-MQTT with double-take to process either the live rtsp:// stream when motion is detected, or the video file stored in AWS following a motion event (about a 1-2 min …
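What I picture is something along the lines of the sketch below; the broker address, the ring-mqtt motion topic layout, and the payload value are guesses on my part, and the final hand-off to double-take is left as a comment because I don't know its ingest API.

```python
import cv2
import paho.mqtt.client as mqtt

MQTT_HOST = "localhost"                        # broker shared with ring-mqtt (assumption)
MOTION_TOPIC = "ring/+/camera/+/motion/state"  # guessed ring-mqtt motion topic layout
RTSP_URL = "rtsp://..."                        # live stream exposed by ring-mqtt (placeholder)

def on_motion(client, userdata, msg):
    if msg.payload.decode() != "ON":           # guessed payload for "motion started"
        return
    # grab a single frame from the live stream while the event is active
    cap = cv2.VideoCapture(RTSP_URL)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return
    cv2.imwrite("/tmp/ring_motion.jpg", frame)
    # ...hand this frame (or the recorded clip from AWS) to double-take here

client = mqtt.Client()
client.on_message = on_motion
client.connect(MQTT_HOST, 1883)
client.subscribe(MOTION_TOPIC)
client.loop_forever()
```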
-
Hi, thanks for your great work again.
I am utilizing the SMPL-X-H32 model to process an online video featuring extensive hand movements. When attempting to transform the model's hand rotation output …
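For reference, the conversion I am attempting looks roughly like the sketch below, assuming the per-hand output is 15 axis-angle joint rotations; the array here is just a placeholder, and if SMPL-X-H32 actually emits a different rotation format, that assumption is exactly where I suspect my problem is.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Placeholder: the 15 per-joint rotations of one hand, assumed to be
# axis-angle (rotation vectors), shape (15, 3).
right_hand_pose = np.zeros((15, 3))

# axis-angle -> rotation matrices, one 3x3 per finger joint
rot_mats = R.from_rotvec(right_hand_pose).as_matrix()                   # (15, 3, 3)

# ...or Euler angles in degrees if the target rig expects them
eulers = R.from_rotvec(right_hand_pose).as_euler("xyz", degrees=True)   # (15, 3)
```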