-
Expression Transfer:
"GANimation: Anatomically-aware Facial Animation from a Single Image" (Pumarola et al., 2018)
"MeshTalk: 3D Face Animation from Speech using Cross-Modal Disentanglement" (Rich…
-
### Feature request
Are there any plans to add support for style references, seed images, and control parameters?
### Motivation
This feature has been speculated about in [this](https://naomiclarkson0.medi…
-
Hello!
I'm looking into adding diffusion-based image generation to a video game for fairly simple things like decals. The game will be running under some pretty heavy resource constraints. If I …
-
I have trained the kplanes-dynamic model on the proposed D-NeRF synthetic dynamic dataset. However, since there are no clear instructions on how to render the video for it, I am struggling to render…
-
How long can the generated videos be? What would be the best model combinations?
-
Thanks for the great work as always!
For the original UniAnimate, I used `python inference.py --cfg configs/UniAnimate_infer_long.yaml` to generate long videos.
How can I use that feature in …
-
I'm wondering whether this can be extended to image-to-video generation with an image condition and controllable camera motion?
-
Hello,
First of all, thank you for releasing this amazing tool! I was wondering whether there is any chance of integrating open-source LMM models, for example https://huggingface.co/Qwen/Qwen2-VL-7B-…
-
import os
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
model = Qwen2VLForConditionalGeneration.from_pretrai…
-
I deployed strictly following the instructions at https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct-GPTQ-Int4, but I always get this error:
Traceback (most recent call last):
  File "//abc.py", line 1, in
    from transformers import Qwen2VLForConditionalGen…