-
The title is pretty specific, but basically I am hoping we can get a feature that merges the audio from the source into the final video. ComfyUI has this in their vid2vid workflows, but honestly your exte…
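Until something like that lands, a remux with ffmpeg does the job as a workaround. This is a minimal sketch, assuming ffmpeg is on PATH; the file names are placeholders, not anything from the project:

```python
import subprocess

def mux_source_audio(source: str, generated: str, output: str) -> None:
    """Remux: take the video stream from the generated clip and the
    audio stream from the original source, with no re-encoding."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", generated,   # input 0: vid2vid result (video only)
            "-i", source,      # input 1: original clip (has the audio)
            "-map", "0:v:0",   # take the video stream from input 0
            "-map", "1:a:0",   # take the audio stream from input 1
            "-c", "copy",      # stream copy, no re-encoding
            "-shortest",       # stop at the shorter of the two streams
            output,
        ],
        check=True,
    )

mux_source_audio("source.mp4", "generated.mp4", "generated_with_audio.mp4")
```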
-
Hi,
first, thank you for your impressive project and for sharing it with us.
I have a question about the feature embedding.
Is the feature embedding scheme the same as the feature encoding in Pix2P…
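For anyone comparing the two: Pix2PixHD's feature encoding runs an encoder over the real image and then instance-wise average-pools the feature map, so every pixel of an instance shares one feature vector. A minimal sketch of just that pooling step, assuming a feature map of shape (C, H, W) and an integer instance map of shape (H, W):

```python
import torch

def instance_average_pool(feat: torch.Tensor, inst: torch.Tensor) -> torch.Tensor:
    """Replace each instance's features with their per-instance mean,
    as in Pix2PixHD's instance-wise feature encoding."""
    pooled = torch.zeros_like(feat)
    for inst_id in inst.unique():
        mask = inst == inst_id                 # (H, W) boolean mask
        mean = feat[:, mask].mean(dim=1)       # (C,) per-channel mean
        pooled[:, mask] = mean.unsqueeze(1)    # broadcast back over pixels
    return pooled

feat = torch.randn(3, 4, 4)                    # toy feature map
inst = torch.tensor([[0, 0, 1, 1]] * 4)        # toy instance map
print(instance_average_pool(feat, inst).shape) # torch.Size([3, 4, 4])
```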
-
Below might improve the result:
```diff
- write_video(output, video_result[2:], fps=fps)
+ write_video(output, video_result[4:], fps=fps)
```
at https://github.com/cumulo-autumn/Stream…
-
We made a tutorial on training the few-shot vid2vid network and StyleGAN; we hope you like it!
You can use StyleGAN and its latent codes to generate few-shot-vid2vid input data with spatial continuity, which…
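To make the spatial-continuity idea concrete, here is a minimal sketch of producing a smooth frame sequence by interpolating between two latent codes. The generator interface is a stand-in assumption for illustration, not the tutorial's actual code:

```python
import torch

def interpolate_frames(generator, z_start: torch.Tensor, z_end: torch.Tensor,
                       n_frames: int = 16) -> torch.Tensor:
    """Walk linearly through latent space so consecutive generated
    frames change smoothly, giving spatially continuous training data."""
    frames = []
    for t in torch.linspace(0.0, 1.0, n_frames):
        z = (1 - t) * z_start + t * z_end   # linear latent interpolation
        frames.append(generator(z))
    return torch.stack(frames)

# Toy stand-in generator: any StyleGAN mapping latent -> image fits here.
generator = lambda z: z.view(1, 8, 8).repeat(3, 1, 1)
video = interpolate_frames(generator, torch.randn(64), torch.randn(64))
print(video.shape)  # torch.Size([16, 3, 8, 8])
```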
-
I found this, but when I tried to use it I couldn't get it to work.
[AnimateLCM-I2V](https://huggingface.co/wangfuyun/AnimateLCM-I2V) support, big thanks to [Fu-Yun Wang](https://github.com/G-U-N) for providing me…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do?
Hi, it could be awesome if vid2vid had "i…
-
I ran this command on Colab: `!bash ./scripts/face/train_512.sh`
It's showing me the following error. Please help me resolve it:
Traceback (most recent call last):
  File "train.py", line 148…
-
I'm running the pose script (`./scripts/pose/train_256.sh`). It seems to be crashing because `n_gpus` is calculated incorrectly.
The particular line at fault is:
./models/vid2vid_model_G.py: …
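I haven't verified the exact expression in the repo, but a common failure mode for this kind of line is integer division rounding down to zero when there are fewer GPUs than the batch size. A hypothetical illustration (variable names are mine, not the repo's):

```python
# Hypothetical repro: 1 GPU, batch size 2 -> integer division gives 0,
# and any later division or modulo by n_gpus then crashes.
gpu_ids = [0]
batch_size = 2

n_gpus = len(gpu_ids) // batch_size
print(n_gpus)  # 0

# Guarded version: never let the computed GPU count drop below 1.
n_gpus = max(1, len(gpu_ids) // batch_size)
print(n_gpus)  # 1
```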
-
Where can I get the pretrained model for this vid2vid model? Or is there some other way, such as using DensePose or OpenPose, to get a pretrained model that could be used here for vid2vid? I am noo…