-
Hi,
I can't run this program.
I have a GPU server with Ubuntu 20 and CUDA 11.6.
I don't have permission to increase or decrease the CUDA version.
This is my approach:
```
apt-get update && \
a…
```
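Since the system CUDA cannot be changed, one workaround (just a sketch of what I would try, not something confirmed in this thread) is to install a PyTorch wheel built for CUDA 11.6 into a user-level environment and verify the match from Python; the exact wheel version below is an assumption:
```
import torch

# Assumption: the server driver/toolkit is CUDA 11.6 and cannot be changed,
# so a cu116 wheel is installed in a user environment instead of touching
# the system CUDA, e.g.:
#   pip install --user torch==1.13.1+cu116 \
#       --extra-index-url https://download.pytorch.org/whl/cu116
print(torch.__version__)          # should end with "+cu116"
print(torch.version.cuda)         # should print "11.6"
print(torch.cuda.is_available())  # True if the driver is new enough
```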
-
```
Traceback (most recent call last):
  File "train.py", line 196, in <module>
    train()
  File "train.py", line 38, in train
    models = create_model(opt,False)
  File "/home/mfw/Desktop/zbw/fast_vid2vi…
```
-
I input a 1280x720 video and the output was 720x1280. I fixed this with some adjustments to the code, but the request is to output the same resolution as the input.
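For reference, a minimal sketch of the kind of adjustment I mean (my own helper, not code from this repo): read the input resolution once and resize every generated frame back to it, so a 1280x720 input stays 1280x720.
```
import cv2

cap = cv2.VideoCapture("input.mp4")  # placeholder path, not from the repo
in_w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
in_h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
cap.release()

def match_input_resolution(frame):
    """Resize a generated frame (H, W, C) back to the original input size."""
    return cv2.resize(frame, (in_w, in_h), interpolation=cv2.INTER_LANCZOS4)
```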
-
Hi, I am trying to understand whether it is possible to determine something like the context seed or context position, in order to make sure that consecutive batches are using the same context in a …
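One rough sketch of what I mean (an assumption on my side about how the context is indexed, not the project's actual API): derive a deterministic seed from the context index, so any two batches that cover the same context position draw identical noise.
```
import torch

def noise_for_context(context_index: int, shape, base_seed: int = 1234,
                      device: str = "cpu"):
    # Same context index -> same seed -> same noise, independent of batch order.
    gen = torch.Generator(device=device)
    gen.manual_seed(base_seed + context_index)
    return torch.randn(shape, generator=gen, device=device)

# Two consecutive batches asking for context 3 get identical tensors.
a = noise_for_context(3, (1, 4, 16, 64, 64))
b = noise_for_context(3, (1, 4, 16, 64, 64))
assert torch.equal(a, b)
```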
-
cogfun-pose is so powerful.
If we could just pass a start_img like in i2v, it would be a game changer,
because normal i2v is so random that I had to generate about 20 results to get only 1 usable one.
But today I tried cogfun-pose …
-
I get a few messages:
_**I get this (probably unrelated)**_
```
D:\DProgram Files\Python\Python310\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:65: UserWarning: Specified pro…
```
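That warning usually appears when a provider is requested that this onnxruntime build does not ship (for example CUDAExecutionProvider on a CPU-only install). A small check along these lines should confirm it; `model.onnx` is just a placeholder path:
```
import onnxruntime as ort

print(ort.get_available_providers())  # e.g. ['CPUExecutionProvider'] on a CPU-only build

# Only request providers that are actually available, falling back to CPU.
providers = [p for p in ("CUDAExecutionProvider", "CPUExecutionProvider")
             if p in ort.get_available_providers()]
session = ort.InferenceSession("model.onnx", providers=providers)  # placeholder path
```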
-
[Official project said "Local gradio for img2img is on the way!"](https://github.com/luosiallen/latent-consistency-model#-image2image-demos-image-to-image)
I see there is a [LCM img2img for Stable …
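In the meantime, a minimal local sketch seems possible with the diffusers img2img pipeline for LCM (assuming diffusers >= 0.23 and the LCM Dreamshaper checkpoint from that project; file paths are placeholders, and this is not the official demo):
```
import torch
from diffusers import LatentConsistencyModelImg2ImgPipeline
from diffusers.utils import load_image

pipe = LatentConsistencyModelImg2ImgPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("input.png")  # placeholder input image
result = pipe(
    prompt="a photo of an astronaut riding a horse",
    image=init_image,
    num_inference_steps=4,   # LCM only needs a handful of steps
    strength=0.5,
).images[0]
result.save("lcm_img2img.png")
```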
-
Thank you for the work. Can you share your result video demo? I want to compare it to vid2vid, because I use vid2vid for pose transfer.
-
https://github.com/KwaiVGI/LivePortrait has trained a better face vid2vid model from the ground up. However, the official repo provides only video-driven generation. That's where SadTalker comes into play.…