-
Trying to run the new video model, but the watermark filter is not being detected.
Traceback (most recent call last):
File "C:\AI\Video\StableVideo\generative_models_main\scripts\sampling\simple_video_sampl…
-
According to the instructions provided here:
Note 2: To test with custom audio, you need to replace video_name/video_name.wav and the deepspeech feature video_name/deepfeature32/video_name.npy…
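A small sketch of a pre-flight check for the file layout Note 2 describes. The helper name and the root-directory argument are my own, not from the repo:

```python
import os

def check_custom_audio_files(root, video_name):
    """Return the list of expected custom-audio files that are missing.

    Expects the layout from Note 2:
      root/video_name/video_name.wav
      root/video_name/deepfeature32/video_name.npy
    """
    expected = [
        os.path.join(root, video_name, f"{video_name}.wav"),
        os.path.join(root, video_name, "deepfeature32", f"{video_name}.npy"),
    ]
    return [p for p in expected if not os.path.isfile(p)]

# Example: report anything still missing before running the demo.
for path in check_custom_audio_files("data", "my_clip"):
    print("missing:", path)
```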
-
**Hello! I modified demo_video.py like this:**
import argparse
import os
import random
import numpy as np
import torch
import torch.backends.cudnn as cudnn
import gradio as gr
import lavis.tasks …
-
I tested the batch inference results of the LLaVA and LLaVA-NeXT-Video models using TensorRT-LLM, based on the examples/multimodal/run.py file. The parameters for their generate method are the same, as…
-
The multi-frame model generated a fully black video, while the single-frame model works fine.
I am generating on a P40. This may be related, as these cards cannot work with fp16 precision.
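One plausible mechanism for the black frames: fp16 cannot represent values above 65504, so any intermediate activation beyond that becomes inf/NaN, and after normalization the decoded frame collapses to black. A quick numpy illustration of the overflow (mine, not from the report; keeping such tensors in fp32 avoids it):

```python
import numpy as np

# fp16 overflows above 65504: the value becomes inf once cast down.
x = np.float32(70000.0)
half = np.float16(x)

print(half)                 # inf
print(np.isfinite(half))    # False: downstream math turns into inf/NaN

# The same value is perfectly representable in fp32.
print(np.isfinite(x))       # True
```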
-
**Is your feature request related to a problem? Please describe.**
Contrast enhancement was applied to the derotated video to improve registration performance. It worked when the optimal thresholds f…
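A minimal numpy sketch of a percentile-based contrast stretch of the kind described; the 2%/98% thresholds here are illustrative defaults, not the issue's "optimal thresholds":

```python
import numpy as np

def stretch_contrast(frame, low_pct=2.0, high_pct=98.0):
    """Linearly stretch intensities between two percentile thresholds to
    the full 0-255 range, clipping values outside them."""
    lo, hi = np.percentile(frame, [low_pct, high_pct])
    if hi <= lo:  # flat frame: nothing to stretch
        return frame.astype(np.uint8)
    out = (frame.astype(np.float32) - lo) / (hi - lo)
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)

# Example: a low-contrast frame confined to [100, 150] fills 0..255 after the stretch.
rng = np.random.default_rng(0)
frame = rng.integers(100, 151, size=(64, 64), dtype=np.uint8)
enhanced = stretch_contrast(frame)
print(enhanced.min(), enhanced.max())
```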
-
### Model/Pipeline/Scheduler description
Text-to-video diffusion models enable the generation of high-quality videos given text prompts, making it easy to create diverse and individual content. How…
-
To run SV3D_u on a single image, I followed the instructions to download sv3d_u.safetensors and ran python scripts/sampling/simple_video_sample.py, but got the errors below and realized xFormers is n…
-
sampling_params = SamplingParams(
    temperature=0.1,
    top_p=0.001,
    repetition_penalty=1.05,
    max_tokens=2048,
    stop_token_ids=[],
    stream=True,  # here…
-
```
Traceback (most recent call last):
File "D:\stable-diffusion-webui-reForge\modules_forge\main_thread.py", line 37, in loop
task.work()
File "D:\stable-diffusion-webui-reForge\modules_f…