loretoparisi opened this issue 1 month ago
You can set the seed, but there's no way to set the style because the training process hasn't been enhanced in this aspect. We will try to improve this in the future. For seed settings, please refer to cli_demo.
Thank you. In cli_demo I only see the generator used to seed the RNG, but not a seed image:
video = pipe(
    prompt=prompt,
    num_videos_per_prompt=num_videos_per_prompt,  # Number of videos to generate per prompt
    num_inference_steps=num_inference_steps,  # Number of inference steps
    num_frames=49,  # Number of frames to generate; changed to 49 for diffusers version `0.31.0` and later
    use_dynamic_cfg=True,  # This is used for the DPM scheduler; for the DDIM scheduler, it should be False
    guidance_scale=guidance_scale,  # Guidance scale for classifier-free guidance; can be set to 7 for the DPM scheduler
    generator=torch.Generator().manual_seed(42),  # Set the seed for reproducibility
).frames[0]
Oh, the image is indeed generated directly from noise. We didn't write any code to control this part, so the impact should be very, very small.
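For context, a minimal sketch (not from the thread) of what the `generator` argument actually controls in Diffusers-style pipelines: it seeds the initial latent noise the denoiser starts from, which is why it gives reproducibility but not style or content control. The latent shape below is illustrative, not the real CogVideoX shape.

```python
import torch

# Hypothetical latent shape (batch, frames, channels, height, width) -- illustrative only
latent_shape = (1, 13, 16, 60, 90)

# The generator passed to the pipeline seeds this initial noise draw;
# the same seed therefore yields the same starting latents (and video),
# but it does not steer content or style in any controllable way.
generator = torch.Generator().manual_seed(42)
initial_latents = torch.randn(latent_shape, generator=generator)
```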
Ah, correct. So in any case you mean you would need to add an `image` parameter, as in the Image2Image Diffusers pipeline here: https://huggingface.co/docs/diffusers/v0.30.2/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForImage2Image
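For comparison, this is roughly how the `image` parameter works in the linked AutoPipelineForImage2Image docs. A sketch only: the checkpoint ID, reference-image URL, and strength value are placeholders, and nothing like this exists in the CogVideoX pipeline today.

```python
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

# Example image-to-image-capable checkpoint; any such model would do here
pipe = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("https://example.com/reference.png")  # placeholder URL

# `image` provides the starting point and `strength` controls how far the
# result may drift from it -- the kind of seed-image control this feature
# request asks for in the video pipeline.
result = pipe(
    prompt="a stylized rendering of the scene",
    image=init_image,
    strength=0.6,
    generator=torch.Generator("cuda").manual_seed(42),
).images[0]
```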
Feature request / 功能建议
Is there any plan to add support for a style reference, a seed image, and control parameters?
Motivation / 动机
This feature has been speculated about in this article:
Other T2V and T2I models already have extensive support for style, image, and control parameters.
Your contribution / 您的贡献
-