Open · ptits opened 1 month ago
We've labeled the issue so that users who need this can find and check your code. However, we may not merge it into the main branch, since torch.backends.cudnn.benchmark = False
will slow down the generation process. That said, thanks for the contribution!
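For reference, the trade-off looks roughly like this (a minimal sketch; the helper name is made up and this is not code from the repo):

import torch

def seed_with_full_determinism(seed: int):
    # Seeding the RNGs is essentially free; the two cuDNN flags below are
    # what cost performance, because they disable autotuning and
    # non-deterministic kernels.
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.benchmark = False      # no conv algorithm autotuning
    torch.backends.cudnn.deterministic = True   # only deterministic kernels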
I have trimmed the code down to:
torch.manual_seed(seed)
# If you are using GPUs, set the seed for all GPUs
if torch.cuda.is_available():
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # for multi-GPU setups
With the seed fixed, everything still works fine in my tests.
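By "works fine" I mean that two runs with the same seed give the same output. A minimal sketch of that check (generate_fn is just a placeholder for the actual generation call, not something from this repo):

import torch

def outputs_match(generate_fn, seed: int) -> bool:
    # Run the same generation twice with the same seed and compare results.
    results = []
    for _ in range(2):
        torch.manual_seed(seed)
        if torch.cuda.is_available():
            torch.cuda.manual_seed(seed)
            torch.cuda.manual_seed_all(seed)
        results.append(generate_fn())
    return torch.equal(results[0], results[1])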
Please add a seed parameter.
For tests it is very important.
Add seed to the gradio demo and all other generation calls.
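Roughly what I have in mind, as a minimal sketch (the input names and the generate function are illustrative, not the demo's real API):

import gradio as gr
import torch

def generate(prompt, seed):
    # Seed the RNGs before generation so the output is reproducible.
    seed = int(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
    # ... call the actual text-to-video pipeline here ...
    return f"placeholder output for {prompt!r} with seed {seed}"

demo = gr.Interface(
    fn=generate,
    inputs=[gr.Textbox(label="Prompt"), gr.Number(value=42, label="Seed", precision=0)],
    outputs=gr.Textbox(label="Result"),
)

if __name__ == "__main__":
    demo.launch()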
I did it myself and just added this code to my gradio demo:
use it if needed; I am not familiar with pull requests.
def generate_text_to_video(prompt, temp, guidance_scale, video_guidance_scale, text_seed, steps, video_steps):
    set_seed(int(text_seed))
    with torch.no_grad(), torch.cuda.amp.autocast(enabled=True if model_dtype != 'fp32' else False, dtype=torch_dtype):
        ......
        ......

######################################################
def set_seed(seed: int):
    # Set seed for Python's built-in random module