OpenTalker / SadTalker

[CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
https://sadtalker.github.io/

Multiple GPUs #912


mf0212 commented 1 month ago

I can't run your model on multiple GPUs.

I changed the code like this:

import torch
import torch.nn as nn

# Build the three pipeline stages on the default device first.
preprocess_model = CropAndExtract(sadtalker_paths, args.device)
audio_to_coeff = Audio2Coeff(sadtalker_paths, args.device)
animate_from_coeff = AnimateFromCoeff(sadtalker_paths, args.device)

if torch.cuda.device_count() > 1:
    print(f"Using {torch.cuda.device_count()} GPUs for parallel processing.")
    # Try to spread each stage across GPUs 1-3 with DataParallel.
    preprocess_model = nn.DataParallel(preprocess_model, device_ids=[1, 2, 3])
    audio_to_coeff = nn.DataParallel(audio_to_coeff, device_ids=[1, 2, 3])
    animate_from_coeff = nn.DataParallel(animate_from_coeff, device_ids=[1, 2, 3])
else:
    print("Using a single GPU.")

# Move everything to the primary device afterwards.
preprocess_model.to(args.device)
audio_to_coeff.to(args.device)
animate_from_coeff.to(args.device)

But the problem is that some of your model classes run on one specific CUDA device and load many things onto it, so I can't use multiple GPUs to generate faster. Can anybody help me?
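For now I'm thinking about a workaround: instead of wrapping anything in nn.DataParallel, run one full SadTalker pipeline per GPU in a separate process and split the generation jobs between them. This is only a rough sketch under my own assumptions: the run_jobs helper, the job list, and the round-robin split are mine, not part of SadTalker, and the imports are copied from inference.py (adjust them if your version differs).

import torch
import torch.multiprocessing as mp

from src.utils.preprocess import CropAndExtract
from src.test_audio2coeff import Audio2Coeff
from src.facerender.animate import AnimateFromCoeff


def run_jobs(gpu_id, job_chunks, sadtalker_paths):
    # Each worker process builds its own pipeline on its own GPU,
    # so no state is shared across devices and no DataParallel is needed.
    device = f"cuda:{gpu_id}"
    preprocess_model = CropAndExtract(sadtalker_paths, device)
    audio_to_coeff = Audio2Coeff(sadtalker_paths, device)
    animate_from_coeff = AnimateFromCoeff(sadtalker_paths, device)
    for source_image, driven_audio in job_chunks[gpu_id]:
        ...  # run the normal single-GPU inference steps here


if __name__ == "__main__":
    sadtalker_paths = ...   # same init_path(...) result used in inference.py
    all_jobs = [...]        # list of (image, audio) pairs to generate
    num_gpus = torch.cuda.device_count()
    # Round-robin the jobs across the available GPUs, one process per GPU.
    job_chunks = [all_jobs[i::num_gpus] for i in range(num_gpus)]
    mp.spawn(run_jobs, args=(job_chunks, sadtalker_paths), nprocs=num_gpus)

Would something like this work, or is there hard-coded device state inside these classes that would break when each process uses a different GPU?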