import torch
import torch.nn as nn

preprocess_model = CropAndExtract(sadtalker_paths, args.device)
audio_to_coeff = Audio2Coeff(sadtalker_paths, args.device)
animate_from_coeff = AnimateFromCoeff(sadtalker_paths, args.device)

if torch.cuda.device_count() > 1:
    print(f"Using {torch.cuda.device_count()} GPUs for parallel processing.")
    preprocess_model = nn.DataParallel(preprocess_model, device_ids=[1, 2, 3])
    audio_to_coeff = nn.DataParallel(audio_to_coeff, device_ids=[1, 2, 3])
    animate_from_coeff = nn.DataParallel(animate_from_coeff, device_ids=[1, 2, 3])
else:
    print("Using a single GPU.")
    preprocess_model.to(args.device)
    audio_to_coeff.to(args.device)
    animate_from_coeff.to(args.device)
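For reference, here is how `nn.DataParallel` is normally used: it wraps an `nn.Module`, the module is moved to the first device in `device_ids` (which usually includes GPU 0), and the input batch is split across the listed GPUs in `forward()`. This is a minimal sketch with a plain `nn.Linear`, not SadTalker-specific, and it falls back to a single device when fewer than two GPUs are available:

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 4)  # a real nn.Module; DataParallel only works on these

if torch.cuda.device_count() > 1:
    # device_ids should normally include GPU 0; outputs are gathered on device_ids[0]
    device_ids = list(range(torch.cuda.device_count()))
    model = nn.DataParallel(model, device_ids=device_ids)
    model.to(f"cuda:{device_ids[0]}")
else:
    model.to("cuda:0" if torch.cuda.is_available() else "cpu")

# The batch dimension (8 here) is what gets split across GPUs
x = torch.randn(8, 16).to(next(model.parameters()).device)
out = model(x)
print(out.shape)  # torch.Size([8, 4])
```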
I can't run your model on multiple GPUs. The code above shows what I changed, but the problem is that some of the model classes do their processing on a specific CUDA device and load many things onto it, so the pipeline can't use multiple GPUs to generate faster. Can anybody help me?