Closed ucsky closed 2 years ago
Hi,
thanks for spotting this issue! We were not aware of the problem with using batch-transfer callbacks in DP mode; we will fix it in a future version of tsl. For now, you can remove the callbacks and move their code into a downstream method, such as the training/validation steps. We used these callbacks only to manipulate the tensors in the minibatch.
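To illustrate the workaround, here is a minimal, framework-free sketch: the per-batch tensor manipulation that lived in a transfer callback is moved into the training step itself, so it runs for every replica regardless of the parallelism strategy. All names here (`mask_batch`, `Imputer`) are hypothetical placeholders, not tsl API:

```python
# Framework-free sketch of the workaround. Hypothetical names,
# standing in for a LightningModule and its batch-transfer hook.

def mask_batch(batch):
    # Hypothetical per-batch transform (e.g., masking values to impute).
    return [x * 2 for x in batch]

class Imputer:
    # Before: mask_batch was applied in an on_after_batch_transfer-style
    # callback, which the DP strategy handles incorrectly.
    # After: the transform is applied at the top of the step method.
    def training_step(self, batch, batch_idx):
        batch = mask_batch(batch)  # moved here from the callback
        return sum(batch)

model = Imputer()
print(model.training_step([1, 2, 3], 0))  # → 12
```

The same move applies to the validation step; the point is simply that code inside the step methods is executed per batch on every device.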
However, I tested it in DDP mode (strategy='ddp') and it works as is. So yes, you can leverage multiple GPUs for imputation!
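For reference, a configuration sketch of what "DDP mode" means here, following the PyTorch Lightning Trainer API (the `model`/`dm` objects are assumed to be set up as in `run_imputation.py`, not defined here):

```python
import pytorch_lightning as pl

# Select DDP instead of DP when multiple GPUs are available.
trainer = pl.Trainer(
    accelerator='gpu',
    devices=2,          # number of GPUs to use
    strategy='ddp',     # one process per GPU; avoids the DP callback issue
)
trainer.fit(model, datamodule=dm)
```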
Hello,
I am trying to run a variant of
run_imputation.py
with multiple GPUs, but I get the following error when using the dp strategy: Do you know if this can be fixed, or whether it is possible to take advantage of multiple GPUs for imputation?