Closed woiza closed 4 years ago
Sorry, this toolkit doesn't support multi-GPU training. PyTorch's parallel training utilities (DataParallel and DistributedDataParallel) need to split the input tensors along the batch dimension, but in this toolkit the input is a dict, so the standard DataParallel has to be modified or wrapped to handle it. This will be added in the future.
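Roughly, such a wrapper would have to scatter every tensor in the input dict along dim 0 before handing one shard to each replica. A minimal stdlib sketch of just that splitting step (plain lists stand in for tensors; with real PyTorch you would call `tensor.chunk(num_devices, dim=0)` instead of slicing; `scatter_dict` is a hypothetical helper, not part of this toolkit):

```python
import math

def scatter_dict(batch, num_devices):
    """Split each value in `batch` into contiguous chunks along the
    batch (first) dimension, one shard per device.

    Lists stand in for tensors here; the chunk sizes mimic
    torch.chunk (ceil-division, last chunk may be smaller).
    """
    batch_size = len(next(iter(batch.values())))
    chunk = math.ceil(batch_size / num_devices)
    return [
        {key: value[start:start + chunk] for key, value in batch.items()}
        for start in range(0, batch_size, chunk)
    ]

# Example: a dict-style batch split across 2 GPUs
batch = {"token_ids": [1, 2, 3, 4], "labels": [0, 1, 0, 1]}
shards = scatter_dict(batch, 2)
# shards[0] -> {"token_ids": [1, 2], "labels": [0, 1]}
# shards[1] -> {"token_ids": [3, 4], "labels": [0, 1]}
```

Each shard would then be moved to its device and run through a model replica, which is what a DataParallel subclass overriding `scatter` would do for dict inputs.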
Hi, I have 2 GPUs with 8 GB of memory each. Training your "TextVDCNN" model fails (out of memory), and only one GPU is used. Is it possible to use your toolkit with 2 GPUs (data parallelism)?