Hi, I want to try a bigger batch_size, so I need multi-GPU training, and I used the multi-GPU code in train.py. But I ran into a problem in parallel.py:
core/utils/parallel.py", line 136, in _worker
output = module(*(list(input) + target), **kwargs)
TypeError: can only concatenate list (not "tuple") to list
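For anyone hitting the same error: this is a plain Python issue — `list + tuple` raises `TypeError`, so the concatenation fails whenever `target` arrives as a tuple. Below is a minimal sketch (no PyTorch needed) that reproduces the failing pattern from `_worker` and a hedged fix that coerces `target` to a list first; `worker_call_broken`, `worker_call_fixed`, and `dummy_module` are stand-in names for illustration, not the actual code in parallel.py.

```python
# Stand-in for the failing line: list(input) + target breaks when target is a tuple.
def worker_call_broken(module, input, target, **kwargs):
    return module(*(list(input) + target), **kwargs)  # TypeError if target is a tuple

# Possible fix: coerce both sides to lists so concatenation always succeeds.
def worker_call_fixed(module, input, target, **kwargs):
    return module(*(list(input) + list(target)), **kwargs)

# Dummy module that just echoes its positional arguments.
def dummy_module(*args, **kwargs):
    return args

inputs = (1, 2)   # per-GPU inputs often arrive as a tuple
targets = (3,)    # targets may also be a tuple, triggering the error

try:
    worker_call_broken(dummy_module, inputs, targets)
except TypeError:
    pass  # reproduces the reported "can only concatenate list (not \"tuple\") to list"

result = worker_call_fixed(dummy_module, inputs, targets)
```

If parallel.py follows the PyTorch-Encoding `_worker` pattern, changing `target` to `list(target)` at that line may be enough; that is an assumption based on the traceback, not a confirmed fix for this repo.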
I found a possible solution in your references (link: https://github.com/zhanghang1989/PyTorch-Encoding/issues/116), but validation still fails at:
outputs = self.model(image)
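A guess at the validation failure: if the model is wrapped in a DataParallelModel-style wrapper (as in PyTorch-Encoding), `self.model(image)` returns a list with one output per GPU rather than a single tensor, so validation code written for a single output breaks. A minimal sketch of normalizing that return value; `normalize_outputs` is a hypothetical helper name, and the per-GPU structure assumed here may differ from this repo's.

```python
# Hypothetical helper: flatten DataParallelModel-style outputs into a list
# of primary predictions, one per GPU, so validation code can loop over them.
def normalize_outputs(outputs):
    # A parallel wrapper may return [gpu0_out, gpu1_out, ...], where each
    # element can itself be a tuple such as (main_out, aux_out).
    if isinstance(outputs, (list, tuple)):
        return [o[0] if isinstance(o, (list, tuple)) else o for o in outputs]
    # Single-device case: wrap the lone output for a uniform interface.
    return [outputs]
```

Usage would be something like `for pred in normalize_outputs(self.model(image)): ...` in the validation loop; again, this is a sketch under the assumption that the wrapper returns per-GPU lists.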
I am training with the DenseASPP model. Could you give me some ideas about this? Thank you.