I found the following in `_resume_checkpoint`:
```python
def _resume_checkpoint(self, resume_path):
    """
    Resume from saved checkpoints

    :param resume_path: Checkpoint path to be resumed
    """
    self.logger.info("Loading checkpoint: {} ...".format(resume_path))
    checkpoint = torch.load(resume_path)
    self.start_epoch = checkpoint['epoch'] + 1
    self.monitor_best = checkpoint['monitor_best']
    self.model.load_state_dict(checkpoint['state_dict'])
    self.optimizer.load_state_dict(checkpoint['optimizer'])
    if self.with_cuda:
        for state in self.optimizer.state.values():
            for k, v in state.items():
                if isinstance(v, torch.Tensor):
                    state[k] = v.cuda(self.gpus)  # <-- bug: self.gpus is a dict, so this call fails and multi-GPU training breaks
    self.train_logger = checkpoint['logger']
    self.config = checkpoint['config']
    self.logger.info("Checkpoint '{}' (epoch {}) loaded".format(resume_path, self.start_epoch))
```
When I fine-tune from the pretrained model, it cannot train with multiple GPUs.
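For reference, a minimal sketch of a possible workaround: instead of passing `self.gpus` (a dict) to `.cuda()`, move the restored optimizer state to a single concrete device with `Tensor.to()`. The helper name `move_optimizer_state_to_device` and the assumption that one target device is sufficient are mine, not from the repository:

```python
import torch

def move_optimizer_state_to_device(optimizer, device):
    """Move every tensor in the optimizer's state dict to `device`.

    Sketch of a fix: after optimizer.load_state_dict(), the restored
    state tensors may live on CPU, so they are moved to the device the
    model actually trains on (e.g. torch.device('cuda:0')), rather than
    calling v.cuda(self.gpus) with a dict argument.
    """
    for state in optimizer.state.values():
        for k, v in state.items():
            if isinstance(v, torch.Tensor):
                state[k] = v.to(device)
```

In `_resume_checkpoint` this could replace the inner loop, with `device` derived from the first configured GPU id (for example `torch.device('cuda:{}'.format(gpu_ids[0]))` if the config exposes a list of ids); under `DataParallel`, keeping the optimizer state on the primary device is typically enough.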
Could you fix this bug?
Thank you very much.