I am trying to run inference with a model trained on 2 GPUs, with distributed_run=True and fp16_run=False set in hparams.py. The command used to train the model was python -m multiproc train.py -o [output path] -l [log path].
Training works fine, but when I load the model for inference, the following error message occurs. I do not know what the problem is. (The same error occurs when I run inference with a model trained on only one GPU.)
I have searched for ways to solve this but have not found one. I would appreciate any suggestions. Thanks in advance.
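For reference, one thing I considered: checkpoints saved from a model wrapped in DistributedDataParallel often carry a "module." prefix on every parameter name, which makes load_state_dict fail on an unwrapped model. A minimal sketch of stripping that prefix is below (strip_module_prefix is a hypothetical helper of mine, not part of the repository; and since the same error appears with a single-GPU model, this may not be the cause here):

```python
def strip_module_prefix(state_dict):
    """Remove the 'module.' prefix that DistributedDataParallel adds
    to parameter names, so the keys match a plain model at inference."""
    prefix = 'module.'
    return {k[len(prefix):] if k.startswith(prefix) else k: v
            for k, v in state_dict.items()}

# Hypothetical usage at inference time (assumes the checkpoint stores the
# weights under a 'state_dict' key, as train.py-style trainers often do):
#   import torch
#   checkpoint = torch.load(checkpoint_path, map_location='cpu')
#   model.load_state_dict(strip_module_prefix(checkpoint['state_dict']))
```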