Open Stasiche opened 2 years ago
Model I am using (ListenAttendSpell, Transformer, Conformer ...): ConformerLstmModel
I didn't have enough memory to use the GPU, but I wanted to try one thing, so I set "trainer=cpu". I got "RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper__index_select)" at https://github.com/openspeech-team/openspeech/blob/main/openspeech/decoders/lstm_attention_decoder.py#L140
As far as I can see, the cause is that selecting the device this way is not safe: https://github.com/openspeech-team/openspeech/blob/main/openspeech/decoders/lstm_attention_decoder.py#L243 My graphics card is still installed, so torch.cuda.is_available() == True, but the model is on the CPU.
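For illustration, here is a minimal sketch of the problem (not the actual openspeech code; the class and names are made up): if a module picks its device from torch.cuda.is_available() instead of from where its parameters actually live, running the model on CPU with a GPU installed mixes devices and triggers exactly this error. A safer pattern is to derive the device from the module's own parameters or from an input tensor.

```python
import torch
import torch.nn as nn


class Decoder(nn.Module):
    """Illustrative decoder reproducing the device-selection bug."""

    def __init__(self, vocab_size: int = 10, hidden_dim: int = 8) -> None:
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden_dim)
        # Problematic: the device is chosen from global CUDA availability,
        # not from where the module's parameters actually are.
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    def forward(self, targets: torch.Tensor) -> torch.Tensor:
        # With the model on CPU but a GPU installed, this moves the indices
        # to cuda:0 while the embedding weights stay on CPU, so the lookup
        # fails with "Expected all tensors to be on the same device".
        return self.embedding(targets.to(self.device))


class FixedDecoder(Decoder):
    def forward(self, targets: torch.Tensor) -> torch.Tensor:
        # Safer: read the device from the module's own parameters
        # (or from an input tensor) instead of torch.cuda.is_available().
        device = next(self.parameters()).device
        return self.embedding(targets.to(device))
```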
Thank you for letting us know. We'll check. cc. @upskyy