Open fanlamda opened 7 years ago
In order to switch back to CPU mode you would also need to call `self.model:float()` as well as `cudnn.convert(self.model, nn)`.
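A minimal sketch of the conversion described above (`self.model` is assumed to be a network trained with cudnn layers):

```lua
require 'cudnn'
require 'nn'

-- Swap cudnn-specific layers for their nn equivalents where a mapping
-- exists (e.g. cudnn.SpatialConvolution -> nn.SpatialConvolution).
cudnn.convert(self.model, nn)

-- Move the parameters from CUDA tensors back to host FloatTensors.
self.model:float()
```

Note that `cudnn.convert` leaves untouched any layer that has no `nn` counterpart, which is why the order of the two calls alone is not always enough.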
Yes, I used `self.model:float()` after `cudnn.convert()`, but I found that `cudnn.BatchBRNNReLU` remains unchanged. It raises an error like 'unknown object cudnn.BatchBRNNReLU'.
Ah, this is my fault: there is no CPU version of `cudnn.BatchBRNNReLU`, so `cudnn.convert` has nothing to map it to. I'll have to modify the `DeepSpeechModel` class to take this into consideration. The alternative is to use the `rnn` package's `SeqBRNN`, which does allow conversion, but that will involve training a new model.
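A rough sketch of that alternative; `nn.SeqBRNN` comes from the `rnn` package and has a CPU implementation, so a model built with it converts cleanly. The layer sizes here are hypothetical, for illustration only:

```lua
require 'rnn'  -- provides nn.SeqBRNN

-- Hypothetical sizes for illustration.
local inputSize, hiddenSize = 672, 1760

-- Instead of the cudnn-only fused layer:
--   cudnn.BatchBRNNReLU(inputSize, hiddenSize)
-- use the pure-nn bidirectional RNN, which runs on both CPU and GPU:
local brnn = nn.SeqBRNN(inputSize, hiddenSize)
```

The trade-off is speed: the fused cudnn kernel is considerably faster to train, which is why retraining is needed rather than converting an existing checkpoint.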
I'm training a smaller model on AN4 that is CPU-based (but training on GPU with a few hacks). I'll add this to the pre-trained networks once it's finished.
I don't have a GPU and I am still learning. Is there any small pretrained model that will work with only a CPU (a Core i5 laptop, or a Raspberry Pi 3)?
I wish to train the model on GPU but predict with CPU, so I tried using `cudnn.convert(self.model, nn)`. However, it seems something in the model remains in CUDA form.
Is there any method to solve this problem? Any advice is welcome.