SeanNaren / deepspeech.torch

Speech Recognition using DeepSpeech2 network and the CTC activation function.
MIT License

change model from gpu to cpu #65

Open fanlamda opened 7 years ago

fanlamda commented 7 years ago

I wish to train the model on GPU, but predict with CPU, so I tried using `cudnn.convert(self.model, nn)`. But it seems something in the model remains in CUDA form.

Is there any method to solve this problem? Any advice is welcome.

SeanNaren commented 7 years ago

In order to switch back to CPU mode you would also need to call `self.model:float()` as well as `cudnn.convert(self.model, nn)`.
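A minimal sketch of the full conversion, assuming `model` is a network built with the cudnn backend (`cudnn.convert` maps cudnn modules to their `nn` equivalents where one exists, and `:float()` moves the parameters and buffers off the GPU):

```lua
require 'nn'
require 'cudnn'

-- Swap cudnn-backed layers (e.g. cudnn.SpatialConvolution) for their
-- nn equivalents, then convert all tensors to CPU float tensors.
cudnn.convert(model, nn)
model:float()

-- Put the network in inference mode before predicting on CPU.
model:evaluate()
```

This only works for modules that have an `nn` counterpart registered with `cudnn.convert`; any cudnn-only module is left in the network unchanged.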

fanlamda commented 7 years ago

Yes, I used `self.model:float()` after `cudnn.convert()`, but I found `cudnn.BatchBRNNReLU` remains unchanged. I get an error like 'unknown object cudnn.BatchBRNNReLU'.

SeanNaren commented 7 years ago

Ah, this is my fault: there is no CPU version of `cudnn.BatchBRNNReLU`. I'll have to modify the DeepSpeechModel class to take this into consideration. The alternative is to use the rnn package's `SeqBRNN`, which will allow conversion, but that will involve training a new model.
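A hedged sketch of that alternative, assuming the Element-Research rnn package is installed; the layer size below is illustrative, not taken from the actual DeepSpeechModel code:

```lua
require 'nn'
require 'rnn' -- Element-Research rnn package, provides nn.SeqBRNN

-- Build the bidirectional recurrent layer from nn.SeqBRNN instead of
-- cudnn.BatchBRNNReLU, so the whole network has a pure-nn CPU path.
local hiddenSize = 400 -- illustrative; use the model's real hidden size
local brnn = nn.SeqBRNN(hiddenSize, hiddenSize)
```

A model built this way can still be trained on GPU via `model:cuda()`, and afterwards `cudnn.convert(model, nn)` plus `model:float()` converts cleanly. Weights already trained with `cudnn.BatchBRNNReLU` cannot simply be loaded into a `SeqBRNN` layer, hence the need to train a new model.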

SeanNaren commented 7 years ago

I'm training a smaller model on AN4 that is CPU-based (but training on GPU with a few hacks). Will add this to the pre-trained networks once finished.

saurabhvyas commented 7 years ago

I don't have a GPU, and I am still learning. Is there any small pretrained model that will work with only a CPU (a Core i5 laptop, or a Raspberry Pi 3)?