macournoyer / neuralconvo

Neural conversational model in Torch
777 stars 346 forks source link

Parameter update happening after every training example #33

Closed vikram-gupta closed 8 years ago

vikram-gupta commented 8 years ago

Hi @macournoyer

We are dividing the training examples into mini-batches while training. However, it looks like we are updating the parameters of the network after every training example.

-- apply momentum to the accumulated gradients
self.encoder:updateGradParameters(self.momentum)
self.decoder:updateGradParameters(self.momentum)
-- take a gradient-descent step
self.decoder:updateParameters(self.learningRate)
self.encoder:updateParameters(self.learningRate)
-- reset accumulated gradients for the next example
self.encoder:zeroGradParameters()
self.decoder:zeroGradParameters()

Are we intentionally doing some kind of online training? Could we instead accumulate the gradients over all the samples of a mini-batch, average them, and then update the parameters once per batch?
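To make the distinction concrete, here is a minimal sketch (in plain Python, on a toy 1-D linear model, not the repo's Torch code) of per-example "online" updates versus a single update from the averaged mini-batch gradient. All names and values here are illustrative:

```python
# Toy model y = w * x with squared loss 0.5 * (w*x - y)^2.
def grad(w, x, y):
    # d/dw of 0.5 * (w*x - y)^2
    return (w * x - y) * x

data = [(x, 2.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]  # true w = 2
lr = 0.1

# Per-example (online) updates: parameters change between examples,
# so later gradients are computed at a moved parameter value.
w_online = 0.0
for x, y in data:
    w_online -= lr * grad(w_online, x, y)

# Mini-batch update: accumulate gradients at the SAME parameter value,
# average, then apply one update for the whole batch.
w_batch = 0.0
g = sum(grad(w_batch, x, y) for x, y in data) / len(data)
w_batch -= lr * g

print(w_online, w_batch)
```

With per-example updates the effective step depends on example order and the learning rate acts per sample; with the averaged batch gradient each batch contributes one update of comparable scale regardless of batch size.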

Thanks, Vikram

macournoyer commented 8 years ago

Check out #29

vikram-gupta commented 8 years ago

Thanks @macournoyer