cjlin1 / simpleNN


The difference between Keras SGD and tf.train.MomentumOptimizer #7

Open djshen opened 4 years ago

djshen commented 4 years ago

There was a discussion about whether we can replace tf.compat.v1.train.MomentumOptimizer with tf.keras.optimizers.SGD. To find the difference, I read the documentation and traced the source code.

The update rule of SGD is:

v_{t+1} ← α·v_t − η·∇f(θ_t)
θ_{t+1} ← θ_t + v_{t+1}

And that of MomentumOptimizer is:

v_{t+1} ← α·v_t − ∇f(θ_t)
θ_{t+1} ← θ_t + η·v_{t+1}

The difference is that SGD multiplies the learning rate into the velocity update (the first step), while MomentumOptimizer multiplies it into the parameter update (the second step).
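For concreteness, here is a minimal sketch of the two rules in plain Python (the function names and the scalar, single-parameter setting are mine, just for illustration):

```python
def sgd_step(theta, v, grad, lr, momentum):
    # Keras SGD: the learning rate scales the gradient inside the velocity update
    v = momentum * v - lr * grad
    theta = theta + v
    return theta, v

def momentum_optimizer_step(theta, v, grad, lr, momentum):
    # MomentumOptimizer: the learning rate scales the velocity in the parameter update
    v = momentum * v - grad
    theta = theta + lr * v
    return theta, v
```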

If the learning rate η is constant, the two formulas are mathematically equivalent: by induction, the SGD velocity is exactly η times the MomentumOptimizer velocity, so the parameter updates coincide (up to small floating-point differences). However, if the learning rate changes during training, the results will differ. Hope this answers the question.
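As a quick sanity check, here is a toy experiment (reusing the step functions sketched above, on f(θ) = θ²/2 so that ∇f(θ) = θ; this is illustrative code, not the library implementation) showing that the two trajectories coincide under a constant learning rate but drift apart once the learning rate decays:

```python
def run(step_fn, lrs, momentum=0.9, theta0=1.0):
    theta, v = theta0, 0.0
    for lr in lrs:
        grad = theta  # gradient of the toy objective f(theta) = theta**2 / 2
        theta, v = step_fn(theta, v, grad, lr, momentum)
    return theta

const_lr = [0.1] * 50
decay_lr = [0.1 / (1 + t) for t in range(50)]

# Constant learning rate: identical up to floating-point rounding
print(run(sgd_step, const_lr), run(momentum_optimizer_step, const_lr))
# Decaying learning rate: the two optimizers end at different parameters
print(run(sgd_step, decay_lr), run(momentum_optimizer_step, decay_lr))
```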

BTW, I find that the update rule in the slides matches SGD rather than MomentumOptimizer.