ShifuML / shifu

An end-to-end machine learning and data mining framework on Hadoop
https://github.com/ShifuML/shifu/wiki
Apache License 2.0

Apply data shuffle in training for mini-batch gradient update #744

Open junshiguo opened 3 years ago

junshiguo commented 3 years ago

For mini-batch gradient update, we need to shuffle the training data so that the actual training inputs for each iteration are different and random. This should improve model performance given the same number of training iterations.

Applicable models include NN, LR, and WDL.
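A common way to implement this is a per-epoch index shuffle: rather than reordering records on disk, shuffle an array of record indices before each epoch and slice it into mini-batches. The sketch below is a minimal, self-contained illustration in plain Java; the class and method names are hypothetical and do not reflect Shifu's actual trainer API.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class MiniBatchShuffler {

    // Hypothetical helper: returns the record indices for each mini-batch
    // of one epoch, visiting all records in a freshly shuffled order.
    public static List<int[]> epochBatches(int numRecords, int batchSize, Random rng) {
        List<Integer> indices = new ArrayList<>(numRecords);
        for (int i = 0; i < numRecords; i++) {
            indices.add(i);
        }
        // Shuffle once per epoch so each epoch sees a new random ordering
        Collections.shuffle(indices, rng);

        List<int[]> batches = new ArrayList<>();
        for (int start = 0; start < numRecords; start += batchSize) {
            int end = Math.min(start + batchSize, numRecords);
            int[] batch = new int[end - start];
            for (int j = start; j < end; j++) {
                batch[j - start] = indices.get(j);
            }
            batches.add(batch);
        }
        return batches;
    }
}
```

Calling `epochBatches` once per epoch (with a shared `Random` instance) yields different mini-batch compositions each epoch while still covering every record exactly once, which is the property the mini-batch update relies on.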