jeffheaton / encog-java-core

http://www.heatonresearch.com/encog

Parameter batchSize in constructor StochasticGradientDescent #223

Closed: drseb closed this issue 7 years ago

drseb commented 8 years ago

Hi,

can you please clarify why the batchSize parameter in the StochasticGradientDescent constructor is unused?

```java
public StochasticGradientDescent(final ContainsFlat network, final MLDataSet training,
        final int batchSize, final double theLearnRate, final double theMomentum) {
```

I came across this because I am currently having a hard time understanding why training does not use more CPUs. I have around 20,000 training instances and 30 CPUs available, but training only uses 5 of them. Any help is greatly appreciated.

Thanks, Sebastian

jeffheaton commented 7 years ago

I had to make several fixes to SGD, so this will work better in 3.4. Currently SGD is not multithreaded, but I hope to add some options for that soon. The regular backpropagation/RPROP trainers have better multithreading support at this point.
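For anyone landing here with the same CPU-utilization problem, here is a minimal sketch of the multithreaded RPROP alternative Jeff mentions. It assumes Encog 3.x's ResilientPropagation and the setThreadCount method that Propagation trainers expose (passing 0 asks Encog to auto-detect a thread count); the XOR data and network topology are purely illustrative:

```java
import org.encog.Encog;
import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

public class MultithreadedRprop {
    public static void main(String[] args) {
        // Toy XOR dataset; replace with your own ~20,000-instance set.
        double[][] input = { {0, 0}, {1, 0}, {0, 1}, {1, 1} };
        double[][] ideal = { {0}, {1}, {1}, {0} };
        MLDataSet trainingSet = new BasicMLDataSet(input, ideal);

        // Simple 2-3-1 feedforward network.
        BasicNetwork network = new BasicNetwork();
        network.addLayer(new BasicLayer(null, true, 2));
        network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 3));
        network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
        network.getStructure().finalizeStructure();
        network.reset();

        ResilientPropagation train = new ResilientPropagation(network, trainingSet);
        // 0 = let Encog auto-detect the worker-thread count;
        // a positive value pins it (e.g. 30 for the machine described above).
        train.setThreadCount(0);

        do {
            train.iteration();
        } while (train.getError() > 0.01);
        train.finishTraining();
        Encog.getInstance().shutdown();
    }
}
```

Note that Encog divides the training set among its workers, so very small datasets may not use all requested threads even with RPROP; with tens of thousands of instances the thread count should matter.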

jeffheaton commented 7 years ago

Included in 3.4, but not multithreaded at this point. Multithreaded SGD is a bigger change, especially for fully online training (batch size of 1).
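For reference, a minimal sketch of driving SGD through the five-argument constructor quoted at the top of this issue, under the assumption that this signature is still available in 3.4 (where the batchSize argument is honored); the import path is assumed from Encog's package layout, the hyperparameter values are illustrative, and network/trainingSet are built as in the RPROP sketch above:

```java
import org.encog.neural.networks.training.propagation.sgd.StochasticGradientDescent;

// batchSize = 32, learning rate = 0.01, momentum = 0.9 (illustrative values).
StochasticGradientDescent sgd =
        new StochasticGradientDescent(network, trainingSet, 32, 0.01, 0.9);

// Standard MLTrain loop: iterate until error drops below 1%.
do {
    sgd.iteration();
} while (sgd.getError() > 0.01);
sgd.finishTraining();
```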