elbow-jason / annex

Artificial Neural Networks in Elixir
MIT License

Add Annex.Optimizer behaviour and Annex.Optimizer.SGD #29

Closed · elbow-jason closed this 5 years ago

elbow-jason commented 5 years ago

This PR adds the Annex.Optimizer behaviour.

So far, the Annex.Optimizer behaviour defines only one function: train/3. Its specification is identical to the Annex.Learner train/3 callback.
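For reference, here is a minimal sketch of what the behaviour might look like; the callback name and arity come from this PR, but the exact typespecs are assumptions:

```elixir
defmodule Annex.Optimizer do
  @moduledoc """
  Behaviour for modules that can train a learner against a dataset.
  """

  # train/3 mirrors the Annex.Learner train/3 callback: it receives a
  # learner, a dataset, and keyword options, and returns the trained
  # learner along with any training output (the return shape here is
  # an assumption).
  @callback train(learner :: struct(), dataset :: any(), opts :: Keyword.t()) ::
              {struct(), any()}
end
```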

It may be a good idea to create an Annex.Learner.Trainer behaviour. This would DRY up the train/3 specification, which currently lives in two places but must, in fact, be identical in both.

Additionally, this PR adds Annex.Optimizer.SGD, an Annex.Optimizer implementation that runs mini-batch (or unbatched) stochastic gradient descent.
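In sketch form, the core of such an optimizer could look like the following. This is not the PR's actual implementation; the option names (:epochs, :batch_size) and the train_batch/2 helper are illustrative assumptions:

```elixir
defmodule Annex.Optimizer.SGDSketch do
  @behaviour Annex.Optimizer

  @impl true
  def train(learner, dataset, opts) do
    epochs = Keyword.get(opts, :epochs, 1)
    # With no :batch_size given, the whole dataset is a single batch,
    # i.e. plain (unbatched) gradient descent.
    batch_size = Keyword.get(opts, :batch_size, length(dataset))

    Enum.reduce(1..epochs, learner, fn _epoch, acc ->
      dataset
      |> Enum.shuffle()
      |> Enum.chunk_every(batch_size)
      |> Enum.reduce(acc, fn batch, learner_acc ->
        train_batch(learner_acc, batch)
      end)
    end)
  end

  # Placeholder for one forward/backward pass plus weight update per
  # batch; the real implementation would delegate to the learner.
  defp train_batch(learner, _batch), do: learner
end
```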

elbow-jason commented 5 years ago

Also, the number of epochs required to train a network has decreased. The term epoch formerly, and mistakenly, counted the number of times the network had been trained on a single input and its matching label. That was incorrect: an epoch is one full iteration through the (possibly batched) dataset.
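A small example of the corrected counting, using a hypothetical four-example dataset:

```elixir
dataset = [
  {[0.0, 0.0], [0.0]},
  {[0.0, 1.0], [1.0]},
  {[1.0, 0.0], [1.0]},
  {[1.0, 1.0], [0.0]}
]

# With a batch size of 2, one pass over the dataset performs 2 weight
# updates, but it still counts as exactly 1 epoch, not 4 as the old
# per-example counting implied.
batches = Enum.chunk_every(dataset, 2)
length(batches)
#=> 2
```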