As with most of our other implementations, it's probably a good idea to follow the design used in the Python API. In the case of GradientDescentOptimizer, they've built an abstract Optimizer class that GradientDescentOptimizer (and a few other optimizers) inherit from.
I'm not sure if this is strictly necessary for the 0.1.0 Milestone, however having at least one optimizer/trainer would be great.
Python documentation: https://www.tensorflow.org/versions/r0.10/api_docs/python/train.html#GradientDescentOptimizer
Python implementation: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/gradient_descent.py#L27
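To make the proposed class split concrete, here is a minimal Python sketch of the pattern: an abstract Optimizer base class with the update rule supplied by subclasses. The method name apply_gradients mirrors the TF API, but everything else here is simplified for illustration — it operates on plain floats and returns updated values directly, rather than constructing graph ops the way the real implementation does.

```python
from abc import ABC, abstractmethod

class Optimizer(ABC):
    """Abstract base class: subclasses define how a gradient updates a variable."""

    @abstractmethod
    def apply_gradients(self, grads_and_vars):
        """Apply updates for a list of (gradient, variable) pairs."""

class GradientDescentOptimizer(Optimizer):
    """Plain gradient descent: var <- var - learning_rate * grad."""

    def __init__(self, learning_rate):
        self.learning_rate = learning_rate

    def apply_gradients(self, grads_and_vars):
        # Returns the updated variable values (simplified; no graph ops).
        return [var - self.learning_rate * grad for grad, var in grads_and_vars]

opt = GradientDescentOptimizer(0.5)
print(opt.apply_gradients([(2.0, 3.0)]))  # [2.0]
```

Other optimizers (Momentum, Adam, ...) would then only need to override the update step, which is the main payoff of the abstract base class in the Python implementation.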