Before we can implement Polyak-Ruppert averaging for C++'s gradient descent function, we need the parameter struct to hold this value.
Currently, the moe.optimal_learning.python.cpp_wrappers.optimization.GradientDescentParameters wrapper takes in a num_steps_averaged field but does not pass it on to C++. The field is stored locally in Python so the data is not lost.
We need to:
- add num_steps_averaged to the C++ GradientDescentParameters struct
- update docs to describe averaging
- remove the local copy stored in the Python wrapper object
BLOCKING: #390