This is my first pull request ever, so hopefully it's not too far off from usable. I'm having some difficulty testing my code: when I plugged my Adam optimizer into the example, the trained network guessed 0.0 all four times, while the default MLP code in the example folder worked as expected.
For the overall structure of the code I did my best to follow the RMSprop optimizer.
EDIT: Fixed some math errors and it now seems to operate as expected: the cost decreases and it passes the example test. I also reduced the memory usage of the previous version by using the in-place util.
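For reviewers who want to sanity-check the math independently of this framework's API, here is a minimal NumPy sketch of the standard Adam update rule (Kingma & Ba). The function name and signature are hypothetical illustration only, not the code in this PR; the in-place array updates mirror the memory-saving in-place style mentioned above.

```python
import numpy as np

def adam_step(param, grad, m, v, t,
              lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update step (hypothetical sketch, not this PR's API).

    m, v: running first/second moment estimates, same shape as param.
    t:    1-based timestep, needed for bias correction.
    All arrays are modified in place.
    """
    # Update biased moment estimates in place.
    m *= beta1
    m += (1 - beta1) * grad
    v *= beta2
    v += (1 - beta2) * grad ** 2
    # Bias-corrected estimates (this was the easy spot for math errors:
    # forgetting the 1 - beta**t correction makes early steps too small).
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Parameter update.
    param -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return param
```

As a quick check that the cost decreases, running this on f(w) = w² from w = 1.0 drives w toward 0.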