-
_Suggestion for improvement:_
A port of [Manopt's stochastic gradient solver](http://www.manopt.org/reference/manopt/solvers/stochasticgradient/stochasticgradient.html) would be useful for problems…
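For illustration, a minimal sketch of what the inner loop of such a solver could look like. Everything here is an assumption made for the sketch: the `manifold` object with its `retraction` method and the per-summand `partial_gradient` callback are hypothetical names, not Manopt's actual interface.

```
import random

def stochastic_gradient(manifold, partial_gradient, x0, num_samples,
                        stepsize=0.1, max_iters=1000):
    """Hypothetical Riemannian SGD loop (names are illustrative).

    partial_gradient(x, i)    -> Riemannian gradient of the i-th cost summand
    manifold.retraction(x, v) -> maps the tangent step v at x onto the manifold
    """
    x = x0
    for _ in range(max_iters):
        i = random.randrange(num_samples)             # sample one summand
        grad = partial_gradient(x, i)                 # its gradient at x
        x = manifold.retraction(x, -stepsize * grad)  # step, then retract
    return x
```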
-
At the moment we use a "strawberry" (i.e. plain) gradient descent method on the error surface given by the respective error function.
The question now is: which method is best to use?
A candidate is **Stochastic Gradient …
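For comparison, a minimal sketch of the two update rules, assuming a list `grads` of per-sample gradient functions (illustrative names, scalar parameter for brevity):

```
import random

def batch_gd_step(w, grads, lr=0.01):
    """Full-batch step: average the gradient over every training sample."""
    g = sum(grad(w) for grad in grads) / len(grads)
    return w - lr * g

def sgd_step(w, grads, lr=0.01):
    """Stochastic step: follow the gradient of one randomly chosen sample."""
    return w - lr * random.choice(grads)(w)
```

Stochastic gradient descent trades the exact descent direction for much cheaper iterations, and the gradient noise can help escape shallow local minima of the error surface.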
-
| Team Name | Affiliation |
|---|---|
| TheUnreasonableOne | None |
- Paper: [On the Computational Inefficiency of Large Batch Sizes for Stochastic Gradient Descent](https://openreview.net/pdf?i…
-
There is no example for Stochastic Gradient Descent in Chapter 8. I have tried to write one.
```
import random

print("using minimize_stochastic_batch")
x = list(range(101))
# assumed completion of the truncated line: noisy samples of y = 3x
y = [3 * x_i + random.randint(-10, 10) for x_i in x]
```
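Since the snippet calls `minimize_stochastic_batch`, here is a sketch of what that helper could look like, modelled on the chapter's `minimize_stochastic`; the signature and body below are my assumption, not the book's code:

```
import random

def minimize_stochastic_batch(target_fn, gradient_fn, x, y, theta_0,
                              alpha_0=0.01, batch_size=10):
    """Assumed sketch: mini-batch SGD with a step-size backoff.

    target_fn(x_i, y_i, theta)   -> per-sample error
    gradient_fn(x_i, y_i, theta) -> per-sample gradient (list of floats)
    """
    data = list(zip(x, y))
    theta, alpha = theta_0, alpha_0
    min_theta, min_value = None, float("inf")
    no_improvement = 0

    while no_improvement < 100:
        value = sum(target_fn(x_i, y_i, theta) for x_i, y_i in data)
        if value < min_value:
            min_theta, min_value = theta, value  # new best: reset backoff
            no_improvement, alpha = 0, alpha_0
        else:
            no_improvement += 1                  # stuck: shrink the step
            alpha *= 0.9

        random.shuffle(data)
        for start in range(0, len(data), batch_size):
            batch = data[start:start + batch_size]
            # average the gradient over the mini-batch, then take one step
            grads = [gradient_fn(x_i, y_i, theta) for x_i, y_i in batch]
            avg = [sum(g) / len(batch) for g in zip(*grads)]
            theta = [t - alpha * g for t, g in zip(theta, avg)]
    return min_theta
```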
-
@hoangminhquan-lhsdt @nguyenngoclannhu
I'm sending a review of the chapter summary and the section headings of Chapter 2, [this version](https://github.com/hoangminhquan-lhsdt/optimizers/tree/cd48c4b48201e8051d0ea7d1…
-
Currently only stochastic gradient descent is supported; at a minimum it would be nice to also support the following (a sketch of the momentum variants follows the list):
- [ ] RMSProp
- [x] Adam
- [x] SGD with Momentum
- [x] SGD with Nesterov Momentum
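For reference, a minimal sketch of the two momentum variants' update rules (plain Python over lists of floats; function and parameter names are illustrative, not this project's API):

```
def momentum_step(w, v, grad, lr=0.01, momentum=0.9):
    """Classical momentum: accumulate a velocity, then step along it."""
    v = [momentum * v_i + g_i for v_i, g_i in zip(v, grad(w))]
    w = [w_i - lr * v_i for w_i, v_i in zip(w, v)]
    return w, v

def nesterov_step(w, v, grad, lr=0.01, momentum=0.9):
    """Nesterov momentum: take the gradient at the look-ahead point."""
    lookahead = [w_i - lr * momentum * v_i for w_i, v_i in zip(w, v)]
    v = [momentum * v_i + g_i for v_i, g_i in zip(v, grad(lookahead))]
    w = [w_i - lr * v_i for w_i, v_i in zip(w, v)]
    return w, v
```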
-
### Reason/inspiration (optional)
"We would like a new term entry in the `AI` concept for [neural-network](https://www.codecademy.com/resources/docs/ai/neural-networks): Learning Rate Schedule. The…
-
### Reason/inspiration (optional)
"We would like a new term entry in the `AI` concept for [neural-network](https://www.codecademy.com/resources/docs/ai/neural-networks): Adam Optimization. The entr…
-
I feel the so-called **gradient descent** algorithm is too narrow in scope.
I think I already have a more efficient approach.
-
It would also be good to optimise the stochastic gradient descent methods.