-
### Idea Contribution
- [X] I have read all the feature request issues.
- [X] I'm interested in working on this issue
- [X] I'm part of the GSSOC organization
### Explain feature request
Deep Learning …
-
Let me clarify the confusion about the orthogonal directions obtained with the optimal alpha on slide 66/127 of the lecture "Stochastic gradient descent".
Please read this while looking at the slide.
…
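To make the orthogonality concrete, here is a short derivation of the standard result, assuming the slide uses the usual steepest-descent setup with exact line search; the notation (f, x_k, alpha_k) is mine and may not match the slide exactly.

```latex
% Exact line search: alpha_k minimizes phi(alpha) = f(x_k - alpha * grad f(x_k)),
% where x_{k+1} = x_k - alpha_k * grad f(x_k).
\[
  \phi(\alpha) = f\bigl(x_k - \alpha \nabla f(x_k)\bigr),
  \qquad
  0 = \phi'(\alpha_k) = -\nabla f(x_{k+1})^{\top} \nabla f(x_k).
\]
% So the new gradient is orthogonal to the previous search direction,
% i.e. consecutive steepest-descent directions are orthogonal:
% \nabla f(x_{k+1}) \perp \nabla f(x_k).
```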
-
Hi,
Thanks for releasing the code for active-qa.
After browsing the code, I did not find Monte-Carlo Sampling in the training stage. It seems that each training instance consists of only one 「…
-
I can see that Stochastic Gradient Descent has already been implemented, but linear regression works using simple (batch) gradient descent. What are the challenges in implementing SGD for Linear Regression?
…
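For reference, here is a minimal sketch of what an SGD loop for linear regression could look like; the function and variable names are my own and not this library's API, and the loss is assumed to be ordinary least squares.

```python
import numpy as np

def sgd_linear_regression(X, y, lr=0.01, epochs=100, seed=0):
    """Minimal SGD sketch for linear regression (hypothetical names, not the repo's API)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):      # visit samples in a random order each epoch
            pred = X[i] @ w + b
            err = pred - y[i]             # gradient of 0.5 * (pred - y)^2 w.r.t. pred
            w -= lr * err * X[i]
            b -= lr * err
    return w, b
```

The main differences from batch gradient descent are the per-sample updates and the shuffling; the practical challenges are mostly about choosing a learning-rate schedule and coping with the noisier updates.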
-
Most of the methods in the list will be implemented in order.
- inference for Sparse Gaussian process regression (based on JMLR 2005 "A unifying view of sparse approximate Gaussian process regression…
-
artificial neuron - by McCulloch and Pitts (1943)
perceptron (感知器) - by Rosenblatt (1958)
backpropagation (反向传播) - 1974
feedforward
deep feedforward neural networks - modern techniques
Stee…
-
> [!NOTE]
> If you have a request to support a specific method, or would like one of the listed methods prioritized, please open a separate issue so it won't get buried in this thread. Base…
-
Hi Angus, have you ever considered fitting the GMM with a gradient-based method instead of EM, as discussed here: https://stats.stackexchange.com/questions/64193/fitting-a-gaussian-mixture-mod…
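For concreteness, here is a small sketch of what a gradient-based fit could look like: parameterize the mixture weights with a softmax and the standard deviations in log space so the parameters are unconstrained, then minimize the negative log-likelihood with a generic optimizer. All names here are illustrative and not part of this repository.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp, softmax

def gmm_neg_log_likelihood(params, x, k):
    """Negative log-likelihood of a 1-D GMM with unconstrained parameters (sketch only)."""
    logits, means, log_stds = params[:k], params[k:2 * k], params[2 * k:]
    log_weights = logits - logsumexp(logits)          # softmax in log space
    stds = np.exp(log_stds)
    # log N(x | mu_j, sigma_j) for every (sample, component) pair
    log_comp = (-0.5 * ((x[:, None] - means) / stds) ** 2
                - np.log(stds) - 0.5 * np.log(2 * np.pi))
    return -logsumexp(log_weights + log_comp, axis=1).sum()

# Toy data: two well-separated Gaussian clusters.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 1.0, 300)])

k = 2
init = np.concatenate([np.zeros(k), rng.normal(0, 1, k), np.zeros(k)])
res = minimize(gmm_neg_log_likelihood, init, args=(x, k), method="L-BFGS-B")
logits, means, log_stds = res.x[:k], res.x[k:2 * k], res.x[2 * k:]
print("weights:", softmax(logits), "means:", means, "stds:", np.exp(log_stds))
```

Unlike EM, this needs no closed-form M-step, but the constraints (weights summing to one, positive variances) have to be handled through the reparameterization.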
-
This is an interesting stochastic optimizer with some nice theoretical guarantees for convex problems. It would be interesting to compare it to the others we have already implemented.
https://papers.nips.c…
-
Currently only stochastic gradient descent is supported; at the very minimum it would be nice to support the following (a minimal sketch of the RMSProp update is included after the list):
- [ ] RMSProp
- [x] Adam
- [x] SGD with Momentum
- [x] SGD with Nesterov Momentum
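Since RMSProp is the remaining unchecked item, here is a minimal sketch of its update rule. The names and signature are hypothetical and not tied to this library's API.

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=0.001, decay=0.9, eps=1e-8):
    """One RMSProp update (illustrative sketch, not this library's implementation).

    `cache` is a running average of squared gradients; dividing by its square
    root rescales each parameter's step individually.
    """
    cache = decay * cache + (1 - decay) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache
```

The per-parameter state (`cache` here, velocity for momentum, first/second moments for Adam) is the main design question: the optimizer interface needs somewhere to keep that state between updates.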