-
Hi! I don't know if I got it right from reading the documentation and examples. However, my question is: in order to train a neural network in full batch mode (that is, using all the available instances), is …
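For concreteness, here is a minimal sketch of what full-batch training usually looks like, assuming a PyTorch-style loop (the library in question isn't shown above): every parameter update is computed from the entire dataset, which in frameworks with a `batch_size` argument is typically achieved by setting `batch_size` equal to the number of instances.
``` python
import torch
from torch import nn

# Toy data: 200 instances with 10 features each (illustrative only).
X = torch.randn(200, 10)
y = torch.randn(200, 1)

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# Full-batch training: each step uses all available instances,
# so there is exactly one parameter update per epoch.
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # forward pass over the whole dataset
    loss.backward()               # gradient computed from all instances
    optimizer.step()
```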
-
_Suggestion for improvement:_
A port of [stochastic gradient based on Manopt](http://www.manopt.org/reference/manopt/solvers/stochasticgradient/stochasticgradient.html) would be useful for problems…
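As a rough illustration of what such a solver does (this is not Manopt's implementation, just a NumPy sketch of Riemannian stochastic gradient descent on the unit sphere): at each step a single term of the cost is sampled, its Euclidean gradient is projected onto the tangent space, and the iterate is retracted back onto the manifold.
``` python
import numpy as np

rng = np.random.default_rng(0)

# Minimize f(x) = (1/N) * sum_i (a_i . x)^2 on the unit sphere ||x|| = 1
# (the minimizer is the smallest eigenvector of A^T A / N).
N, d = 1000, 5
A = rng.normal(size=(N, d))

x = rng.normal(size=d)
x /= np.linalg.norm(x)                        # start on the manifold

for t in range(5000):
    i = rng.integers(N)                       # sample one term of the cost
    egrad = 2.0 * (A[i] @ x) * A[i]           # Euclidean gradient of that term
    rgrad = egrad - (x @ egrad) * x           # project onto the tangent space at x
    step = 0.1 / (1 + 0.01 * t)               # decaying step size
    x = x - step * rgrad
    x /= np.linalg.norm(x)                    # retraction: back onto the sphere
```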
-
@kangk9908
https://github.com/kcarnold/cs344-exam-23sp/blob/6a5024bc438f6db811ce74682ee7ba1fc4684112/u02-sa-learning-rate/SLO.md?plain=1#L1
From how I read the SLO from unit 2, I think it more rel…
-
Hello,
I'm encountering an issue using `BatchGradientDescent` with `param_constrainers`.
Here is an example of the issue:
``` python
from pylearn2.optimization.batch_gradient_descent import BatchGr…
```
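The snippet above is cut off, so as context only: the general idea behind a parameter constrainer is a projection applied to the parameters after each gradient update. A plain-NumPy sketch of that pattern (not pylearn2's API) looks like this:
``` python
import numpy as np

rng = np.random.default_rng(0)

# Least-squares objective with a norm constraint on the weights.
X = rng.normal(size=(100, 5))
y = rng.normal(size=100)
w = np.zeros(5)

def constrain_norm(w, max_norm=1.0):
    """Project w back into the feasible set after an update."""
    norm = np.linalg.norm(w)
    return w * (max_norm / norm) if norm > max_norm else w

for _ in range(200):
    grad = X.T @ (X @ w - y) / len(y)   # full-batch gradient
    w = w - 0.05 * grad                 # gradient step
    w = constrain_norm(w)               # apply the constraint, as a param_constrainer would
```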
-
Hi all,
I am currently trying to understand how to do semi-supervised learning with gpytorch based on this paper: https://arxiv.org/pdf/1805.10407.pdf
I would like to set up a mini-batch approach…
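For the mini-batch part on its own (leaving aside the semi-supervised objective from the paper), the usual GPyTorch pattern is an approximate/SVGP model trained with `VariationalELBO` and a `DataLoader`; a rough sketch, with toy data standing in for the real inputs:
``` python
import torch
import gpytorch
from torch.utils.data import TensorDataset, DataLoader

class SVGPModel(gpytorch.models.ApproximateGP):
    def __init__(self, inducing_points):
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(0)
        )
        variational_strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_distribution, learn_inducing_locations=True
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

# Toy data standing in for the labelled part of the dataset.
train_x = torch.linspace(0, 1, 500).unsqueeze(-1)
train_y = torch.sin(6 * train_x).squeeze() + 0.1 * torch.randn(500)

model = SVGPModel(inducing_points=train_x[:20])
likelihood = gpytorch.likelihoods.GaussianLikelihood()
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=train_y.size(0))
optimizer = torch.optim.Adam(
    list(model.parameters()) + list(likelihood.parameters()), lr=0.01
)
loader = DataLoader(TensorDataset(train_x, train_y), batch_size=64, shuffle=True)

model.train()
likelihood.train()
for epoch in range(10):
    for x_batch, y_batch in loader:       # one ELBO step per mini-batch
        optimizer.zero_grad()
        loss = -mll(model(x_batch), y_batch)
        loss.backward()
        optimizer.step()
```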
-
Great implementation from the paper!
The paper used mini-batch gradient descent with a batch_size of 10, but I can't seem to find it in the training step. It seems that you are processing 1 observation at a time…
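In case it helps, here is a generic sketch of how a PyTorch-style training step can be switched from one observation at a time to mini-batches of 10 with a `DataLoader` (this is not the repo's code, just an illustration):
``` python
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader

X = torch.randn(500, 8)           # placeholder data
y = torch.randn(500, 1)
loader = DataLoader(TensorDataset(X, y), batch_size=10, shuffle=True)

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(20):
    for x_batch, y_batch in loader:           # 10 observations per step
        optimizer.zero_grad()
        loss = loss_fn(model(x_batch), y_batch)
        loss.backward()                       # gradient averaged over the mini-batch
        optimizer.step()
```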
-
I may be missing something, but it looks as if you're doing full gradient descent (i.e. using the entire dataset) in the SGD class. SGD should use just a single example selected at random.
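To make the distinction concrete, a minimal NumPy sketch for least squares (not the code under discussion): full gradient descent averages the gradient over every example at each step, while SGD uses the gradient of one randomly chosen example.
``` python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=1000)

# Full (batch) gradient descent: every step uses the entire dataset.
w_full = np.zeros(3)
for _ in range(100):
    grad = X.T @ (X @ w_full - y) / len(y)
    w_full -= 0.1 * grad

# Stochastic gradient descent: every step uses one example chosen at random.
w_sgd = np.zeros(3)
for t in range(5000):
    i = rng.integers(len(y))
    grad_i = (X[i] @ w_sgd - y[i]) * X[i]
    w_sgd -= 0.01 * grad_i
```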
-
On a contour plot, probably. Sampling far away from the minimum should show most minibatch gradients pointing in roughly the right direction, while closer to the minimum they do not agree as well. Can do this with varyi…
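One way to make that plot, as a rough sketch (least-squares loss, mini-batches of 10; all names and sizes here are placeholders): draw the contours of the full-batch loss, then draw a handful of mini-batch descent directions at a point far from the minimum and at a point near it.
``` python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# 2-D least-squares problem: f(w) = (1/2N) * ||Xw - y||^2, minimum near w* = (1, -1).
N = 200
X = rng.normal(size=(N, 2))
y = X @ np.array([1.0, -1.0]) + 0.5 * rng.normal(size=N)

# Contours of the full-batch loss.
W1, W2 = np.meshgrid(np.linspace(-3, 3, 100), np.linspace(-3, 3, 100))
Z = np.zeros_like(W1)
for i in range(W1.shape[0]):
    for j in range(W1.shape[1]):
        r = X @ np.array([W1[i, j], W2[i, j]]) - y
        Z[i, j] = 0.5 * np.mean(r ** 2)
plt.contour(W1, W2, Z, levels=20)

# Mini-batch descent directions at a point far from the minimum (red) vs. near it (blue).
for w, color in [(np.array([-2.5, 2.5]), "red"), (np.array([1.1, -0.9]), "blue")]:
    for _ in range(15):
        idx = rng.choice(N, size=10, replace=False)      # mini-batch of 10
        g = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
        g = -g / np.linalg.norm(g)                       # unit descent direction
        plt.arrow(w[0], w[1], 0.4 * g[0], 0.4 * g[1], color=color, head_width=0.05)

plt.xlabel("w1")
plt.ylabel("w2")
plt.show()
```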
-
### Expected behavior
I hope to build my own photonic neural network and successfully start training it
### Actual behavior
In fact, once I ran it, I got the following error and was unable to train…
-
I have been using your code (thank you very much for the nice repo), and I wonder if line 85 of som.py should be removed. The line in question simply has:
`delta.div_(batch_size)`
But I suspect …
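I can't see the surrounding code from here, so purely as an illustration of what that division does: if `delta` is accumulated as a sum over the batch, dividing by `batch_size` turns it into a per-sample average; if `delta` is already normalized (e.g. by the summed neighborhood weights), the division would just shrink the effective learning rate. A small stand-in sketch (not the repo's `som.py`):
``` python
import torch

# Illustrative batch update for a SOM-style weight matrix (not the repo's som.py).
batch_size, n_units, dim = 32, 10, 3
x = torch.randn(batch_size, dim)                 # one mini-batch of inputs
weights = torch.randn(n_units, dim)              # SOM unit weights
h = torch.rand(batch_size, n_units)              # neighborhood weights per (sample, unit)

# delta[j] accumulates h[i, j] * (x[i] - w[j]) summed over the batch ...
delta = (h.unsqueeze(-1) * (x.unsqueeze(1) - weights.unsqueeze(0))).sum(dim=0)

# ... so dividing by batch_size turns the summed update into a per-sample average.
# If delta were already an average, this line would only scale the update down
# by another factor of batch_size.
delta.div_(batch_size)
weights += 0.1 * delta
```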