torch / optim
A numeric optimization package for Torch.
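Many of the issues below concern the package's closure-based API, so a minimal usage sketch may help orient readers. This is an illustrative example, not taken from the repository: it minimizes f(x) = ||x||^2 with `optim.sgd`, whose signature is `optim.sgd(opfunc, x, config, state)` where `opfunc` returns the loss and the gradient. The learning rate and iteration count are arbitrary choices.

```lua
-- Minimal optim.sgd sketch: minimize f(x) = ||x||^2.
require 'torch'
require 'optim'

local x = torch.Tensor{3, 4}            -- parameters to optimize, updated in place
local sgdState = {learningRate = 0.1}   -- config/state table reused across calls

-- opfunc must return f(x) and df/dx for the current parameters
local function feval(x)
  return x:dot(x), x * 2
end

for i = 1, 100 do
  optim.sgd(feval, x, sgdState)
end
-- x shrinks toward the zero vector as the loss decreases
```

The same closure works unchanged with the other first-order optimizers in the package (e.g. `optim.adam`, `optim.rmsprop`), which share this calling convention.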
Other · 197 stars · 152 forks
Issues
#166  getParameters problem (mhmtsarigul, opened 6 years ago, 0 comments)
#165  Fixed the link to the Adam research paper (ProGamerGov, closed 6 years ago, 1 comment)
#164  Optim does not update weights on big MLP network (viktorheli, closed 7 years ago, 3 comments)
#163  In SGD, why set default dampening to momentum? (YurongYou, opened 7 years ago, 1 comment)
#162  adding gradient clipping support for SGD (allanj, opened 7 years ago, 0 comments)
#161  optim.checkgrad dimensions dependent (bermanmaxim, opened 7 years ago, 0 comments)
#160  Problem using adamax under lua5.2 (Mathieu-Seurin, opened 7 years ago, 0 comments)
#159  FTML: Follow the Moving Leader (szhengac, opened 7 years ago, 1 comment)
#158  Slight modification of adam.lua causing different training losses with the same seed (szhengac, opened 7 years ago, 0 comments)
#157  centered rmsprop (alirezag, opened 7 years ago, 2 comments)
#156  Confusion Matrix for Full Convolution Network (varghesealex90, opened 7 years ago, 0 comments)
#155  Torch - Using optim package with CNN (syamprasadkr, opened 7 years ago, 0 comments)
#154  config.dampening in optim.sgd? (sumo8291, opened 7 years ago, 0 comments)
#153  No plot windows when running optim.plot from inside a docker (rremani, opened 7 years ago, 0 comments)
#152  Is it possible to optimize a table of modules together? (squidszyd, opened 7 years ago, 2 comments)
#151  Update adam.lua (Amir-Arsalan, opened 7 years ago, 1 comment)
#150  Update algos.md (Amir-Arsalan, closed 7 years ago, 1 comment)
#149  Question on rmsprop implementation (syoungbaak, closed 7 years ago, 1 comment)
#148  Problem Solved. Setting learningRate and learningRateDecay in ADAM does not work. (muhanzhang, closed 7 years ago, 0 comments)
#147  <optim.lbfgs> function value changing less than tolX (Naruto-Sasuke, opened 7 years ago, 0 comments)
#146  Checking gradients on GPU? (juesato, closed 7 years ago, 1 comment)
#145  Feature Request: Adasecant (ltrottier, closed 5 years ago, 0 comments)
#144  EVE: stochastic gradient descent with feedback (ketranm, opened 8 years ago, 2 comments)
#143  is optim module use cublas (austingg, closed 8 years ago, 1 comment)
#142  Fixed misspelling (ibmua, closed 8 years ago, 1 comment)
#141  ConfusionMatrix: Stochastic bug with batchAdd (akhilsbehl, closed 8 years ago, 1 comment)
#140  How to get top-5 or top-3 accuracy by ConfusionMatrix? (arashno, opened 8 years ago, 1 comment)
#139  Optim method runs on multi-core by default? (giahung24, opened 8 years ago, 0 comments)
#138  Fix polyinterp to let lbfgs with lswolfe work on GPU (DmitryUlyanov, closed 8 years ago, 1 comment)
#137  Update intro.md (Atcold, closed 8 years ago, 3 comments)
#136  Fix typos (wydwww, closed 8 years ago, 1 comment)
#135  Enable local doc for inline help (Atcold, closed 8 years ago, 1 comment)
#134  move optim doc from nn (hughperkins, closed 8 years ago, 0 comments)
#133  Allow setting learningRate to 0. (jonathanasdf, closed 8 years ago, 0 comments)
#132  Prevent displaying of plots and documentation for it (codeAC29, closed 8 years ago, 1 comment)
#131  Spelling mistake. (korymath, closed 8 years ago, 1 comment)
#130  Reverted to zero mean squared values init (iassael, closed 8 years ago, 5 comments)
#129  What is the correct way continue training after xth epoch? (euwern, opened 8 years ago, 1 comment)
#128  Please add ability to plot only specified categories. (tlkvstepan, opened 8 years ago, 0 comments)
#127  Copy C1 value, in case it is a Tensor reference (gcinbis, closed 8 years ago, 1 comment)
#126  Reduce numerical errors. (gcinbis, closed 8 years ago, 1 comment)
#125  multiple plots with optim.logger (YitzhakSp, closed 8 years ago, 2 comments)
#124  One-line Logger initialisation (Atcold, closed 8 years ago, 1 comment)
#123  Add Differential Evolution (soumith, closed 8 years ago, 0 comments)
#122  Add LearningRateDecay to Adam (Cadene, closed 8 years ago, 1 comment)
#121  Documentation and code refactoring (Atcold, closed 8 years ago, 2 comments)
#120  There is no L1 weightDecay in Torch. (amirkhango, opened 8 years ago, 1 comment)
#119  add weight decay support to adamax (chenb67, closed 8 years ago, 1 comment)
#118  add weight decay support to adam (gcheron, closed 8 years ago, 1 comment)
#117  Init rmsprop mean square state 'm' with 1 instead 0 (andreaskoepf, closed 8 years ago, 0 comments)