josephjaspers / blackcat_tensors

Matrix-vector library designed for neural network construction: CUDA (GPU) support, OpenMP (multithreaded CPU) support, partial BLAS support, an expression-template-based implementation that generates PTX code identical to hand-written kernels, and support for auto-differentiation.

Neural_Networks: Add Optimizers [ADAM, Momentum, etc] #51

Closed · josephjaspers closed this issue 4 years ago

xinsuinizhuan commented 4 years ago

Could you add the optimizers first, to reduce the number of training epochs?

josephjaspers commented 4 years ago

I can try to implement them next; however, I am not very knowledgeable about ADAM, so I will have to learn how it works. I can easily add momentum and some of the simpler optimizers to start, though.
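
For reference, the classic momentum update looks like the following minimal standalone C++ sketch; it is illustrative only, and the function and variable names are not blackcat_tensors APIs:

    #include <cstddef>
    #include <vector>

    // SGD with momentum: the velocity vector accumulates an exponentially
    // decaying sum of past gradients, smoothing the descent direction.
    void momentum_update(std::vector<double>& weights,
                         std::vector<double>& velocity,
                         const std::vector<double>& gradient,
                         double learning_rate, double beta /* e.g. 0.9 */) {
        for (std::size_t i = 0; i < weights.size(); ++i) {
            velocity[i] = beta * velocity[i] + gradient[i]; // accumulate
            weights[i] -= learning_rate * velocity[i];      // descend
        }
    }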

xinsuinizhuan commented 4 years ago

OK. Thank you very much. I quietly test your newest code every time you update it.

josephjaspers commented 4 years ago

initial work: (not in master) https://github.com/josephjaspers/blackcat_tensors/commits/add_optimizers

xinsuinizhuan commented 4 years ago

> initial work: (not in master) https://github.com/josephjaspers/blackcat_tensors/commits/add_optimizers

Thank you very much. It's late, you should go to sleep. I tested it, but something is wrong: the loss stays at a high level and the predicted output is very bad, as shown in: error.txt

josephjaspers commented 4 years ago

It actually does work; for the LSTM example, the learning rate was too high.

(Before it was .03, now it's .003.) It works with better accuracy than before now!

xinsuinizhuan commented 4 years ago

> It actually does work; for the LSTM example, the learning rate was too high.
>
> (Before it was .03, now it's .003.) It works with better accuracy than before now!

Same network structure, same learning_rate, and same number of epochs, but the new code is worse than before. My structure:

    Neural Network architecture:
    LSTM: inputs: 960 outputs: 1024
    LSTM: inputs: 1024 outputs: 512
    LSTM: inputs: 512 outputs: 216
    FeedForward: inputs: 216 outputs: 192
    Logistic: inputs: 192 outputs: 192
    Output_Layer: inputs: 192 outputs: 192

My learning rate: 0.001. My epochs: 5000. The result:

before version: current epoch: 4999 Batch index: 10000 loss: [0.073458] predict MAPE loss: 0.0178553 single_predict output predict data------------------------------------ [0.342952, 0.331565, 0.344145, 0.313788, 0.339053, 0.363001, 0.349856, 0.319215, 0.337973, 0.333021, 0.308690, 0.349658, 0.360213, 0.321968, 0.348099, 0.350301, 0.332935, 0.352472, 0.373385, 0.345374, 0.331337, 0.372727, 0.354728, 0.329701, 0.336493, 0.374931, 0.345098, 0.324367, 0.357304, 0.316814, 0.362576, 0.316121, 0.335063, 0.326771, 0.327515, 0.341742, 0.386912, 0.382068, 0.396946, 0.383211, 0.349470, 0.353930, 0.367731, 0.327579, 0.330633, 0.329314, 0.330768, 0.345761, 0.317209, 0.345449, 0.345215, 0.364815, 0.327770, 0.330963, 0.336601, 0.344984, 0.299147, 0.326700, 0.369856, 0.336990, 0.357805, 0.345081, 0.380037, 0.368887, 0.357391, 0.348674, 0.297832, 0.320218, 0.354936, 0.330757, 0.330443, 0.368000, 0.324340, 0.367170, 0.387133, 0.333894, 0.333704, 0.325356, 0.304129, 0.344713, 0.333453, 0.364044, 0.348917, 0.324528, 0.331773, 0.322457, 0.337069, 0.345514, 0.338658, 0.349513, 0.357718, 0.321644, 0.340884, 0.357588, 0.995158, 0.320776, 0.349578, 0.318848, 0.307689, 0.310863, 0.345109, 0.359386, 0.317287, 0.346555, 0.342095, 0.368950, 0.317245, 0.329368, 0.373585, 0.362938, 0.367108, 0.334626, 0.352733, 0.368143, 0.342585, 0.304410, 0.321962, 0.355201, 0.336678, 0.352226, 0.368904, 0.334853, 0.302150, 0.349645, 0.320489, 0.321991, 0.341718, 0.351938, 0.315358, 0.333289, 0.331443, 0.336796, 0.320470, 0.375631, 0.387909, 0.341988, 0.315833, 0.349916, 0.319812, 0.365827, 0.317233, 0.383178, 0.336225, 0.337645, 0.355401, 0.346442, 0.347053, 0.333808, 0.334982, 0.315580, 0.344343, 0.364117, 0.352618, 0.370648, 0.331177, 0.331983, 0.352985, 0.302567, 0.330048, 0.368982, 0.294063, 0.370114, 0.356124, 0.325751, 0.372448, 0.363055, 0.342777, 0.346053, 0.321725, 0.314300, 0.337262, 0.345282, 0.003251, 0.348394, 0.347103, 0.359142, 0.313090, 0.363430, 0.366305, 0.348948, 0.369255, 0.319359, 0.351623, 0.319092, 0.378411, 0.328176, 0.356937, 0.317264, 0.318078, 0.336041, 0.389203, 0.338443]

no-momentum version: current epoch: 4999 Batch index: 10000 loss: [0.128336] predict MAPE loss: 0.538509 single_predict output predict data------------------------------------ [0.356263, 0.329868, 0.369443, 0.358855, 1.000000, 0.365884, 0.383430, 0.341375, 0.369036, 0.352604, 0.413270, 0.323056, 0.378969, 0.389287, 0.409936, 0.335801, 0.373105, 0.999998, 0.296095, 0.501106, 0.394674, 0.325781, 0.423549, 0.344096, 0.429885, 0.422515, 0.414014, 0.350205, 0.404204, 0.359709, 0.358751, 0.398583, 0.000001, 0.999998, 0.327269, 0.399669, 0.351790, 0.356512, 0.386971, 0.352279, 0.346835, 0.328083, 0.381203, 0.363450, 0.360690, 0.359560, 0.347594, 0.381646, 0.343024, 0.327133, 0.364130, 0.000003, 0.315925, 0.348360, 0.368834, 0.000000, 0.330116, 0.386563, 0.379365, 0.332552, 0.339366, 0.000000, 0.364410, 0.346393, 0.336573, 0.386028, 0.322397, 0.383205, 0.351267, 0.360654, 0.344162, 0.385676, 0.308851, 0.397014, 0.348579, 0.370627, 0.351199, 0.322650, 0.294061, 0.394866, 0.364553, 0.369250, 0.342936, 0.384861, 0.336866, 0.363287, 0.315907, 0.000000, 0.378148, 0.377017, 0.000005, 0.403078, 0.368138, 0.353386, 0.351026, 0.396757, 0.337196, 0.362332, 0.332869, 0.383922, 0.386845, 0.372237, 0.263792, 0.511591, 0.356716, 0.351936, 0.364461, 0.327098, 0.372114, 0.332307, 0.287283, 0.417474, 0.342997, 0.379110, 0.335340, 0.393140, 0.273445, 0.357699, 0.352577, 0.412842, 0.346703, 0.000015, 0.248719, 0.508076, 0.371303, 0.342021, 0.368621, 0.416319, 0.367194, 0.401179, 0.415562, 0.373637, 0.397594, 0.403216, 0.375212, 0.389502, 0.330582, 0.392924, 0.000002, 0.393997, 0.375050, 0.409952, 0.355156, 0.370032, 0.349698, 0.404753, 0.359562, 0.323416, 0.349174, 0.351384, 0.366222, 0.999999, 0.375662, 0.000000, 0.334693, 0.382490, 0.337119, 0.374759, 0.371062, 0.000001, 0.329710, 0.380946, 0.354002, 0.369183, 0.369424, 0.361971, 0.344271, 0.357828, 0.337418, 0.355956, 0.370589, 0.322309, 0.371663, 0.371106, 0.373495, 0.342776, 0.345058, 0.289087, 0.350988, 0.288776, 0.335970, 0.301860, 0.354545, 0.290865, 0.360246, 0.295149, 0.336597, 0.275662, 0.349999, 0.280627, 0.363627, 0.330896]

xinsuinizhuan commented 4 years ago

I tested the mnist_test_current example:

current epoch: 9 Batch index: 18500 loss: [0.095232] Batch index: 18600 loss: [0.077738] Batch index: 18700 loss: [0.114831] Batch index: 18800 loss: [0.054145] Batch index: 18900 loss: [0.119652] Batch index: 19000 loss: [0.141203] Batch index: 19100 loss: [0.130746] Batch index: 19200 loss: [0.156694] Batch index: 19300 loss: [0.094504] Batch index: 19400 loss: [0.133474] Batch index: 19500 loss: [0.101842] Batch index: 19600 loss: [0.167789] Batch index: 19700 loss: [0.137817] Batch index: 19800 loss: [0.095194] Batch index: 19900 loss: [0.092622] Batch index: 20000 loss: [0.168301] Batch index: 20100 loss: [0.119566] Batch index: 20200 loss: [0.119978] Batch index: 20300 loss: [0.098220] Batch index: 20400 loss: [0.068824] training time: 112.671 testing...

[0.000000, 0.999975, 0.000005, 0.000000, 0.000001, 0.000003, 0.000000, 0.000002, 0.000014, 0.000000] [0.991836, 0.000030, 0.000043, 0.000001, 0.000010, 0.000489, 0.007515, 0.000001, 0.000065, 0.000010] [0.000000, 0.999965, 0.000018, 0.000000, 0.000002, 0.000001, 0.000000, 0.000001, 0.000008, 0.000005] [0.000131, 0.000045, 0.017535, 0.000047, 0.952868, 0.000033, 0.019304, 0.001832, 0.000006, 0.008200] [0.999241, 0.000001, 0.000008, 0.000000, 0.000000, 0.000597, 0.000101, 0.000000, 0.000050, 0.000001] [0.997438, 0.000006, 0.000305, 0.000001, 0.000013, 0.000744, 0.001426, 0.000000, 0.000034, 0.000033] [0.000000, 0.000074, 0.000069, 0.000001, 0.001296, 0.000000, 0.000000, 0.994332, 0.000000, 0.004228] [0.000404, 0.000168, 0.108935, 0.083023, 0.051344, 0.656785, 0.000253, 0.000582, 0.000407, 0.098099] [0.000028, 0.000092, 0.003733, 0.187900, 0.001142, 0.406175, 0.000000, 0.000082, 0.000230, 0.400618] [0.000001, 0.000050, 0.000176, 0.967732, 0.000147, 0.031186, 0.000000, 0.000001, 0.000009, 0.000697]

current epoch: 9 Batch index: 18500 loss: [0.153518] Batch index: 18600 loss: [0.121651] Batch index: 18700 loss: [0.110696] Batch index: 18800 loss: [0.131797] Batch index: 18900 loss: [0.104950] Batch index: 19000 loss: [0.168174] Batch index: 19100 loss: [0.128729] Batch index: 19200 loss: [0.160364] Batch index: 19300 loss: [0.139499] Batch index: 19400 loss: [0.149552] Batch index: 19500 loss: [0.133199] Batch index: 19600 loss: [0.142955] Batch index: 19700 loss: [0.150852] Batch index: 19800 loss: [0.136140] Batch index: 19900 loss: [0.073126] Batch index: 20000 loss: [0.175370] Batch index: 20100 loss: [0.156097] Batch index: 20200 loss: [0.167977] Batch index: 20300 loss: [0.117402] Batch index: 20400 loss: [0.109422] training time: 99.9595 testing... [0.000302, 0.956791, 0.009048, 0.000092, 0.000128, 0.009201, 0.000043, 0.009225, 0.009494, 0.005675] [0.994560, 0.000001, 0.000014, 0.000007, 0.004701, 0.000276, 0.000034, 0.000005, 0.000072, 0.000331] [0.000001, 0.999994, 0.000001, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000003] [0.005460, 0.000002, 0.058666, 0.000263, 0.272481, 0.006074, 0.242127, 0.243626, 0.000095, 0.171205] [0.997296, 0.000000, 0.000005, 0.000004, 0.001986, 0.000466, 0.000006, 0.000001, 0.000089, 0.000146] [0.929288, 0.000034, 0.007074, 0.000180, 0.031476, 0.013697, 0.002414, 0.000395, 0.002303, 0.013139] [0.000000, 0.000521, 0.000041, 0.000068, 0.000656, 0.000325, 0.000003, 0.967088, 0.000034, 0.031264] [0.004739, 0.001908, 0.003696, 0.601413, 0.000493, 0.351347, 0.000029, 0.017700, 0.000793, 0.017883] [0.000486, 0.000004, 0.000027, 0.006749, 0.003891, 0.768541, 0.000006, 0.000015, 0.002091, 0.218191] [0.000000, 0.000929, 0.002141, 0.994526, 0.000001, 0.001835, 0.000000, 0.000045, 0.000016, 0.000508]

The loss seems better, but the predicted output does not seem much better than before.

xinsuinizhuan commented 4 years ago

I don't know why it is worse than before with my network structure and data.

josephjaspers commented 4 years ago

Are you using the latest version?

In the newest version, "get_string_architecture" now returns the input_shape and optimizer.

FeedForward:
    input_shape: [784]
    optimizer: Momentum
Tanh:
    input_shape: [256]
FeedForward:
    input_shape: [256]
    optimizer: Momentum
SoftMax:
    input_shape: [10]
Output_Layer:
    input_shape: [10]

If you are still having issues, you could try lowering the learning rate; momentum tends to work better with a smaller learning rate (compared to SGD).
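
As a rough intuition for why: with the accumulation v ← β·v + g, the velocity approaches g / (1 − β) for a steady gradient g, so with the usual β = 0.9 each momentum step is effectively about 10× an SGD step at the same learning rate, and lowering the rate by roughly that factor compensates.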

xinsuinizhuan commented 4 years ago

> Are you using the latest version?
>
> In the newest version, "get_string_architecture" now returns the input_shape and optimizer.
>
>     FeedForward:
>         input_shape: [784]
>         optimizer: Momentum
>     Tanh:
>         input_shape: [256]
>     FeedForward:
>         input_shape: [256]
>         optimizer: Momentum
>     SoftMax:
>         input_shape: [10]
>     Output_Layer:
>         input_shape: [10]
>
> If you are still having issues, you could try lowering the learning rate; momentum tends to work better with a smaller learning rate (compared to SGD).

Thank you. When I set the learning rate to 0.0005, it is better than before.

josephjaspers commented 4 years ago

Added Adam Optimizer: https://github.com/josephjaspers/blackcat_tensors/commit/ae5f458c9b2d691f1cb1d97094bc19139a0d9bd1
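
For reference, the standard Adam update rule (Kingma & Ba) that an Adam optimizer implements, as a minimal standalone C++ sketch; this is not the code from the commit, and all names here are illustrative:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Adam keeps per-parameter estimates of the gradient's first moment (m)
    // and second moment (v), with bias correction for zero initialization.
    void adam_update(std::vector<double>& weights,
                     std::vector<double>& m, std::vector<double>& v,
                     const std::vector<double>& gradient,
                     long t, // step count, starting at 1
                     double lr = 0.001, double beta1 = 0.9,
                     double beta2 = 0.999, double eps = 1e-8) {
        for (std::size_t i = 0; i < weights.size(); ++i) {
            m[i] = beta1 * m[i] + (1 - beta1) * gradient[i];
            v[i] = beta2 * v[i] + (1 - beta2) * gradient[i] * gradient[i];
            const double m_hat = m[i] / (1 - std::pow(beta1, t)); // bias-corrected
            const double v_hat = v[i] / (1 - std::pow(beta2, t));
            weights[i] -= lr * m_hat / (std::sqrt(v_hat) + eps);
        }
    }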

xinsuinizhuan commented 4 years ago

OK. Let me test it!

josephjaspers commented 4 years ago

TODO:

Add the remaining optimizers listed here: https://pytorch.org/docs/stable/optim.html

josephjaspers commented 4 years ago

Added