sofiafh closed this issue 7 years ago.
I have trained the code and I got this result:

...
training iter: 296, test accuracy: 0.168306753039, loss: -0.132738232613
training iter: 297, test accuracy: 0.168306753039, loss: -0.1305783391
training iter: 298, test accuracy: 0.168306753039, loss: -0.128449633718
training iter: 299, test accuracy: 0.168306753039, loss: -0.126348420978

final test accuracy: 0.168306753039
best epoch's test accuracy: 0.21241940558
Which TensorFlow version are you using?
Maybe you have the same problem as in the following past issue, trying to go from TensorFlow 0.11 to TensorFlow 1.0: https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition/issues/8
I am considering updating the code to TensorFlow 1.0 myself, maybe in a few months. TensorFlow's RNN API kept changing from 0.8 to 0.11 and then to 1.0, which is starting to get annoying; hopefully TensorFlow 1.0 will be more stable than the previous releases.
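For reference, here is a rough sketch of the kind of RNN API change involved when porting such LSTM code from TensorFlow 0.x to 1.0 (the shapes below are hypothetical placeholders, and this is not the exact diff this repository needs):

```python
import tensorflow as tf

# Hypothetical shapes, just for illustration:
n_steps, n_input, n_hidden = 128, 9, 32

x = tf.placeholder(tf.float32, [None, n_steps, n_input])

# TF 0.x:  x_seq = tf.split(0, n_steps, ...)          # (axis, num, value)
# TF 1.0:  argument order changed to (value, num_or_size_splits, axis):
x_seq = tf.split(tf.reshape(tf.transpose(x, [1, 0, 2]), [-1, n_input]), n_steps, 0)

# TF 0.x:  cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden)
#          outputs, states = tf.nn.rnn(cell, x_seq, dtype=tf.float32)
# TF 1.0:  the cell and the static RNN moved under tf.contrib.rnn:
cell = tf.contrib.rnn.BasicLSTMCell(n_hidden)
outputs, states = tf.contrib.rnn.static_rnn(cell, x_seq, dtype=tf.float32)
```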
The neural network is evaluated on the test set. If you want to be sure that everything works well, the cleanest approach would be to decouple the testing from the training, but that would only be for verification purposes and would be inconvenient for improving the model, since you would no longer see the test results over time during training.
To decouple testing from training, you could remove the test evaluation from the training loop and only load X_test and y_test after training; that way you would avoid any contamination of the training, if there were any. But I already tested this manually, and the training is not contaminated by the test data, because the weight-update operation is only run when feeding the network with training data.
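For illustration, a fully decoupled loop could look roughly like this (a minimal sketch reusing the graph ops and placeholders quoted further down in this thread; `next_training_batch` and `training_iters` are hypothetical stand-ins, not names from the repository):

```python
# Sketch: training loop with no test-set evaluation inside it.
# Assumes the graph ops (optimizer, cost, accuracy), the placeholders (x, y),
# and the data/helpers (X_test, y_test, one_hot) are already defined.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for step in range(training_iters):
        batch_xs, batch_ys = next_training_batch(step)  # hypothetical batching helper
        sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})

    # The test data is touched only once, after all weight updates are done:
    test_loss, test_acc = sess.run(
        [cost, accuracy],
        feed_dict={x: X_test, y: one_hot(y_test)}
    )
    print("final test accuracy: {}, loss: {}".format(test_acc, test_loss))
```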
If by "testing" you mean seeing how well it transfers to another dataset, I invite you to look at this repository where the project is first tested on another dataset and then the architecture of it is pushed further: https://github.com/guillaume-chevalier/HAR-stacked-residual-bidir-LSTMs
Updated to Python 3 and TensorFlow 1.0.0
Hi @guillaume-chevalier, I want to know why the testing does not contaminate the training. Is it true that every call to sess.run() updates the model?
Hi,
While coding this project, this question bothered me a little at the beginning, but I took the time to validate it and there is no contamination. Let me explain why:
During the training, we do:
```python
# Fit training using batch data
_, loss, acc = sess.run(
    [optimizer, cost, accuracy],
    feed_dict={
        x: batch_xs,
        y: batch_ys
    }
)
```
Still during the training, we test with the test data by doing the same thing, but without fetching the optimizer TensorFlow op:
```python
# Evaluation on the test set (no learning made here - just evaluation for diagnosis)
loss, acc = sess.run(
    [cost, accuracy],
    feed_dict={
        x: X_test,
        y: one_hot(y_test)
    }
)
```
That said, it is the act of fetching the optimizer TensorFlow operation that updates the weights with respect to the inputs given in the feed_dict. The optimizer is not fetched at test time, so the learning is not corrupted. We don't use any moving averages nor TensorFlow features that update at unspecified times, so we're safe.
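To illustrate this in isolation, here is a tiny, self-contained check (a hypothetical TF 1.x example, not code from this repository) showing that evaluating loss-style ops leaves the weights untouched, while fetching the training op moves them:

```python
import numpy as np
import tensorflow as tf

w = tf.Variable(1.0)
x_ph = tf.placeholder(tf.float32)
loss = tf.square(w * x_ph - 2.0)
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    before = sess.run(w)
    sess.run(loss, feed_dict={x_ph: 1.0})      # "test-style" call: no optimizer fetched
    print(np.isclose(before, sess.run(w)))     # True: weight unchanged

    sess.run(train_op, feed_dict={x_ph: 1.0})  # "train-style" call: optimizer fetched
    print(np.isclose(before, sess.run(w)))     # False: weight updated
```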
As an example, if I remove the fetch of the optimizer in the training phase, the results of the training look like this:

![image](https://cloud.githubusercontent.com/assets/11862328/26536525/dc9706e4-4405-11e7-9f1a-4a5ee9871f72.png)

The model gets stuck at an accuracy of 0.18, hence approximately 1/6th: it outputs random predictions and does not learn.
I have also tested the model while running the test only once at the end rather than continuously, and the results are the same!
Thanks! Another thing I am wondering about is whether fetching the same accuracy and cost ops for both training and testing influences the performance. Should they be separated into accuracy_train and accuracy_test (and likewise for cost)? By which I mean:
```python
# Fit training using batch data
_, loss, acc = sess.run(
    [optimizer, cost_train, accuracy_train],
    feed_dict={
        x: batch_xs,
        y: batch_ys
    }
)
```
and
```python
# Evaluation on the test set (no learning made here - just evaluation for diagnosis)
loss, acc = sess.run(
    [cost_test, accuracy_test],
    feed_dict={
        x: X_test,
        y: one_hot(y_test)
    }
)
```
Thank you for the code. Have you tried running it on a Raspberry Pi, or do you have any Python code that I could use to do some tests? Thank you.