binhhoangtieu opened this issue 6 years ago
Step 0: 9.869 sec
Training Data Eval: accuracy: 1.00000
Validation Data Eval: accuracy: 1.00000
Step 1: 2.662 sec
Step 2: 2.699 sec
Step 3: 2.657 sec
Step 4: 2.689 sec
Step 5: 2.695 sec
Step 6: 2.661 sec
Step 7: 2.743 sec
Step 8: 2.775 sec
Step 9: 2.694 sec
Step 10: 2.696 sec
Training Data Eval: accuracy: 1.00000
Validation Data Eval: accuracy: 1.00000
Step 11: 2.688 sec
Step 12: 2.844 sec
Step 13: 2.736 sec
Step 14: 2.777 sec
Step 15: 2.744 sec
Step 16: 2.784 sec
Step 17: 2.730 sec
Step 18: 2.717 sec
Step 19: 2.792 sec
Step 20: 2.677 sec
Training Data Eval: accuracy: 1.00000
Validation Data Eval: accuracy: 1.000
I think we have the same problem.
Have you tried changing your batch_size?
It was 8 at first; I changed it to 10, but the accuracy is still 1.00000. @binhhoangtieu
Did you solve this by changing batch_size? @binhhoangtieu
Yes, that solved my problem: the accuracy now changes after each iteration. I'm not sure about your case.
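For reference, here is a rough sketch of the usual TF 1.x accuracy op this kind of training script tends to use. None of the names below are taken from this repo; the point is only that with a small batch_size the evaluated accuracy is very coarse (with batch_size=10 it can only move in steps of 0.1), so a constant 1.00000 may just mean the single eval batch happens to be trivially classified or drawn from one class.

import tensorflow as tf

# Sketch only: per-batch accuracy in TF 1.x, with assumed shapes/values.
batch_size = 10
num_classes = 101  # e.g. UCF101; adjust to your dataset

logits = tf.placeholder(tf.float32, [batch_size, num_classes])
labels = tf.placeholder(tf.int64, [batch_size])

correct = tf.nn.in_top_k(logits, labels, 1)              # per-example hit/miss
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))  # coarse for small batches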
Hello @cckenny and @binhhoangtieu, I get a similar error: accuracy is always zero. I changed the batch size as you suggested, but it is still 0. I have only changed some parts of my code, shown below. Any suggestions?
import os
import tensorflow as tf

init = tf.global_variables_initializer()

# Create a saver for writing training checkpoints.
# Edit: add ops to save and restore only the variables excluding the output
# layer, collected in the list "varlist1".
saver = tf.train.Saver(varlist1)

# Create a session for running Ops on the Graph.
sess = tf.Session(
    config=tf.ConfigProto(allow_soft_placement=True)
)
sess.run(init)

if os.path.isfile(model_filename) and use_pretrained_model:
    saver.restore(sess, model_filename)  # Edit
    print("model is restored")
The output:

. . .
Training Data Accuracy: 0.00000
Validation Data Accuracy: 0.00000
Step 2241: 7.663 sec
Training Data Accuracy: 0.00000
Validation Data Accuracy: 0.00000
Step 2242: 7.619 sec
Training Data Accuracy: 0.00000
Validation Data Accuracy: 0.00000
Step 2243: 7.686 sec
Training Data Accuracy: 0.00000
Validation Data Accuracy: 0.00000
Step 2244: 8.781 sec
Training Data Accuracy: 0.00000
Validation Data Accuracy: 0.00000
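Without knowing your exact model, a common pitfall with partial restoring is which variables actually end up in varlist1 and whether the excluded ones get initialized. A hedged sketch of one way to build it is below; the scope prefix 'out' is an assumption (use whatever your final layer's variables are named), and model_filename / use_pretrained_model are the same names from your snippet above.

import os
import tensorflow as tf

# Sketch only: collect every variable except the output layer's.
varlist1 = [v for v in tf.global_variables()
            if not v.name.startswith('out')]
saver = tf.train.Saver(varlist1)

sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
# Initialize everything first, then restore the subset from the checkpoint,
# so the excluded output-layer variables start from their initializers
# instead of staying uninitialized.
sess.run(tf.global_variables_initializer())
if os.path.isfile(model_filename) and use_pretrained_model:
    saver.restore(sess, model_filename)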
@cckenny @binhhoangtieu have you solved the problem? I hit the same problem as you.
Step 5000: 5.591 sec
Training Data Eval: accuracy: 1.00000
Validation Data Eval: accuracy: 1.00000
Step 5100: 1.235 sec
Training Data Eval: accuracy: 1.00000
Validation Data Eval: accuracy: 1.00000
Step 5200: 1.235 sec
Training Data Eval: accuracy: 1.00000
Validation Data Eval: accuracy: 1.00000
@zeynepgokce: check your input data and your pre-trained model restoration.
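For anyone hitting this, two quick sanity checks along those lines. This is only a sketch: next_batch() stands in for however you load a batch, and sess, saver and model_filename are the names from the snippet above.

import numpy as np
import tensorflow as tf

# 1) Input data: if every batch contains only one class, a constant 0.0 or
#    1.0 accuracy is expected. next_batch() is a placeholder, not this
#    repo's API.
images, labels = next_batch(batch_size)
print('label counts per class:',
      np.bincount(np.asarray(labels, dtype=np.int64)))

# 2) Restoration: confirm that saver.restore() actually changed the weights.
some_var = tf.global_variables()[0]
before = sess.run(some_var)
saver.restore(sess, model_filename)
after = sess.run(some_var)
print('restore changed weights:', not np.array_equal(before, after))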
Have you solved this problem? I can't figure it out.
These are my first iterations. Is this normal? (I kept all parameters from the original code except batch_size.)
Step 0: 61.349 sec
Training Data Eval: accuracy: 1.00000
Validation Data Eval: accuracy: 1.00000
Step 1: 60.673 sec
Step 2: 60.542 sec
Step 3: 60.960 sec
Step 4: 60.619 sec
Step 5: 60.658 sec
Step 6: 60.709 sec
Step 7: 60.616 sec
Step 8: 60.445 sec
Step 9: 60.547 sec
Step 10: 60.527 sec
Training Data Eval: accuracy: 0.90000
Validation Data Eval: accuracy: 1.00000
Step 11: 60.612 sec
Step 12: 60.600 sec
Step 13: 61.529 sec
Step 14: 60.945 sec
Step 15: 60.388 sec
Step 16: 60.548 sec
Step 17: 60.278 sec
Step 18: 60.645 sec
Step 19: 60.625 sec
Step 20: 61.114 sec
Training Data Eval: accuracy: 1.00000
Validation Data Eval: accuracy: 0.80000