zeynepgokce opened this issue 6 years ago
Try using `labels = tf.one_hot(labels_placeholder, c3d_model.NUM_CLASSES)` and `loss = -tf.reduce_sum(labels * tf.log(tf.clip_by_value(tf.nn.softmax(logit), 1e-10, 1.0)), 1)` instead (note that the axis argument of `tf.reduce_sum` should be the integer `1`, not `1.0`).
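For what it's worth, here is a minimal, self-contained sketch of that loss (TF1-style, as used in this repo). The placeholders and `NUM_CLASSES = 3` are stand-ins for illustration only; the clip keeps `tf.log` away from zero, which is the usual source of the NaN:

```python
import tensorflow as tf

NUM_CLASSES = 3  # stand-in for c3d_model.NUM_CLASSES

# Stand-ins for the real graph: integer class labels and the network's raw scores.
labels_placeholder = tf.placeholder(tf.int64, shape=(None,))
logit = tf.placeholder(tf.float32, shape=(None, NUM_CLASSES))

# One-hot encode the labels, then clip the softmax so tf.log never sees 0.
labels = tf.one_hot(labels_placeholder, NUM_CLASSES)
probs = tf.nn.softmax(logit)
per_example = -tf.reduce_sum(
    labels * tf.log(tf.clip_by_value(probs, 1e-10, 1.0)), 1)
loss = tf.reduce_mean(per_example)
```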
@zeynepgokce Hi, could you tell me how to print the loss? Where should I add the code? Thank you.
@491506870 Hi, I just added `loss` to the `sess.run` call, as follows:
summary, acc, l = sess.run(
    [merged, accuracy, loss],
    feed_dict={images_placeholder: train_images, labels_placeholder: train_labels})
print("accuracy: " + "{:.5f}".format(acc))
print("Loss: ", l)
@zeynepgokce Thank you so much, that helps me a lot! I think I ran into the same problem as you: my loss became NaN after several steps. What did you do to solve it?
@491506870 The problem was related to my own dataset labelling. I changed my labels to start from 0 (i.e. 0 to 2), since I have 3 classes.
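In case it helps anyone else, a hypothetical sketch of that remapping (assuming the list file stores labels starting at 1):

```python
# Labels as read from a hypothetical list file that counts classes from 1..3.
raw_labels = [1, 2, 3, 2, 1]

# Shift to 0..2 so tf.one_hot and the cross-entropy loss get valid zero-based indices.
labels = [l - 1 for l in raw_labels]
print(labels)  # [0, 1, 2, 1, 0]
```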
Hello everyone,
I have a question about training this model with a different dataset.
When I fine-tune the C3D model on UCF101 data there is no problem, but when I change the dataset the loss becomes a NaN value.
I tried several ways to handle this problem, none of which solved it: 1) changed the learning rate, 2) changed the batch size, 3) tried with a small test and train split (with #num: 3).
For instance, these are the steps of training the same model with a different dataset. The learning rate and batch size are the same as in the original model; only the dataset is changed.
Where could the problem be? Why does this model not work with a different dataset? Any suggestions? Thank you.