I fine-tuned on my own data based on the train_val.prototxt, in which I changed num_output to 12 (I prepared 12 person classes) and renamed conv10 to myconv10. During training, the accuracy reached 1 quickly while the loss was -nan, like below:
I0122 16:43:48.676445 13661 solver.cpp:218] Iteration 40 (0.0557035 iter/s, 718.088s/40 iters), loss = -nan
I0122 16:43:48.676497 13661 solver.cpp:237] Train net output #0: accuracy = 1
I0122 16:43:48.676512 13661 solver.cpp:237] Train net output #1: accuracy_top5 = 1
I0122 16:43:48.676530 13661 solver.cpp:237] Train net output #2: loss = -nan (* 1 = -nan loss)
I0122 16:43:48.676544 13661 sgd_solver.cpp:105] Iteration 40, lr = 0.03984
But sadly, when running prediction, I found the output of the prob layer is nan. Here is the result:
output {'prob': array([[[[nan]],
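For reference, the change described above would look roughly like this in train_val.prototxt. This is a sketch: the bottom/top blob names and the filler settings are assumptions based on the stock SqueezeNet file and should be adjusted to match yours.

```protobuf
# Modified classifier layer: renamed to myconv10, num_output set to 12.
# "drop9" as the bottom blob is an assumption from the stock prototxt.
layer {
  name: "myconv10"
  type: "Convolution"
  bottom: "drop9"
  top: "conv10"
  convolution_param {
    num_output: 12      # 12 person classes instead of 1000
    kernel_size: 1
    weight_filler { type: "gaussian" mean: 0.0 std: 0.01 }
  }
}
```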
This repo isn't the best place for question-and-answer on custom applications of SqueezeNet or on training SqueezeNet on custom datasets. This seems like a question for StackOverflow.
Has anybody run into this before?
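For what it's worth, a loss of -nan together with accuracy pinned at 1 usually means some activation overflowed during training (a learning rate near 0.04 when fine-tuning can do this). A minimal numpy sketch, not Caffe itself, showing how a single overflowed logit turns the softmax probabilities, and hence the log-loss, into nan:

```python
import numpy as np

# Once any logit overflows to inf, the softmax normalization
# produces nan, and the cross-entropy loss becomes nan as well.
logits = np.array([1.0, 2.0, np.inf])
shifted = logits - logits.max()      # inf - inf -> nan
e = np.exp(shifted)
probs = e / e.sum()                  # nan propagates to every class
loss = -np.log(probs[2])             # the "loss = -nan" seen in the log
print(np.isnan(probs).all(), np.isnan(loss))  # -> True True
```

Lowering the base learning rate (and checking the input data for corrupt labels) is the usual first step when this appears.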