Closed Engin007 closed 9 years ago
Hi, I had the same problem and I "solved" it by adding a few lines of code to accuracy_layer.cpp. From there you can get information about each image; you then store it in (*top)[0]->mutable_cpu_data()[N], which carries the information up to the solver (solver.cpp). The solver does the printing, and from there you work in the terminal with gnuplot. It is not perfect, and I would welcome suggestions too ;) but it should at least give you an idea.
Here is my modification to accuracy_layer.cpp:
................
  for (int i = 0; i < num; ++i) {
    // Top-k accuracy
    std::vector<std::pair<Dtype, int> > bottom_data_vector;
    for (int j = 0; j < dim; ++j) {
      bottom_data_vector.push_back(
          std::make_pair(bottom_data[i * dim + j], j));
    }
    std::partial_sort(
        bottom_data_vector.begin(), bottom_data_vector.begin() + top_k_,
        bottom_data_vector.end(), std::greater<std::pair<Dtype, int> >());
    // Check whether the true label is the top prediction
    int label = static_cast<int>(bottom_label[i]);
    class_count[label]++;
    if (label == 4) {  // Label of the negative class -- HARDCODED
      ++negative;
      if (bottom_data_vector[0].second == label) {
        ++accuracy;
        class_good[label]++;
        ++true_negative;
      } else {
        ++false_positive;
      }
    } else {
      ++positive;
      if (bottom_data_vector[0].second == label) {
        ++accuracy;
        class_good[label]++;
        ++true_positive;
      } else {
        ++false_negative;
      }
    }
  }
  (*top)[0]->mutable_cpu_data()[0] = accuracy / num;
  // (*top)[0]->mutable_cpu_data()[0] = (true_positive + true_negative) / (positive + negative);  // should equal the accuracy above
  (*top)[0]->mutable_cpu_data()[1] = true_positive / positive;   // TPR, for the ROC
  (*top)[0]->mutable_cpu_data()[2] = false_positive / negative;  // FPR, for the ROC
  // Per-class counts -- 7 classes HARDCODED
  for (int i = 0; i < 7; ++i) {
    (*top)[0]->mutable_cpu_data()[3 + i] = class_count[i];
    (*top)[0]->mutable_cpu_data()[3 + 7 + i] = class_good[i];
  }
And here is the change in solver.cpp, in the Test(const int test_net_id) function:
...........
  }
  if (param_.test_compute_loss()) {
    loss /= param_.test_iter(test_net_id);
    LOG(INFO) << "Test loss: " << loss;
  }
  const char* class_label[] = {"name_class_1", "name_class_2", ...};
  int output_index = 0;
  for (int i = 0; i < test_score.size(); ++i) {
    if (i < 3 || i == test_score.size() - 1) {  // HARDCODED: accuracy, TPR, FPR and loss
      const int output_blob_index =
          test_net->output_blob_indices()[test_score_output_id[i]];
      const string& output_name = test_net->blob_names()[output_blob_index];
      const Dtype loss_weight = test_net->blob_loss_weights()[output_blob_index];
      ostringstream loss_msg_stream;
      const Dtype mean_score = test_score[i] / param_.test_iter(test_net_id);
      if (loss_weight) {
        loss_msg_stream << " (* " << loss_weight
                        << " = " << loss_weight * mean_score << " loss)";
      }
      LOG(INFO) << "Testing net (#" << test_net_id << ")"
                << " Test net output #" << output_index << ": " << output_name
                << " = " << mean_score << loss_msg_stream.str();
      output_index++;
    } else {
      if (i < 10) {  // HARDCODED for single-class accuracy
        if (test_score[i] != 0) {
          const double mean_score =
              static_cast<double>(test_score[i + 7]) / static_cast<double>(test_score[i]);
          LOG(INFO) << "Class net (#" << test_net_id << ") "
                    << class_label[i - 3] << " accuracy: " << mean_score;
        } else {
          LOG(INFO) << "Class net (#" << test_net_id << ") "
                    << class_label[i - 3] << " accuracy: 0";
        }
      }
    }
  }
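Note that the TPR/FPR values exported above give only a single ROC point, fixed by the argmax decision; to trace a full ROC curve you would sweep a decision threshold over the raw positive-class scores. A minimal standalone sketch of that sweep (independent of Caffe; the function name and the (score, is_positive) input format are just illustrative):

```cpp
#include <algorithm>
#include <functional>
#include <utility>
#include <vector>

// Sweep a decision threshold over classifier scores to obtain ROC points.
// 'samples' holds (score, is_positive) pairs; the result is a list of
// (FPR, TPR) points running from (0, 0) to (1, 1), one per threshold.
std::vector<std::pair<double, double> > roc_points(
    std::vector<std::pair<double, bool> > samples) {
  // Sort by descending score: each prefix of the sorted list is the set
  // of samples predicted positive at one threshold value.
  std::sort(samples.begin(), samples.end(),
            std::greater<std::pair<double, bool> >());
  double positives = 0.0, negatives = 0.0;
  for (size_t i = 0; i < samples.size(); ++i) {
    if (samples[i].second) ++positives; else ++negatives;
  }
  std::vector<std::pair<double, double> > points;
  points.push_back(std::make_pair(0.0, 0.0));
  double tp = 0.0, fp = 0.0;
  for (size_t i = 0; i < samples.size(); ++i) {
    if (samples[i].second) ++tp; else ++fp;
    points.push_back(std::make_pair(fp / negatives, tp / positives));
  }
  return points;
}
```

The resulting (FPR, TPR) pairs can be dumped to a text file and plotted directly with gnuplot.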
Oh great! I will take a look at it and give you my feedback. Code optimization is not my forte, though; I just need to perform some analysis on my network. What are you testing your net on?
Do you have any idea how I could implement this in the Windows port of Caffe?
After these changes, how do I show the ROC? What should I do: recompile Caffe, or write a new layer?
Hello Caffe Community,
I would like to know how I can access the information needed (FP, TP, TN, FN) to build a confusion matrix, and eventually use that to plot an ROC curve. Basically, I would like to know whether there is an existing function/layer that would give me an output I can use. I have already read the layer catalogue, but I am not sure which layer outputs the classification results. (C++)
Thank you