Is the following calculation of an attribute's recall correct?
I am working with a fine-grained attribute dataset that has 6 big categories and 26 small categories. If I convert the big categories to small categories using a one-hot representation, the result looks like this:
pred: [1,0,0,0, 1,0,0,0,0, 1,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1,0]
true: [0,0,0,1, 0,0,0,0,1, 1,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1,0]
In this case:

tp = (y_true * y_pred).sum()
tn = ((1 - y_true) * (1 - y_pred)).sum()
fp = ((1 - y_true) * y_pred).sum()
fn = (y_true * (1 - y_pred)).sum()
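As code, this is what I run (a minimal NumPy sketch; the arrays and formulas are just the ones above, nothing added):

```python
import numpy as np

# One-hot predictions and labels over the 26 small categories (the arrays above).
y_pred = np.array([1,0,0,0, 1,0,0,0,0, 1,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1,0])
y_true = np.array([0,0,0,1, 0,0,0,0,1, 1,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1,0])

tp = (y_true * y_pred).sum()              # predicted 1 and actually 1
tn = ((1 - y_true) * (1 - y_pred)).sum()  # predicted 0 and actually 0
fp = ((1 - y_true) * y_pred).sum()        # predicted 1 but actually 0
fn = (y_true * (1 - y_pred)).sum()        # predicted 0 but actually 1
```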
This gives tp = 4, tn = 18, fp = 2, fn = 2, so accuracy is 22/26 and recall = precision = 2/6.
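These are the formulas I apply to those counts, written out so it is clear what I mean by each metric (standard definitions, as far as I know):

```python
# Counts taken from the arrays above.
tp, tn, fp, fn = 4, 18, 2, 2

accuracy = (tp + tn) / (tp + tn + fp + fn)  # correct predictions over all 26 positions
recall = tp / (tp + fn)                     # share of true attributes that were predicted
precision = tp / (tp + fp)                  # share of predicted attributes that are true
```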
Is this calculation right?