ZhugeKongan opened this issue 2 years ago (Open)
Why not compute the task importance weights like this?

```python
import numpy as np
import torch


def task_importance_weights(dataset):
    # Suggested alternative: gather all labels
    # (assumes dataset[i] returns a (data, label) pair)
    label = np.array([dataset[i][1] for i in range(len(dataset))])

    # Weight each class by the square root of its frequency, then normalize
    imps = []
    for i in range(max(label) + 1):
        imp = len(label[np.where(label == i)])
        imp = np.sqrt(imp * 1.0)
        imps.append(imp)

    imps = np.array(imps) / np.sum(imps)
    return torch.from_numpy(imps)
```
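For a concrete sense of what this produces, here is a small made-up example (the toy dataset below is just for illustration):

```python
# Toy dataset of (data, label) pairs with 4 ordinal classes; the data part is irrelevant here
toy_dataset = [(None, 0), (None, 0), (None, 1), (None, 2), (None, 2), (None, 3)]

weights = task_importance_weights(toy_dataset)
print(weights)  # 4 weights proportional to sqrt(class counts), summing to 1
```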
Thanks for the comment. Just to make sure we are on the same page: your question is why we used a different task importance weighting than Niu et al.? We tried the method in Niu et al., and in our case it made the performance worse for both CORAL and the Niu et al. ordinal CNN. So we tried different methods for task importance and found that the one we included worked best.
Thanks for the reply. I still have some questions: which paper does "ordinal CNN" usually refer to? I ran into the same problem when using the Niu et al. task importance. And can I find the original paper behind your `task_importance_weights(label_array)` code?
Thanks again for your reply!
Ordinal CNN is the method by Niu et al. 2016 you mentioned at the top of this thread ("Ordinal Regression with Multiple Output CNN for Age Estimation").
Sure. We have a task importance discussion in an earlier arxiv version of the paper but decided to remove it from the final paper because it was not super essential. You can find it here: https://arxiv.org/pdf/1901.07884v4.pdf
Thanks for the reply!
```python
def task_importance_weights(label_array):
    uniq = torch.unique(label_array)
    num_examples = label_array.size(0)
    ...
```
Is this code different from the one in the original paper, "Ordinal Regression with Multiple Output CNN for Age Estimation"?
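For anyone comparing the two approaches: in CORAL the importance weights are attached to the K-1 binary tasks (one per rank threshold), whereas the snippet at the top of this thread computes one weight per class. As a rough illustration only, and not the repository's actual implementation, a per-threshold weighting could be sketched like this:

```python
import torch


def per_task_weights_sketch(label_array):
    # Illustrative sketch (hypothetical, not the repo's actual function):
    # one weight per rank threshold, i.e. per binary sub-task, scaled by the
    # square root of the larger side of each label split.
    num_examples = label_array.size(0)
    thresholds = torch.arange(int(label_array.min()), int(label_array.max()))

    m = torch.zeros(thresholds.shape[0])
    for i, t in enumerate(thresholds):
        n_above = (label_array > t).sum()
        m[i] = torch.sqrt(torch.max(n_above, num_examples - n_above).float())

    return m / m.max()  # normalize so the largest task gets weight 1.0


# Example: labels 0..3 -> 3 thresholds, 3 task weights
labels = torch.tensor([0, 0, 1, 2, 2, 3])
print(per_task_weights_sketch(labels))
```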