Coding511 opened this issue 2 years ago
@sailist, could you please clarify this doubt?
Whatever WA means, that can't be the case in the code. But it doesn't matter, because there is no significant gap between WA and UA in this paper. Also, there is no need to compare against this paper, because its result is much lower than the current SOTA result (82%+).
@sailist, thanks for the reply, but I am not asking about the accuracy of this paper. The author has computed the accuracies wrongly, even though the two metrics are different. So could you please look at the train.py file and confirm whether I am correct or not?
Also, can you please share the link to the SOTA paper? I think it reports higher results than those here.
> I think you mistakenly evaluated WA and UA accuracies in train.py. The WA is the average class accuracy, and UA is the total accurate samples divided by the total samples.
You are right, but the method to calculate the weight of each class may differ from case to case; I'm still not sure about it.
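For reference, a minimal sketch of the two quantities being discussed. This is not the repository's train.py code, the function names are mine, and which of the two should be labelled WA and which UA is exactly what this thread is disputing:

```python
# Sketch only -- not the code from this repo's train.py.
# It computes the two quantities under discussion:
#   overall accuracy      = total correct predictions / total samples
#   mean per-class recall = average of each class's own accuracy
import numpy as np

def overall_accuracy(y_true, y_pred):
    """Total correct predictions divided by total samples."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())

def mean_class_accuracy(y_true, y_pred):
    """Average of per-class recalls; every class counts equally."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [(y_pred[y_true == c] == c).mean() for c in np.unique(y_true)]
    return float(np.mean(recalls))
```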
On the IEMOCAP dataset, COGMEN is also the paper I found to be SOTA, but I cannot reproduce its result in the 6-way experiment setting. MMGCN achieves the best results on MELD.
Also, if you feed multi-modal features to DAG-ERC, it may achieve better results than COGMEN/MMGCN in some situations.
@sailist The weighted and unweighted accuracy definitions are universal for multiclass problems, and here they are interchanged. I don't know why the author is not responding here.
> I think you mistakenly evaluated WA and UA accuracies in train.py. The WA is the average class accuracy, and UA is the total accurate samples divided by the total samples.

I think this is a huge mistake.
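To see why the naming matters, here is a toy check with made-up numbers (not taken from the paper or this repo): on an imbalanced label set the two quantities diverge, so reporting one under the other's name changes the headline figure.

```python
# Hypothetical imbalanced example: 90 samples of class 0 (81 correct),
# 10 samples of class 1 (4 correct). Numbers are invented for illustration.
import numpy as np

y_true = np.array([0] * 90 + [1] * 10)
y_pred = np.concatenate([
    np.array([0] * 81 + [1] * 9),   # class 0: 81/90 correct
    np.array([1] * 4 + [0] * 6),    # class 1: 4/10 correct
])

overall = float((y_true == y_pred).mean())                    # (81 + 4) / 100 = 0.85
per_class = [(y_pred[y_true == c] == c).mean() for c in (0, 1)]
mean_recall = float(np.mean(per_class))                       # (0.90 + 0.40) / 2 = 0.65

print(overall, mean_recall)  # 0.85 vs 0.65 -- swapping the two labels changes the reported number
```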