Open ZillaRU opened 2 years ago
I have run this code on my real-world datasets and obtained the predicted results. Since Decagon regards a relation and its corresponding reverse relation as two different relations, each triplet receives two predicted scores. Did you calculate the performance metric values reported in your paper based on both?
https://github.com/mims-harvard/decagon/blob/86ff6b1423e548c22cbb8f70c5dac22b79d45290/main.py#L295-L302 However, you seem to calculate the metrics for a relation and its corresponding reverse relation separately. Which of the two is reported in your paper?
Typically, DDI prediction is a pairwise classification problem. In the given toy example, you artificially generate a small graph. My concern is that Decagon seems to treat a DDI and its corresponding reverse as two different edges, and I am confused about how to calculate the metrics given the predicted results for potential DDI triplets and their reverses. For instance, "Drug A's metabolism is increased when combined with Drug B" (symbolized as the triplet `(A, metabolism increased, B)`) is semantically equal to the reverse "Drug B can increase the metabolism of Drug A" (symbolized as the triplet `(B, increase metabolism, A)`). https://github.com/mims-harvard/decagon/blob/86ff6b1423e548c22cbb8f70c5dac22b79d45290/main.py#L140-L145 If the model gives different scores for the two triplets, how should the final metric values be calculated? By simply keeping the predicted results for both groups?
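To make the question concrete, here is a minimal sketch of the two evaluation strategies I can imagine, using toy scores and a hand-rolled AUROC. This is purely illustrative and not Decagon's actual evaluation code; the drug pairs, scores, and the averaging rule in option 2 are all my own assumptions:

```python
def auroc(labels, scores):
    """AUROC as the probability that a random positive pair is scored
    above a random negative pair (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical test set: the same six drug pairs, scored once under the
# forward relation (A, r, B) and once under its reverse (B, r_inv, A).
labels     = [1,    1,    1,    0,    0,    0]
scores_fwd = [0.90, 0.35, 0.60, 0.40, 0.20, 0.30]  # (A, r, B)
scores_rev = [0.80, 0.55, 0.65, 0.60, 0.25, 0.40]  # (B, r_inv, A)

# Option 1: treat forward and reverse as separate edge types and
# report a metric for each direction independently.
auc_fwd = auroc(labels, scores_fwd)
auc_rev = auroc(labels, scores_rev)

# Option 2: merge the two directed scores per pair (here by averaging)
# and compute one metric for the symmetric DDI relation.
scores_merged = [(f + r) / 2 for f, r in zip(scores_fwd, scores_rev)]
auc_merged = auroc(labels, scores_merged)
```

Is option 1 what the reported numbers correspond to (and if so, for which direction), or is some merged variant like option 2 used?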