littttttlebird opened 3 years ago
This is an interesting question! I think in this case SupCon might not be as good as simply using binary cross-entropy for each label.
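For reference, the plain baseline mentioned here is just one binary cross-entropy term per label on top of a shared encoder. A minimal sketch, where the backbone and sizes are placeholders rather than anything from this repo:

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))  # stand-in for a real backbone
classifier = nn.Linear(128, 3)                                      # 3 labels, one logit each
criterion = nn.BCEWithLogitsLoss()                                  # independent BCE per label

images = torch.randn(4, 3, 32, 32)
labels = torch.tensor([[1., 0., 1.], [0., 1., 1.], [1., 1., 0.], [0., 1., 1.]])

logits = classifier(encoder(images))    # [4, 3]
loss = criterion(logits, labels)        # averaged over all samples and labels
```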
How about using a separate classifier (projection head) for each label after a shared encoder (in this case ResNet), with the total loss being the sum of the per-label SupCon losses? It is just an idea, and I have no idea whether it works.
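A minimal sketch of this "shared encoder + one SupCon term per label" idea, assuming the `SupConLoss` from this repo; `MultiLabelSupCon`, `heads`, and the tensor shapes are placeholders, not anything in the codebase:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLabelSupCon(nn.Module):
    """Shared encoder, one projection head per label, loss = sum of per-label SupCon terms."""
    def __init__(self, encoder, feat_dim, proj_dim, num_labels, supcon_loss):
        super().__init__()
        self.encoder = encoder          # shared backbone, e.g. a ResNet
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, proj_dim) for _ in range(num_labels)]
        )
        self.supcon_loss = supcon_loss  # e.g. SupConLoss() from this repo

    def forward(self, images, labels):
        # images: [bsz, n_views, C, H, W], labels: [bsz, num_labels] multi-hot (0/1)
        bsz, n_views = images.shape[:2]
        feats = self.encoder(images.flatten(0, 1))   # [bsz * n_views, feat_dim]
        total = 0.0
        for k, head in enumerate(self.heads):
            z = F.normalize(head(feats), dim=1)      # SupCon expects L2-normalized features
            z = z.view(bsz, n_views, -1)             # [bsz, n_views, proj_dim]
            # treat label k as a binary SupCon problem; note that samples *without*
            # label k are also pulled together here, since they share the value 0
            total = total + self.supcon_loss(z, labels[:, k])
        return total
```

One design note: giving each label its own projection head means the per-label terms do not have to compete for the same embedding geometry, at the cost of extra parameters per label.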
Would it be sufficient to just modify the mask and my input labels in such a way that the mask correctly identifies data points that share at least one class, or is it necessary to modify more than just the mask in this loss function?
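If the "share at least one class" definition of positives is what you want, the mask route is likely enough, assuming the `SupConLoss` you use accepts a precomputed mask (the one in this repo takes a `mask` argument). A minimal sketch; the function name and shapes below are just for illustration:

```python
import torch

def any_shared_class_mask(labels):
    """labels: [bsz, num_classes] multi-hot tensor; mask[i, j] = 1 iff i and j share a class."""
    labels = labels.float()
    overlap = labels @ labels.T          # [bsz, bsz] counts of shared positive labels
    return (overlap > 0).float()

# usage (shapes are illustrative):
# labels   = torch.tensor([[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 1, 1]])
# mask     = any_shared_class_mask(labels)      # [4, 4]
# features = ...                                # [bsz, n_views, dim], L2-normalized
# loss     = criterion(features, mask=mask)     # criterion = SupConLoss()
```

One thing to watch: depending on the implementation, a sample with no positives at all in the batch can cause a divide-by-zero in the mean-over-positives term, so keeping at least two views per sample (where the other view is always a positive) is the safer setup.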
I have a text multi-label classification task. Can I use the SupCon loss? My idea is to accumulate the SupCon loss over every label view. For example, with batch labels = [[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 1, 1]]:

from label view 0: positive examples = {0, 2}, negative examples = {1, 3}
from label view 1: positive examples = {1, 2, 3}, negative examples = {0}
from label view 2: positive examples = {0, 1, 3}, negative examples = {2}

Is this setting reasonable?
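For what it's worth, those per-label views can be read straight off the label matrix. A tiny check, assuming label column k defines the positives for view k:

```python
import torch

labels = torch.tensor([[1, 0, 1],
                       [0, 1, 1],
                       [1, 1, 0],
                       [0, 1, 1]])

for k in range(labels.shape[1]):
    pos = torch.nonzero(labels[:, k] == 1).flatten().tolist()
    neg = torch.nonzero(labels[:, k] == 0).flatten().tolist()
    print(f"label view {k}: positives = {pos}, negatives = {neg}")

# label view 0: positives = [0, 2], negatives = [1, 3]
# label view 1: positives = [1, 2, 3], negatives = [0]
# label view 2: positives = [0, 1, 3], negatives = [2]
```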