pmhalvor opened 2 years ago
(and make sure scheduler is working?)
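A quick way to sanity-check that the scheduler is actually stepping, assuming this refers to a standard PyTorch learning-rate scheduler (the `StepLR` choice and all hyperparameters below are placeholders, not the actual training config):

```python
import torch

# Hypothetical minimal setup just to watch the LR move.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.5)

for epoch in range(15):
    optimizer.step()   # a real training step would go here
    scheduler.step()
    # LR should halve every 5 scheduler steps: 0.1 -> 0.05 -> 0.025
    print(epoch, scheduler.get_last_lr())
```

If the printed LR never changes, the scheduler is being constructed but never stepped (or stepped on the wrong optimizer).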
The point is to decrease the variability of the more ambiguous tasks. Right now the model is really only learning holders very well.
... OK, "very well" is generous. Not necessarily well at all, since the hard F1 scores are still quite low.
Another way of going about this is to give the tasks with higher variability more parameters to train.
For example, the current IMN setup could be run with twice as many expression layers as target/polarity layers, roughly as sketched below.
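A rough sketch of what that could look like. Everything here is hypothetical, not the actual IMN code: `make_head`, the hidden width of 768, and the 3-label output spaces are all assumptions for illustration.

```python
import torch.nn as nn

def make_head(hidden_dim: int, num_layers: int, num_labels: int) -> nn.Sequential:
    """Stack num_layers Linear+ReLU blocks, then project to the label space."""
    blocks = []
    for _ in range(num_layers):
        blocks += [nn.Linear(hidden_dim, hidden_dim), nn.ReLU()]
    blocks.append(nn.Linear(hidden_dim, num_labels))
    return nn.Sequential(*blocks)

hidden = 768  # assumed encoder output width
heads = nn.ModuleDict({
    "target":     make_head(hidden, num_layers=1, num_labels=3),
    "polarity":   make_head(hidden, num_layers=1, num_labels=3),
    "expression": make_head(hidden, num_layers=2, num_labels=3),  # 2x depth for the noisier task
    "holder":     make_head(hidden, num_layers=1, num_labels=3),
})
```

The idea is just that per-task depth becomes a config knob, so the more ambiguous tasks can get extra capacity without touching the shared encoder.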