pmhalvor / fgsa

Main repo for my Master's thesis on Fine-Grained Sentiment Analysis

Test BertHead with unique learning rates, but also dropout #12

Open pmhalvor opened 2 years ago

pmhalvor commented 2 years ago

(and make sure the scheduler is working?)

The point is to decrease the variability of the more ambiguous tasks. Right now the model is really only learning holders well.

... Ok, "very well" is generous. Not necessarily well at all, since the hard F1 scores are still very low.
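A minimal sketch of the idea in PyTorch: one optimizer parameter group per task head, each with its own learning rate and dropout, plus a quick sanity check that the scheduler actually steps every group. All module names, layer sizes, learning rates, and dropout values here are illustrative assumptions, not the repo's actual `BertHead` API.

```python
import torch
import torch.nn as nn

# Hypothetical multi-task head: a shared encoder stand-in plus one head
# per task, with heavier dropout on the noisier tasks.
class MultiTaskHead(nn.Module):
    def __init__(self, hidden=32, n_labels=3):
        super().__init__()
        self.shared = nn.Linear(8, hidden)  # stand-in for a BERT encoder
        self.heads = nn.ModuleDict({
            "holder":     nn.Sequential(nn.Dropout(0.1), nn.Linear(hidden, n_labels)),
            "target":     nn.Sequential(nn.Dropout(0.3), nn.Linear(hidden, n_labels)),
            "expression": nn.Sequential(nn.Dropout(0.5), nn.Linear(hidden, n_labels)),
        })

    def forward(self, x):
        h = torch.relu(self.shared(x))
        return {task: head(h) for task, head in self.heads.items()}

model = MultiTaskHead()

# One parameter group per task head, each with its own learning rate.
optimizer = torch.optim.AdamW([
    {"params": model.shared.parameters(),               "lr": 2e-5},
    {"params": model.heads["holder"].parameters(),      "lr": 1e-4},
    {"params": model.heads["target"].parameters(),      "lr": 5e-4},
    {"params": model.heads["expression"].parameters(),  "lr": 1e-3},
])

# Sanity check that the scheduler steps every param group, not just the first.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)
before = [g["lr"] for g in optimizer.param_groups]
optimizer.step()
scheduler.step()
after = [g["lr"] for g in optimizer.param_groups]
```

Printing `before` and `after` makes it easy to confirm each group's learning rate is actually being decayed independently.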

pmhalvor commented 2 years ago

Another way to go about this is to give the tasks with higher variability more parameters to train.

For example, the current IMN setup has twice as many expression layers as target/polarity layers.
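The capacity idea above can be sketched roughly like this: build each task head from a configurable stack of layers and give the expression head twice as many as target/polarity. Layer counts, hidden sizes, and the `make_head` helper are illustrative assumptions, not the actual IMN configuration.

```python
import torch.nn as nn

# Hypothetical helper: a task head with a configurable number of hidden layers.
def make_head(n_layers, hidden=32, n_labels=3):
    layers = []
    for _ in range(n_layers):
        layers += [nn.Linear(hidden, hidden), nn.ReLU()]
    layers.append(nn.Linear(hidden, n_labels))
    return nn.Sequential(*layers)

# Expression gets 2x the hidden layers of target/polarity, mirroring the
# IMN-style setup mentioned above.
heads = nn.ModuleDict({
    "target":     make_head(n_layers=2),
    "polarity":   make_head(n_layers=2),
    "expression": make_head(n_layers=4),
})

def n_params(module):
    return sum(p.numel() for p in module.parameters())
```

Comparing `n_params(heads["expression"])` against the other heads verifies that the higher-variability task really does get more trainable capacity.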