hy-struggle / PRGC

PRGC: Potential Relation and Global Correspondence Based Joint Relational Triple Extraction

f=0,p=0,r=0 #26

Open · Wangrulin-1128 opened this issue 11 months ago

Wangrulin-1128 commented 11 months ago

When I run the model on my own dataset, the output is all zeros (f=0, p=0, and r=0). But when I use the dataset provided with PRGC, the output is normal. How can I solve this problem? Thanks

beiyaoovo commented 4 months ago

me too...

youngsasa2021 commented 4 months ago

Me too.

beiyaoovo commented 4 months ago

Me too.

I found that this problem can be solved by modifying the learning rate:

    # learning rate
    self.fin_tuning_lr = 1e-4   # LR for fine-tuning the pretrained encoder
    self.downs_en_lr = 1e-3     # LR for the downstream (task-specific) layers
    self.clip_grad = 2.         # gradient clipping threshold
    self.drop_prob = 0.3        # dropout probability
    self.weight_decay_rate = 0.01   # weight decay (L2 regularization)
    self.warmup_prop = 0.1      # proportion of steps used for LR warmup
    self.gradient_accumulation_steps = 2

I changed it to:

    self.fin_tuning_lr = 5e-5
    self.downs_en_lr = 5e-4

You can modify it according to your needs.

But the F1 value is still very low, only around 0.2. If you have resolved the low-F1 issue, please let me know.
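
For context, here is a minimal sketch of how two learning rates like these are typically wired into the optimizer: the pretrained encoder fine-tunes at the small `fin_tuning_lr` while the task-specific layers train at the larger `downs_en_lr`. This is an illustration, not the repo's exact training code, and it assumes the encoder's parameters are registered under names starting with `bert`.

    # Minimal sketch (not the repo's exact code): split the model's parameters
    # into two optimizer groups so the pretrained encoder fine-tunes at a small
    # learning rate while the downstream layers train at a larger one.
    import torch
    from torch.optim import AdamW

    def build_optimizer(model: torch.nn.Module,
                        fin_tuning_lr: float = 5e-5,
                        downs_en_lr: float = 5e-4,
                        weight_decay_rate: float = 0.01) -> AdamW:
        encoder_params, downstream_params = [], []
        for name, param in model.named_parameters():
            # Assumption: encoder parameters are registered under "bert.*".
            if name.startswith("bert"):
                encoder_params.append(param)
            else:
                downstream_params.append(param)
        return AdamW(
            [{"params": encoder_params, "lr": fin_tuning_lr},
             {"params": downstream_params, "lr": downs_en_lr}],
            weight_decay=weight_decay_rate,
        )

With this kind of grouping, lowering `fin_tuning_lr` to 5e-5 as suggested above only affects the encoder, which is often what stabilizes fine-tuning.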

youngsasa2021 commented 4 months ago

Thanks for your suggestion. I'll try it!

youngsasa2021 commented 4 months ago

Although I modified it as you suggested, the results (f, p, r) are still 0.
