oudalab / StructuredEventExtraction


Currently using the no-pretrained-word-embedding setting. Add in pretrained embeddings #1

Open YanLiang1102 opened 3 years ago

YanLiang1102 commented 3 years ago

The micro-recall is very low.

start testing...
using device cuda
processed 193229 tokens with 18904 phrases; found: 2484 phrases; correct: 2330.
accuracy:  91.20%; precision:  93.80%; recall:  12.33%; FB1:  21.79
          achieve: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
           action: precision:  81.82%; recall:  29.03%; FB1:  42.86  55
         adducing: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
agree_or_refuse_to_act: precision: 100.00%; recall:  27.81%; FB1:  43.52  42
           aiming: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
        arranging: precision: 100.00%; recall:   1.50%; FB1:   2.96  3
           arrest: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
         arriving: precision: 100.00%; recall:   2.94%; FB1:   5.71  9
       assistance: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
           attack: precision:  89.12%; recall:  20.79%; FB1:  33.72  147
            award: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
     bearing_arms: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
         becoming: precision:  99.34%; recall:  63.29%; FB1:  77.32  151
becoming_a_member: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
being_in_operation: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
        besieging: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
      bodily_harm: precision:  89.47%; recall:  22.08%; FB1:  35.42  57
    body_movement: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
        breathing: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
         bringing: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
         building: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
      carry_goods: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
      catastrophe: precision: 100.00%; recall:   0.12%; FB1:   0.24  1
        causation: precision:  91.89%; recall:  41.27%; FB1:  56.96  296
cause_change_of_position_on_a_scale: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
cause_change_of_strength: precision:  90.57%; recall:  20.69%; FB1:  33.68  53
cause_to_amalgamate: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
cause_to_be_included: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
cause_to_make_progress: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
           change: precision:  95.83%; recall:  14.20%; FB1:  24.73  24
change_event_time: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
change_of_leadership: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
 change_sentiment: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
      change_tool: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
            check: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
         choosing: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
    collaboration: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
    come_together: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
     coming_to_be: precision:  89.66%; recall:  26.17%; FB1:  40.52  87
coming_to_believe: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
     commerce_buy: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
     commerce_pay: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
    commerce_sell: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
       commitment: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
 committing_crime: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
    communication: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
      competition: precision:  85.71%; recall:   0.85%; FB1:   1.68  7
confronting_problem: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
          connect: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
....
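The aggregate numbers above follow directly from the phrase counts in the log, conlleval-style (O spans excluded). A minimal sketch, using the counts reported above:

```python
# Micro precision/recall/F1 from the phrase counts in the log above
# (conlleval-style evaluation: O runs are not counted as phrases).
found = 2484      # phrases predicted
gold = 18904      # gold phrases
correct = 2330    # correctly predicted phrases

precision = 100.0 * correct / found   # ~93.80%
recall = 100.0 * correct / gold       # ~12.33%
f1 = 2 * precision * recall / (precision + recall)  # ~21.79

print(f"precision: {precision:.2f}%; recall: {recall:.2f}%; FB1: {f1:.2f}")
```

So the very low micro recall is driven by the denominator: only 2484 of 18904 gold phrases are predicted at all.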
YanLiang1102 commented 3 years ago

The recall is off from the paper's result; the reason might be that the paper's recall considers the "negative instances" (the O tag). Need to take a look.

YanLiang1102 commented 3 years ago

After including O in precision and recall, the micro recall is quite high, on the dev dataset:

processed 193229 tokens with 192575 phrases; found: 192849 phrases; correct: 175886.
accuracy:  91.20%; precision:  91.20%; recall:  91.33%; FB1:  91.27
          achieve: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
           action: precision:  81.82%; recall:  29.03%; FB1:  42.86  55
         adducing: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
agree_or_refuse_to_act: precision: 100.00%; recall:  27.81%; FB1:  43.52  42
           aiming: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
        arranging: precision: 100.00%; recall:   1.50%; FB1:   2.96  3
           arrest: precision:   0.00%; recall:   0.00%; FB1:   0.00  0
         arriving: precision: 100.00%; recall:   2.94%; FB1:   5.71  9
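The jump in micro recall when O is counted can be reproduced on a toy example. This is a sketch of the idea only, not the actual evaluation script: the span extractor and `micro` helper below are hypothetical, and they assume O runs are scored as phrases just like labeled spans.

```python
# Toy sequence: mostly O tokens plus one labeled span, which the model mislabels.
gold = ["O"] * 8 + ["B-attack", "I-attack"]
pred = ["O"] * 8 + ["B-action", "I-action"]

def spans(tags, include_o=False):
    """Collect (start, end, label) spans; optionally count O runs as spans too."""
    out, i = set(), 0
    while i < len(tags):
        label = tags[i].split("-")[-1]
        j = i
        while j + 1 < len(tags) and tags[j + 1].split("-")[-1] == label:
            j += 1
        if label != "O" or include_o:
            out.add((i, j, label))
        i = j + 1
    return out

def micro(gold, pred, include_o):
    g, p = spans(gold, include_o), spans(pred, include_o)
    correct = len(g & p)
    return correct / max(len(p), 1), correct / max(len(g), 1)

print(micro(gold, pred, include_o=False))  # (0.0, 0.0): no labeled span matched
print(micro(gold, pred, include_o=True))   # (0.5, 0.5): the O run counts as correct
```

Since most tokens are O, counting O runs as phrases makes the (easy, dominant) negative class swamp the micro scores, which is consistent with precision and recall both landing near token accuracy (91.20%) in the output above.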

YanLiang1102 commented 3 years ago

Without using the pretrained embeddings, submitting the BiLSTM-CRF predictions to the leaderboard gives this result on the test set:

Micro_F1: 22.920265
Micro_Precision: 93.235294
Micro_Recall: 13.066178
Macro_F1: 7.039132
Macro_Precision: 20.702837
Macro_Recall: 4.861637
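The gap between Micro_F1 (22.92) and Macro_F1 (7.04) fits the per-class log above: macro averaging weights every class equally, so the many classes with 0.00% scores drag it down, while micro averaging pools counts and is dominated by the frequent classes. A sketch with toy per-class (correct, predicted, gold) counts back-derived approximately from the dev log above (illustrative only, not the leaderboard computation):

```python
# Per-class (correct, predicted, gold) phrase counts, approximated from the
# dev log above; the many never-predicted classes contribute F1 = 0 to macro.
classes = {
    "attack":   (131, 147, 630),
    "becoming": (150, 151, 237),
    "achieve":  (0, 0, 50),      # never predicted -> per-class F1 = 0
    "aiming":   (0, 0, 40),      # never predicted -> per-class F1 = 0
}

def f1(c, p, g):
    prec = c / p if p else 0.0
    rec = c / g if g else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# Macro: mean of per-class F1; zeros count fully.
macro = sum(f1(*v) for v in classes.values()) / len(classes)
# Micro: pool the counts, then compute one F1.
c, p, g = (sum(x) for x in zip(*classes.values()))
micro = f1(c, p, g)
print(f"macro={macro:.3f} micro={micro:.3f}")  # macro well below micro
```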
YanLiang1102 commented 3 years ago

https://competitions.codalab.org/competitions/27320#results