yang0369 opened 1 year ago
Update: I managed to get some evaluation metrics after doubling the number of epochs.
evaluation metrics: {'eval_precision': 0.2516422435573522, 'eval_recall': 0.6058394160583942, 'eval_f1': 0.3555872902534809, 'eval_loss': 0.27880820631980896, 'eval_runtime': 3.1782, 'eval_samples_per_second': 16.361, 'epoch': 68.49}
Though the F1 score is still quite low, at least it proves that my LayoutLMv2 works with the RE head; I shall try more epochs to boost the F1 score. Note that LayoutLMv2 requires more epochs to reach an F1 score similar to LayoutXLM's. I guess LayoutXLM converges faster than LayoutLMv2 on English datasets; I'm not quite sure why, but I'm posting this as a reference for people exploring the relation extraction task. Cheers.
Hello yang0369,
I'm also trying to perform relation extraction on the FUNSD dataset. Could you share some code for guidance?
The dataset formats for XFUND and FUNSD are different, and NielsRogge fine-tuned his model for relation extraction on XFUND.
@NielsRogge Dear author, I really appreciate the great notebooks you have created to guide us on performing the relation extraction task on the XFUN dataset with the LayoutXLM model.
With your notebook, I managed to run relation extraction successfully on FUNSD (in English) with LayoutXLM, with the following metrics:
evaluation metrics: {'eval_precision': 0.40963855421686746, 'eval_recall': 0.5037037037037037, 'eval_f1': 0.4518272425249169, 'eval_loss': 0.08972907066345215, 'eval_runtime': 3.3059, 'eval_samples_per_second': 16.637, 'epoch': 34.01}
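For anyone wondering what these numbers measure: this is not the author's evaluation code, just an illustrative sketch of how micro precision/recall/F1 over predicted relations (sets of head/tail entity pairs) are typically computed.

```python
def relation_prf(pred_links, gold_links):
    """Micro precision/recall/F1 over sets of (head, tail) relation pairs."""
    pred, gold = set(pred_links), set(gold_links)
    tp = len(pred & gold)  # links predicted AND present in the gold annotations
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: 2 of 3 predicted links are correct, 2 of 4 gold links found.
p, r, f = relation_prf(
    [(0, 1), (0, 2), (3, 4)],
    [(0, 1), (0, 2), (5, 6), (7, 8)],
)
```

Here `p` is 2/3, `r` is 0.5, and `f` is 4/7, i.e. recall can be high while precision drags F1 down, which matches the pattern in the metrics above.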
I understand that LayoutXLM is built on top of LayoutLMv2. For an English dataset like FUNSD, I wanted to experiment with LayoutLMv2 instead of LayoutXLM, so I made three changes in total to switch to LayoutLMv2:
However, the evaluation metrics are always 0 (precision, recall, and F1 all equal 0) no matter how many epochs I fine-tune for. Could you advise whether I have missed any steps in converting from LayoutXLM to LayoutLMv2? Thank you!
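For readers following along: the post does not list the three concrete changes, so below is only a hypothetical summary of the kind of substitutions such a switch usually involves. The checkpoint names are the published Hugging Face identifiers; the `...ForRelationExtraction` class names come from the layoutlmft package in microsoft/unilm (the RE head is not part of transformers itself), and the mapping helper is purely illustrative.

```python
# Illustrative mapping of LayoutXLM identifiers to their LayoutLMv2
# counterparts (an assumption, not the author's actual three changes).
XLM_TO_V2 = {
    # Pretrained checkpoint: LayoutXLM is multilingual; LayoutLMv2 is
    # English-only with an uncased WordPiece vocabulary.
    "microsoft/layoutxlm-base": "microsoft/layoutlmv2-base-uncased",
    # Tokenizer class: SentencePiece (XLM-RoBERTa-style) vs. WordPiece
    # (BERT-style) -- token/label alignment code may need changes too.
    "LayoutXLMTokenizer": "LayoutLMv2Tokenizer",
    # Model class carrying the relation-extraction head (from layoutlmft).
    "LayoutXLMForRelationExtraction": "LayoutLMv2ForRelationExtraction",
}

def convert_identifier(name: str) -> str:
    """Return the LayoutLMv2 counterpart of a LayoutXLM identifier, if any."""
    return XLM_TO_V2.get(name, name)
```

The tokenizer mismatch is worth double-checking in particular: if the data pipeline still assumes SentencePiece offsets while the model uses WordPiece, entity spans can end up misaligned, which is one plausible way to get all-zero precision/recall/F1.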