microsoft / unilm

Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
https://aka.ms/GeneralAI

Sequence Labeling Task: why does the fine-tuned model give the same evaluation results as the pre-trained model? #302

Open croari opened 3 years ago

croari commented 3 years ago

Describe the model I am using (LayoutLM): I fine-tuned LayoutLM on my own dataset for the sequence labeling task. Training ran normally and completed, and the loss curve in TensorBoard looks converged. But I have now found that, on the same data, the evaluation results from the fine-tuned model are identical to those from the pre-trained model, as if the fine-tuned weights were never used. Has anyone hit this problem? Did I do something wrong?
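One common cause of this symptom is that the evaluation step reloads the pre-trained model name instead of the checkpoint directory that fine-tuning wrote to disk. Below is a minimal sketch of checking for that, assuming the Hugging Face `transformers` LayoutLM classes and a hypothetical output directory `./layoutlm-finetuned` (both the directory name and the label count are placeholders, not from the original report):

```python
# Sketch: load the *fine-tuned* checkpoint for evaluation, not the base weights.
import torch
from transformers import LayoutLMForTokenClassification, LayoutLMTokenizer

pretrained_name = "microsoft/layoutlm-base-uncased"  # base pre-trained weights
finetuned_dir = "./layoutlm-finetuned"  # assumption: where training saved its checkpoint

tokenizer = LayoutLMTokenizer.from_pretrained(pretrained_name)

# Wrong: this reloads the original pre-trained weights, so evaluation
# results will match the pre-trained model exactly.
# model = LayoutLMForTokenClassification.from_pretrained(pretrained_name)

# Right: load the weights that fine-tuning actually wrote to disk.
model = LayoutLMForTokenClassification.from_pretrained(finetuned_dir)
model.eval()

# Optional sanity check: after fine-tuning, the encoder weights should
# differ from the pre-trained ones. If they are identical, the checkpoint
# being evaluated is not the one training produced.
base = LayoutLMForTokenClassification.from_pretrained(
    pretrained_name, num_labels=model.config.num_labels
)
identical = all(
    torch.equal(p, q)
    for p, q in zip(model.layoutlm.parameters(), base.layoutlm.parameters())
)
print("Encoder weights identical to pre-trained:", identical)
```

If the sanity check prints `True`, the fine-tuned weights were never loaded (or never saved), which would explain identical evaluation results.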

sumeetsuman83 commented 3 years ago

@croari Did you find any solution for this?