zjunlp / MKGformer

[SIGIR 2022] Hybrid Transformer with Multi-level Fusion for Multimodal Knowledge Graph Completion
MIT License

performance result #10

Closed: Chenfeng1271 closed this issue 2 years ago

Chenfeng1271 commented 2 years ago

Hi, thanks for your contributions. When I run the MNER task on Twitter-2017, the final result is slightly different from the paper: over four runs I get an F1 of 86.54, but the paper reports 87.49. Could the torch version or other factors influence the final results?

My environment is a Tesla K40, CUDA 10.1, Torch 1.7.0.

flow3rdown commented 2 years ago

Different environments may have different optimal hyperparameters. You could try tuning the learning rate and batch size: a learning rate between 1e-5 and 5e-5 and a batch size between 8 and 32 are recommended.
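A minimal sketch of that tuning advice as a grid sweep over the recommended ranges. The values and the `run_training` hook are illustrative assumptions, not part of MKGformer's actual CLI; you would substitute the repo's own training entry point and record the dev F1 for each configuration.

```python
from itertools import product

# Recommended ranges from the comment above:
# learning rate in [1e-5, 5e-5], batch size in [8, 32].
LEARNING_RATES = [1e-5, 2e-5, 3e-5, 5e-5]
BATCH_SIZES = [8, 16, 32]

def sweep_configs():
    """Yield every (learning rate, batch size) combination to try."""
    for lr, bs in product(LEARNING_RATES, BATCH_SIZES):
        yield {"lr": lr, "batch_size": bs}

if __name__ == "__main__":
    for cfg in sweep_configs():
        # Hypothetical hook: replace with the repo's training script,
        # e.g. launch a run with these args and log the resulting dev F1,
        # then keep the best configuration for the final test run.
        print(cfg)
```

Running each configuration with a fixed random seed (or averaging several seeds) makes the comparison across configurations fairer, since the ~1-point F1 gap reported above is within typical run-to-run variance.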