intfloat / SimKGC

ACL 2022, SimKGC: Simple Contrastive Knowledge Graph Completion with Pre-trained Language Models

Pre-trained Language Model with bert-base-uncased and bert-large-uncased. #37

Closed. yw3l closed this issue 8 months ago.

yw3l commented 10 months ago

Hi @intfloat, I trained with batch size 64 on a single RTX 3090 and evaluated bert-base-uncased and bert-large-uncased respectively. The results show that bert-base-uncased performs better. During training, I also observed that the loss seems to decrease more slowly with bert-large-uncased. I am wondering whether, with more epochs, bert-large-uncased would catch up to or even surpass bert-base-uncased. Looking forward to your reply.
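
For context, a quick way to see why bert-large-uncased may converge more slowly under the same hyperparameters is to compare the two encoders' sizes. The sketch below is generic Hugging Face `transformers` code, not part of the SimKGC codebase, and assumes that package is installed:

```python
# Sketch: compare the two BERT variants discussed above.
# Assumes the Hugging Face `transformers` package; not SimKGC code.
from transformers import AutoConfig, AutoModel

for name in ["bert-base-uncased", "bert-large-uncased"]:
    config = AutoConfig.from_pretrained(name)
    model = AutoModel.from_config(config)  # random init is enough for counting
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.0f}M params, "
          f"hidden={config.hidden_size}, layers={config.num_hidden_layers}")

# bert-large-uncased has roughly 3x the parameters of bert-base-uncased
# (~340M vs ~110M), so with the same batch size (64) and learning rate
# it often needs a smaller lr and/or more epochs before its loss
# catches up with the base model's.
```

This only illustrates the capacity gap; whether more epochs let bert-large-uncased overtake bert-base-uncased on this task is exactly the empirical question raised above.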