liuwei1206 / LEBERT

Code for the ACL2021 paper "Lexicon Enhanced Chinese Sequence Labelling Using BERT Adapter"

How much memory is needed to load the word_embedding? 32GB doesn't seem to be enough #47

Closed: bushuohua12 closed this issue 2 years ago

liuwei1206 commented 2 years ago

Hi,

It shouldn't need that much. We do not load the whole embedding table; we only use the part that contains the words in our corpus.
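
For illustration, here is a minimal sketch of that kind of filtered loading: stream the pretrained embedding text file line by line and keep only the rows whose word appears in the corpus vocabulary. The file path, embedding dimension, and the `corpus_vocab` set below are hypothetical placeholders, not the repository's actual names.

```python
import numpy as np

def load_filtered_embeddings(embedding_path, corpus_vocab, dim=200):
    """Stream a text-format embedding file ("word v1 v2 ... vD" per line)
    and keep only the vectors for words that occur in corpus_vocab."""
    word2vec = {}
    with open(embedding_path, "r", encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if len(parts) != dim + 1:
                continue  # skip a header line or malformed rows
            word = parts[0]
            if word in corpus_vocab:
                word2vec[word] = np.asarray(parts[1:], dtype=np.float32)
    return word2vec

# Usage (placeholder names): corpus_vocab is the set of lexicon words
# actually matched in your corpus, which is far smaller than the full table.
# corpus_vocab = {"中国", "北京"}
# vecs = load_filtered_embeddings("word_embedding.txt", corpus_vocab, dim=200)
```

Because only the matched words are kept, the resulting table is a small fraction of the full pretrained embedding file, which is why loading should fit comfortably in memory.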