Did you find the solution? @RuiMao1988
Hi @dheerajiiitv, you need to construct the model from scratch:

from sentence_transformers import SentenceTransformer, models

# Use BERT for mapping tokens to embeddings
word_embedding_model = models.BERT('path/to/your/bert/model')

# Apply mean pooling to get one fixed-size sentence vector
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(),
                               pooling_mode_mean_tokens=True,
                               pooling_mode_cls_token=False,
                               pooling_mode_max_tokens=False)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
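Once assembled, the model can be used like any other sentence-transformers model; a quick sanity check (the example sentences here are placeholders, not from the original thread):

sentences = ['这是一个例子。', 'This is another example.']
embeddings = model.encode(sentences)

# One embedding per sentence, with the transformer's hidden size (768 for BERT-base)
for sentence, embedding in zip(sentences, embeddings):
    print(sentence, embedding.shape)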
This code

model = SentenceTransformer('/Users/Terry/chinese_roberta_wwm_ext_pytorch')

only works for models saved in the sentence-transformers format, not for plain BERT checkpoints.
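Applied to the checkpoint from this issue, the recipe above would look like the sketch below, using the path from the original post (note that newer sentence-transformers releases replace the model-specific classes such as models.BERT with models.Transformer):

from sentence_transformers import SentenceTransformer, models

# Wrap the plain BERT-style checkpoint (config.json, pytorch_model.bin, vocab.txt)
word_embedding_model = models.BERT('/Users/Terry/chinese_roberta_wwm_ext_pytorch')

# Mean-pool the token embeddings into one fixed-size sentence vector
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(),
                               pooling_mode_mean_tokens=True,
                               pooling_mode_cls_token=False,
                               pooling_mode_max_tokens=False)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])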
Hi,
Thanks for the nice work.
I downloaded a pre-trained Chinese RoBERTa model. The checkpoint is BERT-like and contains three files: "config.json", "pytorch_model.bin", and "vocab.txt".
I tried to load this model with
model = SentenceTransformer('/Users/Terry/chinese_roberta_wwm_ext_pytorch')
However, I got an error:
KeyError Traceback (most recent call last)