Open Williamlizl opened 4 years ago
Could you post a clean version of your issue?
Do you use the most recent version of sentence transformers?
I use the old version. So in the recent version, can other models not in the list still be loaded?
convert_to_tensor was added in a recent version. If you use an old version, you cannot pass that parameter to encode.
Try: corpus_embeddings = model.encode(corpus)
That works with older versions of sentence transformers.
convert_to_tensor=True — does it mean using tensors to speed things up?
No. It means to convert the output to a tensor.
Without that parameter (and in old versions), you get a list of tensors instead of a single, large tensor.
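Roughly, the difference is just the container shape. A minimal sketch (NumPy arrays standing in for tensors; toy values, not real embeddings):

```python
import numpy as np

# Three toy "sentence embeddings" of dimension 4.
emb_list = [np.ones(4), np.zeros(4), np.full(4, 2.0)]

# Old behaviour / without convert_to_tensor: a Python list of separate vectors.
print(type(emb_list).__name__, len(emb_list))  # list 3

# With convert_to_tensor=True you get one stacked (n_sentences, dim) tensor;
# equivalently, stacking the list yourself:
emb_matrix = np.stack(emb_list)
print(emb_matrix.shape)  # (3, 4)
```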
```python
from sentence_transformers import SentenceTransformer, models

# Use BERT for mapping tokens to embeddings
word_embedding_model = models.BERT('/home/lbc/chinese_wwm_ext_pytorch')
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(),
                               pooling_mode_mean_tokens=True,
                               pooling_mode_cls_token=False,
                               pooling_mode_max_tokens=False)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
```

TypeError: encode() got an unexpected keyword argument 'convert_to_tensor'
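If upgrading sentence-transformers is not an option, one workaround is to call encode() without the keyword and stack the returned list of per-sentence vectors yourself. A sketch, with a dummy stand-in model so it runs without the library (encode_as_matrix and _DummyModel are illustrative names, not part of the package):

```python
import numpy as np

def encode_as_matrix(model, corpus):
    """Call encode() without convert_to_tensor (works on old
    sentence-transformers versions) and stack the resulting list of
    per-sentence vectors into one (n_sentences, dim) array."""
    embeddings = model.encode(corpus)  # list of 1-D vectors
    return np.stack(embeddings)

# Toy stand-in for a SentenceTransformer, so the sketch is self-contained:
class _DummyModel:
    def encode(self, corpus):
        return [np.full(4, float(len(sentence))) for sentence in corpus]

matrix = encode_as_matrix(_DummyModel(), ["a", "bb", "ccc"])
print(matrix.shape)  # (3, 4)
```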