UKPLab / sentence-transformers

State-of-the-Art Text Embeddings
https://www.sbert.net
Apache License 2.0

load other models like the old version, error:encode() got an unexpected keyword argument 'convert_to_tensor' #446


Williamlizl commented 4 years ago

```python
# Use BERT for mapping tokens to embeddings
word_embedding_model = models.BERT('/home/lbc/chinese_wwm_ext_pytorch')

pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(),
                               pooling_mode_mean_tokens=True,
                               pooling_mode_cls_token=False,
                               pooling_mode_max_tokens=False)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
```

```
TypeError: encode() got an unexpected keyword argument 'convert_to_tensor'
```

nreimers commented 4 years ago

Could you post a clean version of your issue?

Do you use the most recent version of sentence transformers?

Williamlizl commented 4 years ago

> Could you post a clean version of your issue?
>
> Do you use the most recent version of sentence transformers?

I use the old version. So in the recent version, can models that are not in the list no longer be loaded?

nreimers commented 4 years ago

convert_to_tensor was added in a recent version. If you use an old version, you cannot pass that parameter to encode.

Try: `corpus_embeddings = model.encode(corpus)`

This works with older versions of sentence-transformers.
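A minimal sketch of a version-tolerant wrapper, assuming only that `model.encode()` returns a list of per-sentence numpy arrays in old versions (the `encode_compat` helper name is hypothetical, not part of the library):

```python
import numpy as np

def encode_compat(model, sentences):
    """Encode sentences into a single 2-D array, regardless of whether
    the installed sentence-transformers version supports convert_to_tensor."""
    try:
        # Newer versions accept convert_to_tensor and return one tensor.
        return model.encode(sentences, convert_to_tensor=True)
    except TypeError:
        # Older versions reject the keyword; stack the returned list ourselves.
        return np.vstack(model.encode(sentences))
```

This way the same call works on both versions, at the cost of returning a numpy array rather than a torch tensor on old installs.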

Williamlizl commented 4 years ago

> convert_to_tensor was added in a recent version. If you use an old version, you cannot pass that parameter to encode.
>
> Try: `corpus_embeddings = model.encode(corpus)`
>
> This works with older versions of sentence-transformers.

`convert_to_tensor=True`: does it mean using tensors to speed things up?

nreimers commented 4 years ago

No. It means the output is converted to a tensor.

Without that parameter (and in old versions), you get a list of tensors instead of a single, large tensor.
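The difference can be sketched like this, using numpy arrays as a stand-in for the tensors the library returns:

```python
import numpy as np

# Without convert_to_tensor (and in old versions): a list of
# per-sentence embedding vectors.
list_of_embeddings = [np.random.rand(768) for _ in range(3)]

# With convert_to_tensor=True (new versions): one stacked
# (num_sentences, embedding_dim) tensor, convenient for batched
# operations such as cosine-similarity search over the whole corpus.
stacked = np.stack(list_of_embeddings)
print(stacked.shape)  # (3, 768)
```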