Inference with cybertron's sentence-transformers/LaBSE model does not give results consistent with Python. Am I missing some step?
The Go code runs fine, but its output does not match the output produced by Python, so the text-similarity scores computed from it differ greatly from Python's and are inaccurate.
The Go code is as follows:

Output:

The Python code is as follows:

from sentence_transformers import SentenceTransformer

sentences = ["That is a happy person"]
model = SentenceTransformer('sentence-transformers/LaBSE')
embeddings = model.encode(sentences)
print(embeddings[0])

Output:
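For comparison, below is a minimal sketch of how LaBSE text encoding is typically driven from Go through cybertron's text-encoding task, adapted from the library's own text-encoding example; it is not the original poster's code. The models directory ("models"), the pooling constant (bert.MeanPooling), and the result accessors (result.Vector.Data().F64()) are assumptions that may need adjusting for your cybertron/spago version. If the pooling applied on the Go side differs from what sentence-transformers uses for LaBSE, the resulting vectors, and any similarity computed from them, will not match.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/nlpodyssey/cybertron/pkg/models/bert"
	"github.com/nlpodyssey/cybertron/pkg/tasks"
	"github.com/nlpodyssey/cybertron/pkg/tasks/textencoding"
)

func main() {
	// Load the model from a local models directory (downloaded on first use).
	// "models" is an assumed path; adjust to your setup.
	m, err := tasks.Load[textencoding.Interface](&tasks.Config{
		ModelsDir: "models",
		ModelName: "sentence-transformers/LaBSE",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer tasks.Finalize(m)

	// Encode the same sentence used in the Python snippet above.
	// The pooling strategy is passed explicitly; bert.MeanPooling is the value
	// used in cybertron's text-encoding example and may not be what
	// sentence-transformers applies for this model.
	result, err := m.Encode(context.Background(), "That is a happy person", int(bert.MeanPooling))
	if err != nil {
		log.Fatal(err)
	}

	// Print the first few components of the embedding for comparison with the
	// Python output.
	fmt.Println(result.Vector.Data().F64()[:10])
}

A quick way to check whether the two sides agree is to compare these leading components against the values printed by print(embeddings[0]) in Python before computing any similarity scores.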