-
Hello, if I just want to convert sentences into sentence vectors, can I use the "paraphrase-MiniLM-L12-v2" model to encode them directly?
-
Referring to https://github.com/facebookresearch/fastText/issues/323#issuecomment-438265793
I would like this question addressed: how can we obtain the sentence vector from the individual word vector…
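Per the linked issue, fastText composes a sentence vector by averaging the L2-normalized word vectors. A minimal NumPy sketch of that averaging step (the toy vectors below are made up, not from a real model):

```python
import numpy as np

# Hypothetical word vectors; in practice these come from a trained fastText model.
word_vecs = {
    "the": np.array([0.1, 0.3, -0.2]),
    "cat": np.array([0.5, -0.1, 0.4]),
    "sleeps": np.array([-0.3, 0.2, 0.1]),
}

def sentence_vector(words, vecs):
    """Average of L2-normalized word vectors (the scheme described in the issue)."""
    normed = []
    for w in words:
        v = vecs[w]
        norm = np.linalg.norm(v)
        if norm > 0:          # guard: a zero vector would give 0/0 -> nan
            v = v / norm
        normed.append(v)
    return np.mean(normed, axis=0)

sv = sentence_vector(["the", "cat", "sleeps"], word_vecs)
print(sv)
```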
-
Hello,
I'm using fastText and, for one document, I get a vector full of -nan(ind) when I use the print-sentence-vector option. However, if I ask for the vector of each word, with print-sentence…
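One possible cause, sketched below as an assumption rather than a confirmed diagnosis: if any word vector in the document has zero norm, normalizing it divides 0 by 0, and the resulting nan poisons the averaged sentence vector:

```python
import numpy as np

np.seterr(invalid="ignore")  # silence the expected 0/0 runtime warning

zero_vec = np.zeros(3)
bad = zero_vec / np.linalg.norm(zero_vec)  # 0/0 -> nan in every component
print(bad)

# The nan propagates through the sentence-level average:
avg = np.mean([bad, np.ones(3)], axis=0)
print(np.isnan(avg).all())
```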
-
I am following the official documentation scripts for semantic caching. In the following code
```
from redisvl.extensions.llmcache import SemanticCache
llmcache = SemanticCache(
    name="llmcache", …
-
In the README.md, it says that one can get the embedding vectors using `fastdna print-sentence-vectors`; however, I note that this is commented out in the source code. I'd like to generate the embeddi…
-
Without the padding, the sentences end up being different sizes and we get stacking errors at data loading time.
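A minimal sketch of right-padding token lists so they stack into one array (padding id 0 is an assumption here; use whatever id your tokenizer reserves):

```python
import numpy as np

# Hypothetical tokenized sentences of different lengths.
batch = [[1, 2, 3], [4, 5], [6]]
max_len = max(len(s) for s in batch)

# Right-pad with 0 so every row has the same length; stacking then succeeds.
padded = np.array([s + [0] * (max_len - len(s)) for s in batch])
print(padded.shape)  # (3, 3)
```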
-
When loading a bin model trained with Python, the sentence embeddings differ from those produced in Python.
**Node.js:**
![image](https://user-images.githubusercontent.com/104032572/209178219-59989a96-1…
-
I am using the same model and the same input sentence,
but I get different sentence vectors depending on whether I set FP16 or FP32.
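Some divergence is expected: half and single precision round differently, so the same computation will not match bit-for-bit across FP16 and FP32. A small NumPy illustration (not the model in question):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)

# Merely casting to float16 already rounds every element, and the reduction
# then accumulates different error, so downstream vectors cannot match exactly.
s16 = x.astype(np.float16).sum(dtype=np.float16)
s32 = x.astype(np.float32).sum(dtype=np.float32)
print(float(s16), float(s32))
```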
-
Hi, I am using ELMo for Japanese. Here is my code:
```
from elmoformanylangs import Embedder
e = Embedder('/Users/tanh/Desktop/alt/JapaneseElmo')
if __name__ == '__main__':
sents = [
…