-
Hello, I'm very interested in your work, and I'm currently attempting to train a general sentence representation model. I have a question: When my training dataset comes from different domains, how ca…
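One common way to combine multi-domain training data is to draw each batch from a single domain, cycling through the domains round-robin, so that in-batch examples (and any in-batch negatives) stay in-domain. A minimal sketch, with hypothetical dataset names:

```python
import itertools

def round_robin_batches(domain_datasets, batch_size):
    """Yield (domain, batch) pairs, drawing each batch from one domain
    and cycling through the domains in round-robin order."""
    iters = {name: itertools.cycle(examples)
             for name, examples in domain_datasets.items()}
    for name in itertools.cycle(domain_datasets):
        yield name, [next(iters[name]) for _ in range(batch_size)]

# Hypothetical multi-domain data: news and forum sentences.
data = {
    "news": ["sentence n1", "sentence n2", "sentence n3"],
    "forum": ["sentence f1", "sentence f2"],
}
batches = list(itertools.islice(round_robin_batches(data, batch_size=2), 4))
```

This is only one strategy; whether to keep batches domain-pure or mix domains within a batch depends on the training objective.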
-
Hello! Thank you for your tool and library, it's awesome!
I apologise in advance for my basic question, but I can't solve it on my own.
First, I'm making a model with .fit…
-
The suggested [use of Bliss symbols](https://w3c.github.io/personalization-semantics/content/index.html#symbol-example) seems intriguing (I acknowledge its potential), but it also seems very immature. As one…
-
I tried to benchmark this tool on the Mass Dataset in the same setting as described in the paper (Distributed Representations of Sentences and Documents). Instead of testing it directly, I had create…
-
I noticed that we sometimes use full capitalization for headers, e.g. the [representation page](https://cryptimeleon.github.io/docs/representations.html), and sometimes regular sentence capitalization…
-
# title
An efficient framework for learning sentence representations
# notes
The sentence representation is learned by training the model to pick out, from a set of candidate sentences, the one that follows a given sentence.
![image](https://user-images.githubusercontent.com/3295342/383…
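The objective described above can be sketched as scoring each candidate sentence against the context sentence and taking a softmax over the candidate set (a simplified illustration of the paper's classifier; the vectors below are toy stand-ins for encoder outputs):

```python
import numpy as np

def next_sentence_probs(context_vec, candidate_vecs):
    """Score each candidate as the 'next sentence' via dot product with
    the context embedding, then softmax over the candidate set."""
    logits = candidate_vecs @ context_vec
    exp = np.exp(logits - logits.max())   # numerically stabilised softmax
    return exp / exp.sum()

# Toy embeddings: candidate 0 is most similar to the context.
context = np.array([1.0, 0.0])
candidates = np.array([[0.9, 0.1],
                       [0.0, 1.0],
                       [-0.5, 0.2]])
probs = next_sentence_probs(context, candidates)
```

Training then maximises the probability of the true next sentence, which is what pushes adjacent sentences toward similar embeddings.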
-
Can you give me some pointers on how to modify the code to use the final hidden state of the LSTM as an embedding/representation of a sequence of words?
What I want to achieve is to train the langu…
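Conceptually, the change is to run the LSTM over the whole word sequence and keep only the final hidden state as the fixed-size representation. A framework-agnostic sketch in NumPy (parameter shapes and gate ordering here are assumptions; in PyTorch, for example, you would instead take `h_n` from `nn.LSTM`):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_embedding(inputs, Wx, Wh, b):
    """Run a single-layer LSTM over `inputs` (seq_len x input_dim) and
    return the FINAL hidden state as the sequence embedding.
    Gate blocks in Wx/Wh/b are stacked as [input, forget, cell, output]."""
    hidden = Wh.shape[1]
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in inputs:
        z = Wx @ x + Wh @ h + b
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h  # final hidden state = embedding of the whole sequence

rng = np.random.default_rng(0)
seq = rng.normal(size=(5, 3))   # 5 words, 3-dim word vectors
Wx = rng.normal(size=(16, 3))   # 4 gates x 4 hidden units
Wh = rng.normal(size=(16, 4))
b = np.zeros(16)
embedding = lstm_embedding(seq, Wx, Wh, b)
```

The embedding dimension equals the hidden size (4 here), independent of sequence length.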
-
AMR files usually start with metadata lines giving an id and the sentence before the actual PENMAN graph.
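Such `# ::key value` header lines can be split off from the graph with a small sketch like this (plain Python, not tied to any particular AMR library; the sample graph is only illustrative, and lines carrying several `::key` fields, like a save-date line, would need extra splitting):

```python
def split_amr_block(block):
    """Separate leading '# ::key value' metadata lines from the
    PENMAN graph text in one AMR block."""
    meta = {}
    graph_lines = []
    for line in block.splitlines():
        if line.startswith("# ::"):
            key, _, value = line[len("# ::"):].partition(" ")
            meta[key] = value  # NB: keeps any extra '::key' fields in the value
        else:
            graph_lines.append(line)
    return meta, "\n".join(graph_lines).strip()

meta, graph = split_amr_block(
    "# ::id any-ID-001.1\n"
    "# ::snt the cat is sleeping\n"
    "(s / sleep-01 :ARG0 (c / cat))"
)
```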
```
# ::id any-ID-001.1
# ::snt the cat is sleeping
# ::save-date Sat Jul 20, 2024 ::file test_0001_2.txt…
```
-
Hi, can I build an embedding model for chemical structures? I don't want to use a graph-based method directly; I want to start with the IUPAC representation (text data).
How can I leverage sentence transformers…
-
BERT stands for 'Bidirectional Encoder Representations from Transformers', so how does the Transformer reflect 'bidirectional', and why doesn't GPT?
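One concrete way to see the difference is the self-attention mask: BERT's encoder lets every position attend to every other position in the sentence (bidirectional), while GPT's decoder applies a causal mask so each position attends only to itself and earlier positions. A minimal sketch:

```python
import numpy as np

def attention_masks(seq_len):
    """Return (bidirectional, causal) boolean masks; entry [q, k] is
    True when query position q may attend to key position k."""
    bidirectional = np.ones((seq_len, seq_len), dtype=bool)  # BERT-style encoder
    causal = np.tril(bidirectional)                          # GPT-style decoder
    return bidirectional, causal

bi, causal = attention_masks(4)
```

With the full mask, each token's representation is conditioned on both left and right context; the causal mask is what lets GPT be trained as a left-to-right language model.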