-
Currently the `repr(doc)` of a doc produces the concatenation of the CoNLL-U representations of all sentences, but without the comments indicating metadata such as sentence ids.
A method to save to a CoNL…
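For illustration, a serializer along these lines could emit the metadata comments before each sentence block. The `doc`/`sentence` API below is hypothetical, not this project's actual classes:

```python
# A minimal sketch (hypothetical doc/sentence API, not this project's
# actual classes): emit CoNLL-U with "# sent_id = ..." and "# text = ..."
# comment lines before each sentence's token block.
def to_conllu(doc) -> str:
    blocks = []
    for sent in doc.sentences:
        lines = [f"# sent_id = {sent.sent_id}", f"# text = {sent.text}"]
        lines += [token.to_conllu_line() for token in sent.tokens]
        blocks.append("\n".join(lines))
    # Sentences are separated by a blank line, per the CoNLL-U format.
    return "\n\n".join(blocks) + "\n"
```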
-
## In a nutshell
A study that obtains high-quality sentence representations by building on BERT training. A sentence representation is produced via Encode => Average Pooling => MLP, then concatenated to the input of the next sentence for masked-LM training (the sentence-representation encoder and the training encoder share weights). Predicting masked tokens from neighboring-sentence information makes it similar to a sentence-level Skip-Thought. It achieves strong accuracy on multilingual NLP tasks.
![image]…
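A rough sketch of the Encode => Average Pooling => MLP step (illustrative PyTorch/Transformers code, not the paper's implementation; the model name and MLP shape are placeholders):

```python
# A minimal sketch of the sentence-representation pipeline described above:
# encode with BERT, average-pool over tokens, then apply an MLP.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")
mlp = torch.nn.Sequential(torch.nn.Linear(768, 768), torch.nn.Tanh())

inputs = tok(["A sentence to embed."], return_tensors="pt")
hidden = encoder(**inputs).last_hidden_state    # (1, seq_len, 768)
mask = inputs["attention_mask"].unsqueeze(-1)   # (1, seq_len, 1)
pooled = (hidden * mask).sum(1) / mask.sum(1)   # average pooling over tokens
sent_repr = mlp(pooled)                         # final sentence vector
```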
-
Hi author,
Sorry to bother you again!
Regarding the coarse-grained module: the paper states that the goal of this module is to capture image-target relevance, but I have some confusions…
-
I am trying to understand where exactly the dropout is applied to get two representations of the same input text in this example:
https://github.com/UKPLab/sentence-transformers/blob/master/examples/unsuper…
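For context, a minimal sketch of the idea (assuming the unsupervised SimCSE recipe; this is not the linked example's exact code): the same sentence is passed through the encoder twice with dropout active, so each pass samples a different dropout mask and yields a different view of the same input.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.train()  # keep dropout ON; this is where the two views come from

inputs = tok("the same sentence", return_tensors="pt")
z1 = model(**inputs).last_hidden_state[:, 0]  # first pass ([CLS] token)
z2 = model(**inputs).last_hidden_state[:, 0]  # second pass, new dropout mask
print(torch.allclose(z1, z2))  # False: the two representations differ
```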
-
Add a graphical representation of the dependency tree. There is JS code that does this in korp-frontend, but integrating it here might not be so trivial. I haven't even checked the license.
Doing…
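For comparison, a minimal sketch of one possible renderer, spaCy's displaCy (an alternative to the korp-frontend JS mentioned above, not its code; the input here comes from spaCy's own parser):

```python
# Render a dependency tree as SVG using spaCy's built-in displaCy visualizer.
import spacy
from spacy import displacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The quick brown fox jumps over the lazy dog.")
svg = displacy.render(doc, style="dep")  # returns SVG markup as a string
with open("tree.svg", "w", encoding="utf-8") as f:
    f.write(svg)
```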
-
I've set the random seed when I fit my topic model, and I'm getting reproducible results. I'm using the following:
```
def fit_reduce_model(rep_model, docs):
    """
    Defines all component mo…
```
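For reference, a common way to pin down reproducibility in BERTopic-style pipelines (assumed here from the naming; the component choices below are illustrative, not the poster's actual code) is to fix UMAP's `random_state`, since UMAP is the usual source of run-to-run variation:

```python
from umap import UMAP
from hdbscan import HDBSCAN
from bertopic import BERTopic

def build_topic_model(seed: int = 42) -> BERTopic:
    # Fixing UMAP's random_state makes fits repeatable
    # (at the cost of some parallelism inside UMAP).
    umap_model = UMAP(n_components=5, random_state=seed)
    hdbscan_model = HDBSCAN(min_cluster_size=10)
    return BERTopic(umap_model=umap_model, hdbscan_model=hdbscan_model)
```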
-
I noticed that you use the encoder-decoder model T5 rather than a decoder-only model as the source LLM, because it can "easily get each input doc's hidden_states separately".
If I use a decoder-only model, get eac…
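A minimal sketch of one way to do this with a decoder-only model (a hypothetical setup using GPT-2 via Transformers, not the paper's code): encode each doc in its own forward pass, so hidden states never mix across documents.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

docs = ["first document", "second document"]
states = []
with torch.no_grad():
    for doc in docs:
        inputs = tok(doc, return_tensors="pt")
        out = model(**inputs)
        # last_hidden_state: (1, seq_len, hidden); one tensor per doc,
        # so there is no cross-document attention to disentangle.
        states.append(out.last_hidden_state)
```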
-
In the [SBERT repository](https://www.sbert.net/examples/training/adaptive_layer/README.html), I found the adaptive layers method referenced in this paper: [_**ESE**: Espresso Sentence Embeddings_](ht…
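For reference, a minimal sketch of how the adaptive-layer training is wired up in sentence-transformers (assuming a version >= 2.4, which ships `AdaptiveLayerLoss`; the base model and inner loss below are illustrative choices, not prescribed by the paper):

```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("microsoft/mpnet-base")
base_loss = losses.MultipleNegativesRankingLoss(model)
# Wraps the base loss so intermediate transformer layers are also trained
# to produce useful embeddings (the "adaptive layer" idea from ESE).
loss = losses.AdaptiveLayerLoss(model=model, loss=base_loss)
```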
-
# Semantic Textual Similarity
## Task Objective
Evaluate the models' semantic understanding by comparing their predicted similarities with human-labeled sentence similarity scores. The task is part of the metatask http…
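A minimal sketch of the usual STS scoring protocol (illustrative code, not this benchmark's official harness; the sentence pairs and gold labels are made up): Spearman correlation between model cosine similarities and the human labels.

```python
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
pairs = [
    ("A man is playing a guitar.", "A person plays an instrument."),
    ("A dog runs through a field.", "The stock market fell today."),
    ("Two kids are playing soccer.", "Children are kicking a ball."),
]
gold = [4.0, 0.2, 4.5]  # hypothetical human similarity labels (0-5 scale)

emb1 = model.encode([a for a, _ in pairs])
emb2 = model.encode([b for _, b in pairs])
preds = [util.cos_sim(e1, e2).item() for e1, e2 in zip(emb1, emb2)]
print(spearmanr(preds, gold).correlation)  # the standard STS metric
```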
-
Hi, I wonder why LSTM- or RNN-based methods cannot tackle bag-level relation extraction. It seems they can only handle sentence-level relation classification, but I think CNN or LSTM is only the e…
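For context, a minimal sketch (hypothetical PyTorch code) of bag-level relation extraction with selective attention over sentence encodings, using an LSTM as the sentence encoder to illustrate that the encoder choice is orthogonal to the bag-level aggregation:

```python
import torch
import torch.nn as nn

class BagRE(nn.Module):
    def __init__(self, vocab_size, emb_dim=50, hidden=100, n_rel=10):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.att = nn.Linear(hidden, 1)   # attention over sentences in a bag
        self.cls = nn.Linear(hidden, n_rel)

    def forward(self, bag):  # bag: (n_sents, seq_len) token ids
        _, (h, _) = self.lstm(self.emb(bag))   # h: (1, n_sents, hidden)
        s = h.squeeze(0)                       # per-sentence encodings
        w = torch.softmax(self.att(s), dim=0)  # attention weights over bag
        bag_repr = (w * s).sum(dim=0)          # weighted bag representation
        return self.cls(bag_repr)              # relation logits for the bag
```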