-
https://aclanthology.org/N16-1167/
-
Right now we are using `TfidfVectorizer` with its default options (essentially word 1-grams). We should try a few different options and see how the accuracy changes:
- [ ] word 1-gram
- [ ] word 2-gram…
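The checklist above can be sketched as a loop over `ngram_range` settings; the two sample documents below are placeholders, not the project's actual corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat on the mat", "the dog sat on the log"]

# Vocabulary size for each setting; adding word 2-grams enlarges the
# feature space, which is often where accuracy differences come from.
sizes = {}
for ngram_range in [(1, 1), (1, 2)]:
    vec = TfidfVectorizer(ngram_range=ngram_range)
    X = vec.fit_transform(docs)
    sizes[ngram_range] = X.shape[1]
    print(ngram_range, X.shape[1])
```

`ngram_range=(1, 2)` keeps the 1-grams and adds 2-grams on top, so the comparison is cumulative rather than 2-grams alone.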
-
I used tfjs-tsne successfully on word embeddings from about 10 languages derived from https://fasttext.cc/docs/en/pretrained-vectors.html
But the Swedish one had all but a few of 20,000 items with …
-
Great work!!!!
I was wondering if it's possible to replicate/extend the same work for ELMo embeddings.
With tensorflow_hub, calling ELMo is no more complicated than:
```python
def build_elmo():
…
```
-
For direct word embeddings, the output made sense:
```python
# natural language modeling embeddings
get_similar_words("horrible", word_embeddings)
# horrible terrible awful bad acting
# …
```
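The implementation of `get_similar_words` isn't shown in the post; a minimal sketch under the usual assumption (nearest neighbours by cosine similarity, with `word_embeddings` as a dict of word → vector) could look like this, using toy 2-d vectors as placeholders for real embeddings:

```python
import numpy as np

def get_similar_words(query, word_embeddings, topn=5):
    """Return the topn words closest to `query` by cosine similarity.

    Assumes `word_embeddings` maps word -> 1-D vector.
    """
    words = [w for w in word_embeddings if w != query]
    q = np.asarray(word_embeddings[query], dtype=float)
    sims = []
    for w in words:
        v = np.asarray(word_embeddings[w], dtype=float)
        sims.append(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
    order = np.argsort(sims)[::-1][:topn]
    return [words[i] for i in order]

# Toy vectors (placeholders, not real embeddings):
emb = {
    "horrible": [1.0, 0.1],
    "terrible": [0.9, 0.2],
    "great":    [-1.0, 0.1],
}
print(get_similar_words("horrible", emb, topn=1))  # -> ['terrible']
```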
-
Hello,
I was reading the recent SimCSE paper, which referred to your paper when reporting the average GloVe embedding results for the STS benchmarks. I originally created the issue in their [reposit…
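For context, "average GloVe embedding" on STS usually means: embed each sentence as the mean of its word vectors, then score pairs by cosine similarity. A minimal numpy sketch of that baseline, with toy 2-d vectors standing in for real 300-d GloVe vectors:

```python
import numpy as np

# Toy word vectors standing in for GloVe (placeholders only).
glove = {
    "a": np.array([0.1, 0.2]),
    "good": np.array([0.8, 0.4]),
    "movie": np.array([0.3, 0.9]),
}

def avg_embedding(sentence, vectors):
    """Sentence embedding = mean of its word vectors (OOV words skipped)."""
    vs = [vectors[w] for w in sentence.lower().split() if w in vectors]
    return np.mean(vs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

e1 = avg_embedding("a good movie", glove)
e2 = avg_embedding("good movie", glove)
print(cosine(e1, e2))
```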
-
```python
# Use BERT for mapping tokens to embeddings
word_embedding_model = models.BERT('/home/lbc/chinese_wwm_ext_pytorch')
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimensi…
```
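The `Pooling` model in sentence-transformers defaults to mean pooling over token embeddings, ignoring padding. A minimal numpy sketch of that operation (the token embeddings and mask below are made-up placeholders, not BERT outputs):

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Mean-pool token embeddings, ignoring padding positions.

    token_embeddings: (seq_len, dim); attention_mask: (seq_len,) of 0/1.
    """
    mask = attention_mask[:, None].astype(float)    # (seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=0)  # (dim,)
    counts = mask.sum()                             # number of real tokens
    return summed / counts

# Two real tokens plus one padding position:
tokens = np.array([[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]])
mask = np.array([1, 1, 0])
print(mean_pool(tokens, mask))  # -> [2. 3.]
```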
-
Post questions here for this week's exemplary readings:
2. Aceves, P. & Evans, J. 2023. “[Mobilizing Conceptual Spaces: How Word Embedding Models Can Inform Measurement and Theory Within Organizati…
-
Hi there, thank you very much for providing the code!
I am new to diffusion models, so I apologize in advance if I ask a dumb question.
In [this line](https://github.com/XiangLi1999/Diffusion-LM…
-
Hi,
I noticed that in `main.py`, you zero out the embeddings for special words if they are absent from the vocabulary:
```python
# zero out the embeddings for padding and other special words if they ar…
```
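Zeroing out rows of an embedding matrix for padding and special tokens can be sketched as follows; the vocabulary size, dimension, and token indices below are made up for illustration and are not taken from `main.py`:

```python
import numpy as np

# Toy embedding matrix: vocabulary of 5 tokens, dimension 3.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5, 3))

# Hypothetical indices of padding / special tokens to silence.
special_ids = [0, 4]
embeddings[special_ids, :] = 0.0

print(embeddings[0], embeddings[4])  # both rows are now all zeros
```

Zeroing (rather than leaving random values) keeps padding positions from contributing anything when the embeddings are summed or averaged downstream.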