-
The gap between training accuracy and test accuracy is large, which points to overfitting. The problem seems to stem from the very basic pre-processing: we are just indexing each unique word and …
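The kind of naive word-indexing the comment describes might be sketched as follows (the function and variable names here are illustrative, not the actual tutorial code):

```python
# Minimal sketch of "just index each unique word": every distinct token
# gets the next integer id, with no frequency cutoff or OOV handling.
def build_index(docs):
    index = {}
    for doc in docs:
        for word in doc.lower().split():
            index.setdefault(word, len(index))
    return index

docs = ["The cat sat", "the dog ran"]
index = build_index(docs)
encoded = [[index[w] for w in d.lower().split()] for d in docs]
print(index)    # {'the': 0, 'cat': 1, 'sat': 2, 'dog': 3, 'ran': 4}
print(encoded)  # [[0, 1, 2], [0, 3, 4]]
```

With no vocabulary pruning, rare words each get their own id, which tends to encourage memorization of the training set.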
-
Hi,
Apologies if I overlooked it, but how can I encode unseen test documents using the previously trained embeddings (without retraining)?
Thanks
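One common approach, sketched here with an assumed training-time vocabulary and learned embedding matrix (both hypothetical, not from the tutorial): look up each token of the new document in the frozen vocabulary, map unknown words to an OOV index, and average the corresponding embedding rows.

```python
import numpy as np

# Hypothetical trained artifacts: the vocabulary built during training
# and the learned embedding matrix (rows align with vocabulary indices).
vocab = {"<pad>": 0, "<oov>": 1, "the": 2, "cat": 3, "sat": 4}
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(len(vocab), 8))  # (vocab_size, dim)

def encode_document(text, vocab, embedding_matrix):
    """Encode an unseen document by averaging the trained embeddings of
    its tokens; words outside the vocabulary fall back to the <oov> row."""
    ids = [vocab.get(tok, vocab["<oov>"]) for tok in text.lower().split()]
    return embedding_matrix[ids].mean(axis=0)

vec = encode_document("The cat ran", vocab, embedding_matrix)
print(vec.shape)  # (8,)
```

No retraining is involved: the vocabulary and embedding matrix are read-only, so any document can be encoded after the fact.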
-
## URL(s) with the issue:
https://www.tensorflow.org/tutorials/text/word2vec
## Description of issue (what needs changing):
I ran the word2vec code (unchanged) from the tutorial, but the…
-
**Is your feature request related to a problem? Please describe.**
This is a solution looking for a problem, but the results might be interesting.
**Describe the solution you'd like**
https://…
-
# Positional encoding
The positional encoding from the paper _Attention Is All You Need_ is required in order to contribute to the **_Transformers_** milestone.
## References
* [Attention Is A…
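A minimal sketch of the sinusoidal positional encoding from the paper, assuming the standard formulation (sin on even dimensions, cos on odd dimensions):

```python
import numpy as np

def positional_encoding(max_len, d_model):
    """Sinusoidal positional encoding from "Attention Is All You Need":
    PE[pos, 2i]   = sin(pos / 10000**(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000**(2i / d_model))"""
    pos = np.arange(max_len)[:, None]        # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]     # (1, d_model // 2)
    angles = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)             # even dimensions
    pe[:, 1::2] = np.cos(angles)             # odd dimensions
    return pe

pe = positional_encoding(50, 16)
print(pe.shape)  # (50, 16)
```

The resulting matrix is added to the token embeddings before the first attention layer; because each dimension is a sinusoid of a different wavelength, relative positions are expressible as linear functions of the encodings.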
-
Hi,
I have a question about data format. If an edge or a node is composed of multiple words, what should be used to concatenate them? A hyphen?
Thanks!
-
For sequence encoding tasks like NLI and NMT, the encoder produces a vector representation of the sentence. My question is: are we supposed to use pre-trained word embeddings to obtain these sentence embed…
-
Try:
- [ ] [SGT](https://github.com/cran2367/sgt/tree/master/python)
- [x] [Word2Vec](https://www.tensorflow.org/tutorials/text/word2vec)
-
I know little about sentence-embedding models; I just want a decent sentence embedding to make my RAG pipeline work, so I am surprised to see that for **all** Pretrained Models listed in [sbert](https://sbe…