-
Have you thought of adding ELMo word embeddings to the selection? It outperforms both GloVe and Word2Vec embeddings.
Link to ELMo: https://github.com/allenai/allennlp/blob/master/tutorials/how_to/e…
-
Hello, I have a problem:
```
reviews = list(review_data[2])
reviews = reviews[:5000]  # only consider the first 5k reviews
```
IndexError: boolean index did not match indexed array along dimension 0; dimen…
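That IndexError usually means a boolean mask's length no longer matches the array it indexes, e.g. the mask was built from the full dataset but applied after slicing to 5k rows. A minimal sketch of the mismatch and the fix, with toy data:

```python
import numpy as np

# Toy stand-in: 5 reviews, but a mask built for only 3 of them.
reviews = np.array(["a", "b", "c", "d", "e"])
mask = np.array([True, False, True])  # length 3 != length 5

try:
    reviews[mask]  # raises the "boolean index did not match" IndexError
except IndexError as e:
    print("IndexError:", e)

# Fix: derive the mask from the same array you index, so lengths agree.
mask = np.char.str_len(reviews) > 0  # any per-element condition works
print(reviews[mask].shape)  # (5,)
```

If you slice the reviews first (`reviews[:5000]`), slice the mask the same way before indexing.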
-
Given a directory of articles in the desired input format (see #2), generate the word embeddings to be used in document ranking.
- [ ] Decide which software package to use.
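On the package question, gensim's `Word2Vec` is a common choice for training embeddings from a corpus of articles. Whichever tool produces the vectors, the downstream ranking step might look like this sketch (toy embeddings and hypothetical function names, not this project's actual code):

```python
import numpy as np

# Toy word embeddings; in practice these would come from e.g. gensim Word2Vec.
EMB = {
    "cat": np.array([1.0, 0.0]),
    "dog": np.array([0.9, 0.1]),
    "tax": np.array([0.0, 1.0]),
}

def doc_vector(tokens):
    """Average the embeddings of known tokens (zero vector if none known)."""
    vecs = [EMB[t] for t in tokens if t in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

def rank(query_tokens, docs):
    """Rank documents by cosine similarity to the query vector."""
    q = doc_vector(query_tokens)
    def cos(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0
    scored = [(cos(q, doc_vector(d)), i) for i, d in enumerate(docs)]
    return sorted(scored, reverse=True)

docs = [["tax", "law"], ["cat", "dog"]]
print(rank(["cat"], docs))  # the pet document (index 1) ranks first
```

Averaging word vectors is the simplest document representation; doc2vec or TF-IDF-weighted averages are natural next steps.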
-
@angelo337 what about for Spanish?
-
Hello,
I need to download "lexsub_context_embeddings.txt" and "lexsub_word_embeddings.txt" for lexical substitution ranking, but the link is not working:
www.cs.biu.ac.il/nlp/resources/download…
-
Hello, I have trained a BERT model with vocab_size 21128, and I noticed that in BLIP the vocab_size should be 21130 (including 2 additional tokens: DEC, ENC). However, this difference caused a shape conflict …
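One way past such a vocab-size mismatch is to expand the checkpoint's token-embedding matrix before loading it (with HuggingFace models, `model.resize_token_embeddings(...)` serves a similar purpose). A minimal numpy sketch, assuming the two extra rows can be freshly initialized; `expand_rows` is a hypothetical helper, not BLIP code:

```python
import numpy as np

def expand_rows(weight, new_rows, init_std=0.02, seed=0):
    """Append randomly initialized rows so an (old_vocab, dim) embedding
    matrix matches a larger target vocab; existing rows are unchanged."""
    old_rows, dim = weight.shape
    if new_rows < old_rows:
        raise ValueError("target vocab is smaller than the checkpoint's")
    rng = np.random.default_rng(seed)
    extra = rng.normal(0.0, init_std, size=(new_rows - old_rows, dim))
    return np.vstack([weight, extra])

# Toy scale: grow a 10-token embedding to 12 (stand-in for 21128 -> 21130).
w = np.ones((10, 4))
w2 = expand_rows(w, 12)
print(w2.shape)                 # (12, 4)
print(np.allclose(w2[:10], w))  # True: original embeddings preserved
```

The same expansion would need to be applied to every tensor tied to the vocab dimension (e.g. the output/decoder projection, if weights are not tied).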
-
I'm running insanely-fast-whisper in an environment where low latency is crucial. As soon as a .wav file is created, it needs to be transcribed immediately. Every time I run:
```
D:\InsanelyFastWhisper>insane…
```
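A common low-latency pattern is to watch the output directory and hand each new .wav off the moment it appears, rather than launching the tool manually. A minimal polling sketch; the `handle` callback is a placeholder (in practice it might launch the insanely-fast-whisper CLI via `subprocess`), and the poll interval is an assumption:

```python
import time
from pathlib import Path

def watch_for_wavs(directory, handle, poll_s=0.2, max_polls=None):
    """Poll `directory` and call `handle(path)` exactly once per new .wav.
    `handle` is a placeholder callback; a real one could invoke the
    transcriber via subprocess on the newly created file."""
    seen = set()
    polls = 0
    while max_polls is None or polls < max_polls:
        for path in sorted(Path(directory).glob("*.wav")):
            if path not in seen:
                seen.add(path)
                handle(path)  # transcribe immediately
        polls += 1
        time.sleep(poll_s)
    return seen
```

For sub-second latency, an event-based watcher (e.g. the `watchdog` package) avoids the polling delay entirely; either way, keeping the model loaded in one long-running process beats relaunching the CLI per file.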
-
Thank you for sharing such an interesting idea!
> Since there is no longer a modality gap in the embeddings, we can transfer the single modality representation capabilities to multimodal embeddi…
-
I am trying to work through your 'Clustering and Visualizing Documents using Word Embeddings' tutorial by Reades and Williams. I appreciate your making the Jupyter notebooks available, but I am hoping that I can run di…