-
Hi Edward,
First of all, thank you for creating and open-sourcing this great piece of work!
While using Contextual Word Embeddings - say BERT, DistilBERT, when I pass just one word and s…
-
If you currently use word-level embeddings (e.g. fastText), `whatlies` supports embeddings for sentences by _summing_ the individual word embeddings. While this is a reasonable default behaviour, it's al…
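A minimal sketch of that summing behaviour, using a toy word-vector lookup in place of a real fastText model (the table, function name, and vectors below are illustrative, not the actual `whatlies` API):

```python
# Toy illustration of building a sentence embedding by summing word vectors.
# The lookup table and its 2-d vectors are invented for this example;
# a real setup would query a trained fastText model instead.
word_vectors = {
    "the": [0.1, 0.3],
    "cat": [0.7, 0.2],
    "sat": [0.4, 0.9],
}

def sentence_embedding(sentence):
    """Sum the per-word vectors, skipping out-of-vocabulary words."""
    total = [0.0, 0.0]
    for word in sentence.lower().split():
        vec = word_vectors.get(word)
        if vec is not None:
            total = [t + v for t, v in zip(total, vec)]
    return total

print(sentence_embedding("the cat sat"))  # approximately [1.2, 1.4]
```

One drawback of plain summation is that longer sentences get larger-norm vectors; averaging instead of summing is a common variant.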
-
Hello!
Since I'm running on a server where I don't have write permission for the default cache directory, I always get a Permission Denied error from it. Do you have a solution to cus…
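If the cache in question is the Hugging Face one, it can usually be redirected to a writable location either via environment variables or per call. A short sketch (the `/scratch/...` path is illustrative):

```python
import os

# Point the Hugging Face cache at a directory you can write to,
# *before* importing transformers. HF_HOME is the umbrella variable;
# TRANSFORMERS_CACHE targets the model cache specifically.
os.environ["HF_HOME"] = "/scratch/yourname/hf_cache"  # illustrative path

# Alternatively, most from_pretrained calls accept cache_dir directly:
# from transformers import AutoModel
# model = AutoModel.from_pretrained("bert-base-uncased",
#                                   cache_dir="/scratch/yourname/hf_cache")
print(os.environ["HF_HOME"])
```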
-
Hello!
I am using FlauBERT to generate word embeddings as part of a study on word sense disambiguation (WSD).
The FlauBERT tokenizer does not recognize a significant number of words in my corpus…
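When a subword tokenizer splits a word it doesn't know into pieces, one common workaround for word-level tasks like WSD is to mean-pool the piece embeddings back into a single word vector. A toy sketch (the tokenization and vectors are invented for illustration; a real run would come from the FlauBERT tokenizer and model):

```python
# Toy sketch: recover one vector for a word the tokenizer splits into
# subword pieces, by averaging the piece embeddings.
# These 2-d piece vectors are made up for the example.
subword_vectors = {
    "anti": [1.0, 0.0],
    "##constitutionnellement": [0.0, 1.0],
}

def word_embedding(pieces):
    """Mean-pool the embeddings of a word's subword pieces."""
    vecs = [subword_vectors[p] for p in pieces]
    n = len(vecs)
    return [sum(dim) / n for dim in zip(*vecs)]

print(word_embedding(["anti", "##constitutionnellement"]))  # [0.5, 0.5]
```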
-
### Requested actions
I kindly ask you to provide an export of a sample of your cluster log files (we discussed this with @durandom on a call). I lead a small Log Anomaly Detection team at AIC, C…
-
A clear and concise description of what you want to know.
Hello! Thank you for the excellent library, in particular the zero shot model. I am constantly impressed by the detailed and useful work co…
-
Hello, in the "Create Contextual String Embeddings with Flair" step, the following code
charlm_embedding_backward = FlairEmbeddings('your path to /news-backward-0.4.1.pt')
points to a file that is empty in my local path. Is there somewhere else I can download this file from?
-
### [Stanford AI Index Report](https://hai.stanford.edu/research/ai-index-2022) ###
- [Ch.1: Research & Development](https://github.com/jungwoo-ha/WeeklyArxivTalk/issues/45#issuecomment-1079926676) (…
-
Try out
- masking/augmentation
- established techniques (TS-DAE, SimCSE)
Use a more or less standardized text-similarity dataset (e.g. from the benchmarks used in the papers), and try to compare …
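Comparing embedding methods on such a benchmark usually means correlating cosine similarities of sentence pairs with the gold similarity scores. A self-contained sketch with toy data and an inline Spearman correlation (no tie handling, which is fine for an illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def spearman(xs, ys):
    """Spearman rank correlation; assumes no ties (enough for a sketch)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Toy benchmark rows: (embedding of sent 1, embedding of sent 2, gold score).
# Real evaluations would use model embeddings and a dataset like STS-B.
pairs = [
    ([1.0, 0.0], [1.0, 0.1], 4.8),
    ([1.0, 0.0], [0.0, 1.0], 0.2),
    ([0.5, 0.5], [0.6, 0.4], 4.0),
]
sims = [cosine(a, b) for a, b, _ in pairs]
gold = [g for _, _, g in pairs]
print(spearman(sims, gold))  # 1.0 on this toy data
```

Running each candidate method (masking/augmentation, TS-DAE, SimCSE) through the same scoring loop gives directly comparable numbers.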
-
Hi @lucidrains,
Thanks for this fantastic trove of transformers