-
It would be nice to add this metric, because Word Mover's Distance uses information geometry and ELMo uses contextual embeddings. I'm not familiar with all of your metrics, so please correct me if there …
-
**Feature**
Suppose one has at hand a textual corpus split into distinct time periods. One may want to analyze how word embeddings change over time.
**Describe the solution you'd like**…
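As a minimal sketch of the idea (assuming one embedding model is trained per time period; the vector-space alignment step, e.g. Procrustes, is omitted here), drift for a word can be measured as the cosine distance between its vectors from two periods:

```python
import numpy as np

def cosine_drift(vec_a: np.ndarray, vec_b: np.ndarray) -> float:
    """Cosine distance between two embeddings of the same word
    taken from models trained on different time periods."""
    sim = np.dot(vec_a, vec_b) / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b))
    return 1.0 - float(sim)

# Hypothetical vectors for the same word in two time periods.
v_1990 = np.array([1.0, 0.0, 0.0])
v_2020 = np.array([0.0, 1.0, 0.0])
print(cosine_drift(v_1990, v_1990))  # identical vectors -> 0.0 drift
print(cosine_drift(v_1990, v_2020))  # orthogonal vectors -> 1.0 drift
```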
-
### Describe the bug
Thank you for developing and maintaining this invaluable module!
We would like to train a multi-task model on two NER tasks that shares a transformer word embedding.
We fine-t…
-
First of all, thanks for sharing this work.
I tested this code against a reference implementation, but the results were not what I expected. Regarding similarity, it is far below InstantID's performance.
Furtherm…
-
I've been using sentence-transformers for a little while and I love it - thanks for your great work! Out of the box I've been getting the best results for sentence similarity tasks with the pre-traine…
-
We are able to download the granite model using the command below:
ilab download --repository instructlab/granite-7b-lab-GGUF --release main --filename granite-7b-lab-Q4_K_M.gguf
ilab generate is worki…
-
Suggest creating a GitHub Actions workflow to run pytest on commits and pull requests.
Steps:
1. Create a new file .github/workflows/test.yml
2. Add configuration to test.yml to configure the e…
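The steps above might look something like the following (a minimal sketch; the trigger events, Python version, and dependency-install step are assumptions to adapt to the project):

```yaml
# .github/workflows/test.yml (sketch)
name: tests

on: [push, pull_request]

jobs:
  pytest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt pytest  # assumes a requirements.txt exists
      - run: pytest
```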
-
I'm passing a list of approximately 700 articles to the default embeddings function as follows:
```python
topic_model = BERTopic()
window_result = topic_model.fit_transform(d)
```
where…
-
Hi! Thanks for your contribution. It is an excellent piece of work!
I would like to ask why you chose a randomly-initialised Transformer decoder with six layers? Do you have any relevant literature…
-
My task is about **multi-label text classification**.
First, I used a pretrained **monolingual BERT** for word embeddings in my model; it works fine, with ~50% accuracy and ~80% top-3 accuracy.
Now,…
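For context, a multi-label head over BERT-style pooled embeddings typically scores each label with an independent sigmoid and a binary cross-entropy loss, rather than a softmax over classes. A minimal NumPy sketch (the 768-dimensional pooled vector and 4-label projection are illustrative, not from the original post):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multilabel_bce(logits: np.ndarray, targets: np.ndarray) -> float:
    """Mean binary cross-entropy over labels: each label is an
    independent yes/no decision, unlike softmax classification."""
    p = sigmoid(logits)
    eps = 1e-12
    return float(-np.mean(targets * np.log(p + eps)
                          + (1 - targets) * np.log(1 - p + eps)))

# Hypothetical: a 768-d pooled BERT output projected to 4 labels.
rng = np.random.default_rng(0)
W = rng.normal(size=(768, 4)) * 0.02
pooled = rng.normal(size=(768,))
logits = pooled @ W
targets = np.array([1.0, 0.0, 1.0, 0.0])  # multi-label ground truth
loss = multilabel_bce(logits, targets)
print(round(loss, 4))
```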