-
Hi!
I'm interested in solving a classification problem in which I train the model on one language and make predictions for another (zero-shot classification).
It is said in the README for …
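One common way to get this cross-lingual zero-shot behavior is to embed both the input text and the candidate labels with a multilingual encoder and pick the closest label: sentences with the same meaning in different languages land near each other in the shared vector space. The sketch below illustrates only the idea — the vectors, example sentences, and the `encode` stub are all hypothetical stand-ins for a real multilingual model's embeddings:

```python
import math

# Toy stand-ins for multilingual sentence embeddings. In a real setup a
# multilingual encoder maps, e.g., English and German sentences with the
# same meaning to nearby points in the same vector space.
FAKE_EMBEDDINGS = {
    "great product, works well":        [0.9, 0.1],   # English, positive
    "tolles Produkt, funktioniert gut": [0.85, 0.15], # German, positive
    "completely broken on arrival":     [0.1, 0.9],   # English, negative
    "positive": [1.0, 0.0],  # label descriptions act as anchors
    "negative": [0.0, 1.0],
}

def encode(text):
    """Hypothetical stand-in for a multilingual encoder's encode()."""
    return FAKE_EMBEDDINGS[text]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def zero_shot_classify(text, labels):
    # Score the text against each candidate label; take the best match.
    text_vec = encode(text)
    return max(labels, key=lambda lab: cosine(text_vec, encode(lab)))

print(zero_shot_classify("tolles Produkt, funktioniert gut",
                         ["positive", "negative"]))
```

Because the German sentence's vector sits close to the "positive" label vector, the classifier gives the right answer for a language it never saw labeled data for — that is the zero-shot cross-lingual idea in miniature.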
-
When I first heard about BERT I was thrilled. But you still need labeled data to fine-tune the model, right? Nevertheless, this will be better, and you need far less data now than with most of the algorithm…
-
### Model description
Hey,
as discussed with @NielsRogge a few weeks back, I'd like to work on adding the "VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text"…
johko updated 7 months ago
-
As part of the progression of machine learning components with increasing levels of sophistication, implement version 3 ("dsmvp-v3") with the following characteristics:
Explainable Model: A machine…
-
Hello,
(Cross posting this between [SetFit](https://github.com/huggingface/setfit/issues/500) and sentence-transformers)
We're investigating the possibility to use SetFit for customer service me…
-
We have been experimenting with different setups for the task of news source verification. Our first approach trained on sentence pairs from same and different source domains with cosine loss. For ver…
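For reference, the cosine loss used for such sentence-pair training (as in sentence-transformers' CosineSimilarityLoss) is just the squared difference between the cosine similarity of the two embeddings and the target label. A plain-Python sketch, with made-up embeddings:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def cosine_similarity_loss(emb_a, emb_b, label):
    """MSE between cosine(emb_a, emb_b) and the target label.

    label is typically 1.0 for pairs from the same source domain and
    0.0 for pairs from different source domains.
    """
    return (cosine(emb_a, emb_b) - label) ** 2

# Same-source pair whose embeddings are already similar: small loss.
same = cosine_similarity_loss([1.0, 0.0], [0.9, 0.1], label=1.0)
# Different-source pair the model still embeds close together: large
# loss, which pushes the two embeddings apart during training.
diff = cosine_similarity_loss([1.0, 0.0], [0.9, 0.1], label=0.0)
print(same < diff)  # True
```

The toy vectors here are hypothetical; in the real setup the embeddings come from the bi-encoder being trained.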
-
Hi,
I am going to fine-tune 'cross-encoder/ms-marco-MiniLM-L-4-v2' for re-ranking the top documents in my retrieval pipeline. I have followed the instructions for the example in https://github.…
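For context, the re-ranking step itself is simple once you have a pair scorer: score every (query, document) pair and sort by score. The sketch below uses a hypothetical `toy_scorer` in place of the cross-encoder's pair-scoring call, so the mechanics are runnable without the model:

```python
def rerank(query, documents, score_fn, top_k=3):
    """Re-rank candidate documents by cross-encoder-style pair scores.

    score_fn stands in for the model's pair scorer: it takes a list of
    (query, document) pairs and returns one relevance score per pair.
    """
    scores = score_fn([(query, doc) for doc in documents])
    ranked = sorted(zip(documents, scores), key=lambda p: p[1], reverse=True)
    return [doc for doc, _ in ranked[:top_k]]

# Hypothetical scorer: counts query terms appearing in the document.
def toy_scorer(pairs):
    return [sum(w in doc.lower() for w in q.lower().split())
            for q, doc in pairs]

docs = ["MiniLM is a distilled transformer",
        "Re-ranking orders retrieved documents",
        "Unrelated text about cooking"]
print(rerank("re-ranking documents", docs, toy_scorer, top_k=2))
```

In the real pipeline `toy_scorer` would be replaced by the fine-tuned cross-encoder's prediction over the pairs; everything else stays the same.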
-
Hey, I was looking for a while to find out whether there is some way I can have a "suffix alias" like those available in Unix shells such as zsh. For example, we can set an alias like below:
```bash
alias -s git="git c…
-
I have a text classification problem where I need to classify text into one of 4 categories. I would like to use SBERT, but I read that the CrossEncoder only takes pair input.
How do I go about doing this…
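Only the CrossEncoder needs pairs; with the bi-encoder side of SBERT you can encode each text once and fit any ordinary classifier (or even a nearest-centroid rule) on the embeddings. A minimal sketch of the nearest-centroid variant, with toy 2-d vectors and made-up category labels standing in for real SBERT embeddings:

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid_classify(embedding, centroids):
    """Assign the label whose class centroid is closest (squared L2)."""
    def sqdist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(centroids, key=lambda label: sqdist(embedding, centroids[label]))

# Toy 2-d embeddings standing in for SBERT sentence vectors,
# grouped by their (hypothetical) category label.
train = {
    "billing":  [[0.9, 0.1], [0.8, 0.2]],
    "shipping": [[0.1, 0.9], [0.2, 0.8]],
}
centroids = {label: centroid(vecs) for label, vecs in train.items()}
print(nearest_centroid_classify([0.85, 0.15], centroids))  # a billing-like vector
```

In practice you would obtain the vectors from the bi-encoder's encoding step and, with enough labeled data, fit a stronger classifier (e.g. logistic regression) on them instead of raw centroids.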
-
When the input is a question, the reply shouldn't begin by repeating what the input was.
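The proper fix is on the prompting or training side, but as a stopgap a small post-processing guard can strip a leading copy of the question from the reply. This helper is entirely hypothetical, a toy heuristic rather than anything the library provides:

```python
def strip_echoed_question(question, reply):
    """If the reply starts by repeating the question, drop that prefix.

    Toy heuristic: case-insensitive prefix match, ignoring the trailing
    question mark and surrounding whitespace.
    """
    q = question.strip().rstrip("?").lower()
    r = reply.strip()
    if r.lower().startswith(q):
        r = r[len(q):].lstrip(" ?.,:;-")
        # Re-capitalize whatever remains; fall back to the original
        # reply if stripping removed everything.
        return r[:1].upper() + r[1:] if r else reply.strip()
    return r

print(strip_echoed_question("What is the capital of France?",
                            "what is the capital of France? It is Paris."))
```

This only catches verbatim echoes; paraphrased repetition would need a fuzzier match or, better, a fix in the generation setup itself.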