-
### System Info
```shell
transformers.js@main
```
### Who can help?
@xenova
It is mentioned that [wav2vec2-bert](https://huggingface.co/docs/transformers.js/main/en/index#models:~:…
-
```
Fetching 16 files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████…
-
## Description
There is a unit mismatch in Groq's reported latency and throughput. The latency unit is labeled ms, but the value actually printed appears to be in seconds.
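To illustrate the suspected bug (a hypothetical sketch, not the actual benchit code): a duration measured in seconds but printed under an "ms" label is off by a factor of 1000.

```python
# Illustrative only -- these helpers are not from the benchit codebase.
def format_latency_buggy(latency_s: float) -> str:
    # Suspected bug: prints the raw seconds value with an "ms" unit.
    return f"{latency_s} ms"

def format_latency_fixed(latency_s: float) -> str:
    # Fix: convert seconds to milliseconds before attaching the label.
    return f"{latency_s * 1000:.1f} ms"

print(format_latency_buggy(0.125))  # "0.125 ms" (off by 1000x)
print(format_latency_fixed(0.125))  # "125.0 ms"
```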
## Reproducing
```
benchit models/trans…
-
Hi, thanks for the lib! I want to use some embedding models (the architecture is BERT) from the HF hub. I have tried GGUF, but the converter says the BERT architecture cannot be converted to that format. I have also tried directly have …
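For context, BERT-style embedding models typically turn per-token hidden states into a single sentence vector by mean pooling over non-padding tokens. A dependency-free sketch of that pooling step (names here are illustrative, not from any specific library):

```python
def mean_pool(token_vectors, attention_mask):
    """Average token vectors, skipping padding positions (mask == 0)."""
    dim = len(token_vectors[0])
    total = [0.0] * dim
    count = 0
    for vec, mask in zip(token_vectors, attention_mask):
        if mask:
            count += 1
            for i, v in enumerate(vec):
                total[i] += v
    # Divide the summed components by the number of real tokens.
    return [t / count for t in total]

# Two real tokens plus one padding position that must be ignored:
emb = mean_pool([[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]], [1, 1, 0])
print(emb)  # [2.0, 3.0]
```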
-
Explain what an LLM is in low detail, and also tell me about some of the most notable LLMs and how they differ. Split your response into two sections.
-
Suppose we have a BERT NLI model trained to zero-shot classify texts given a prompt such as, for example, 'This text relates to cars.' Is it possible to use DSPy to optimise the prompt for such a mode…
-
Hello,
It would be helpful to include documentation on how to trace a decoder-only transformer model for hosting on Inferentia. Currently, the only documentation that exists is for Encoder-Decoder …
-
To train mm_grounding_dino, we need to load two pre-trained models: BERT and Swin.
To fine-tune mm_grounding_dino on my dataset, I need to load a pre-trained MM_Grounding_DINO and the con…
-
Hello!
I've noticed that #269 introduces support for BERT-based model merging. I've tried it out on a few that I fancy, and I've been having a few issues.
### My Config
```yaml
models:
- mo…
-
I tried running:
```python
from datasets import load_dataset
from transformer_ranker import TransformerRanker, prepare_popular_models
# Load the WNUT-17 dataset of English tweets annotated wi…