-
```
from FlagEmbedding import FlagReranker

reranking_model = 'BAAI/bge-reranker-v2-m3'
reranker = FlagReranker(reranking_model)

if __name__ == '__main__':
    reranker.compute_score(['query',…
```
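For reference, bge-reranker models emit an unbounded relevance logit per (query, passage) pair; a sigmoid maps it into [0, 1] for easier thresholding (FlagReranker exposes this via a `normalize` argument — treat that parameter name as an assumption here). A minimal sketch of the mapping:

```python
import math

def normalize_score(logit):
    # A raw reranker score is an unbounded logit; the sigmoid
    # squashes it into [0, 1] so a fixed cutoff can be applied.
    return 1.0 / (1.0 + math.exp(-logit))

normalize_score(0.0)  # 0.5 — a logit of zero maps to the midpoint
```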
-
Hi, I get this error when preprocessing text with the mSigLIP model. Any idea what might be wrong? I didn't change anything in the [demo colab](https://colab.research.google.com/github/google-research…
-
Add the tokenization functionality
-
When I test with an Intel(R) Core(TM) Ultra 5 125H, why is the NPU so slow?
```
Install the NPU driver by following https://github.com/intel/linux-npu-driver/blob/main/docs/overview.md
pip install optim…
```
-
```
!!! Exception during processing !!! PreTrainedTokenizerFast._batch_encode_plus() got an unexpected keyword argument 'images'
Traceback (most recent call last):
  File "D:\ComfyUI-aki-v1.4\execution.…
```
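A TypeError like this usually means a plain tokenizer received a keyword (`images`) intended for a multimodal processor; loading the model's full processor rather than just its tokenizer is the usual fix. As a generic defensive pattern — `call_with_supported_kwargs` and `fake_tokenize` are hypothetical helpers, not part of transformers — you can drop kwargs a callable does not accept:

```python
import inspect

def call_with_supported_kwargs(fn, *args, **kwargs):
    # Drop keyword arguments the callable does not accept, avoiding
    # "got an unexpected keyword argument" TypeErrors.
    params = inspect.signature(fn).parameters
    if any(p.kind == inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return fn(*args, **kwargs)  # fn takes **kwargs; pass everything through
    allowed = {k: v for k, v in kwargs.items() if k in params}
    return fn(*args, **allowed)

def fake_tokenize(text, padding=False):
    # Stand-in for a text-only tokenizer that knows nothing about images.
    return {"text": text, "padding": padding}

# 'images' is silently dropped because fake_tokenize does not accept it.
out = call_with_supported_kwargs(fake_tokenize, "hello", padding=True, images=None)
```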
-
Hi, I trained a sentencepiece tokenizer with prefix match. After converting it to an HF tokenizer, the tokenization result is not consistent with the slow tokenizer.
In sentencepiece, we can choose whether to u…
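As a toy illustration of why a converted tokenizer can diverge: prefix matching greedily takes the longest vocabulary piece at each position, so any converter that segments even slightly differently will produce different tokens. A minimal sketch with an invented vocabulary (not the real model's):

```python
def longest_prefix_tokenize(text, vocab):
    # Greedy longest-prefix match: at each position, take the longest
    # piece found in the vocabulary, falling back to single characters.
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character, emit it alone
            i += 1
    return tokens

vocab = {"un", "unhappy", "happy", "ness"}
longest_prefix_tokenize("unhappyness", vocab)  # → ['unhappy', 'ness']
```

A tokenizer that instead split off `un` first would emit `['un', 'happy', 'ness']` for the same string — same vocabulary, different result, which is exactly the kind of mismatch to look for after conversion.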
-
Currently I need to load a tokenizer from HuggingFace and use it simply to encode and decode sentences. Doing that through the Transformers.jl interface is already awkward (I had to go `tok = Tra…
-
### Describe the issue
**Issue:**
I ran into tokenization mismatch errors when I tried to fine-tune from Llama-3.1. I pre-trained a new MLP adapter for Llama-3.1, and that seems to work, but the fine…
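When chasing a tokenization mismatch like this, it helps to locate exactly where two token-id sequences first diverge, rather than eyeballing full encodings. A small hypothetical helper (not part of any training codebase):

```python
def first_divergence(a, b):
    # Return the index where two token-id sequences first differ,
    # or None if they are identical.
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            return i
    return None if len(a) == len(b) else min(len(a), len(b))

first_divergence([1, 2, 3], [1, 2, 4])  # → 2
```

Running this on the same prompt encoded by both tokenizers pinpoints whether the divergence starts at a special token, the chat template, or inside ordinary text.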
-
I made a venv, pip installed airllm and then bitsandbytes within that venv, and then copy-pasted the example Python code into `testme.py`. It bailed with the output below:
```
$ python testme.py
…
```
-
Trying to run with 8 GB of VRAM.
All models appear to load as expected, and the code runs up until the image is passed into the pipeline (i.e. right up to the inference point).
To avoid OOM issues h…
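For context on why 8 GB is tight at the inference point: the weights alone of a multi-billion-parameter model in fp16 nearly fill the card before any activations are allocated. A back-of-the-envelope sketch (the 3.5B parameter count is an assumed size for illustration, not a measurement of this model):

```python
def fp16_weight_gib(n_params):
    # 2 bytes per parameter in fp16; ignores activations, attention
    # buffers, and framework overhead, which all add more on top.
    return n_params * 2 / 1024**3

fp16_weight_gib(3_500_000_000)  # ≈ 6.5 GiB of an 8 GiB card
```

So even before the forward pass starts, little headroom remains, which is why OOM tends to strike exactly when the image enters the pipeline.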