-
I hope this is the right place to ask this question. Let me know if I need to move to another repo.
Currently I'm using `NeuronModelForCausalLM`.
I have a use case where I need to be able to do …
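For reference, here is roughly how I'm loading the model today (a sketch; the model id and compilation arguments below are placeholders, not my exact setup):
```python
# Sketch: loading a model with optimum-neuron's NeuronModelForCausalLM.
# export=True compiles the model for Neuron devices on the fly.
from optimum.neuron import NeuronModelForCausalLM

model = NeuronModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder model id
    export=True,
    batch_size=1,
    sequence_length=2048,
)
```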
-
Since OpenSearch 2.13, the [**fixed token length algorithm**](https://opensearch.org/docs/latest/ingest-pipelines/processors/text-chunking/#fixed-token-length-algorithm) has been available in the text chunking proc…
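For anyone who wants to try it, here is a sketch of registering an ingest pipeline with this algorithm from Python (the host, pipeline name, and field names are illustrative; the parameters follow the linked docs):
```python
# Sketch: create an ingest pipeline using the fixed_token_length
# chunking algorithm. Host, pipeline name, and field names are placeholders.
import requests

pipeline = {
    "description": "Chunk passage_text into passage_chunks",
    "processors": [
        {
            "text_chunking": {
                "algorithm": {
                    "fixed_token_length": {
                        "token_limit": 384,    # max tokens per chunk
                        "overlap_rate": 0.2,   # 20% overlap between chunks
                        "tokenizer": "standard",
                    }
                },
                "field_map": {"passage_text": "passage_chunks"},
            }
        }
    ],
}

requests.put(
    "http://localhost:9200/_ingest/pipeline/text-chunking-pipeline",
    json=pipeline,
)
```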
-
```python
from transformers import AutoTokenizer, AutoModel

# Load the local m3e-base model and switch to inference mode
tokenizer = AutoTokenizer.from_pretrained('m3e-base/')
model = AutoModel.from_pretrained('m3e-base/')
model.eval()

def get_sentence_embeddi…
```
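The function is cut off above, so here is a sketch of how such an embedding function typically continues (mask-aware mean pooling is my assumption; the original may pool differently):
```python
import torch

def get_sentence_embedding(sentences):
    # Tokenize a batch of sentences and run the encoder without gradients
    inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Mask-aware mean pooling over token embeddings
    mask = inputs["attention_mask"].unsqueeze(-1).float()
    summed = (outputs.last_hidden_state * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts
```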
-
Same here. I had finished pretraining Llama-3.1-7B-Instruct and then continued fine-tuning with QLoRA as normal. After 2 epochs, I switched to Unsloth to continue fine-tuning with a longer context (80…
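For what it's worth, the switch looked roughly like this (a sketch; the checkpoint path, sequence length, and LoRA settings are placeholders):
```python
# Sketch: resuming fine-tuning with Unsloth at a longer context length.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="outputs/checkpoint-epoch-2",  # placeholder checkpoint path
    max_seq_length=8192,                      # the longer context window
    load_in_4bit=True,                        # QLoRA-style 4-bit weights
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```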
-
I have downloaded the model weights to my computer, but I don't know how to use local LLMs and embeddings with ragas.
Here is my code, but it didn't work:
```python
import typing as t
import asyncio
from ty…
```
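In case it helps others, here is one way I understand local models can be wired into ragas, via its LangChain wrapper classes (a sketch; the paths are placeholders and the LangChain classes are one option among several):
```python
# Sketch: wrapping local Hugging Face models for ragas evaluation.
from langchain_community.llms import HuggingFacePipeline
from langchain_community.embeddings import HuggingFaceEmbeddings
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper

local_llm = HuggingFacePipeline.from_model_id(
    model_id="/path/to/local/llm",  # placeholder local path
    task="text-generation",
)
local_emb = HuggingFaceEmbeddings(model_name="/path/to/local/embeddings")

# ragas wrappers that can then be passed to its metrics / evaluate(...)
evaluator_llm = LangchainLLMWrapper(local_llm)
evaluator_embeddings = LangchainEmbeddingsWrapper(local_emb)
```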
-
transformers 4.41.2
optimum-quanto 0.2.1
torch 2.3.1
Python 3.10.14
I performed this on a recent Google Cloud (GCP) VM with the Nvidia driver set up and a basic torch sanity test passing.
I tried to quant…
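The basic flow I'm attempting looks like this (a sketch; the model id is a placeholder, and I'm showing weight-only qint8 quantization):
```python
# Sketch: weight-only quantization with optimum-quanto.
import torch
from transformers import AutoModelForCausalLM
from optimum.quanto import quantize, freeze, qint8

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # placeholder model id
    torch_dtype=torch.float16,
)
quantize(model, weights=qint8)  # swap Linear weights for qint8 versions
freeze(model)                   # materialize the quantized weights
```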
-
CHANDRA got me thinking about the new `text-embedding-preview-0815` model as an upgrade from `text-embedding-004`. However, https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/off…
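If it helps, swapping models in the Vertex AI SDK should be a one-line change (a sketch assuming the `TextEmbeddingModel` interface; the project and location are placeholders, and preview-model availability may vary by region):
```python
# Sketch: requesting embeddings from a named Vertex AI embedding model.
import vertexai
from vertexai.language_models import TextEmbeddingModel

vertexai.init(project="my-project", location="us-central1")  # placeholders
model = TextEmbeddingModel.from_pretrained("text-embedding-preview-0815")
embeddings = model.get_embeddings(["Hello world"])
print(len(embeddings[0].values))  # embedding dimensionality
```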
-
~/# accelerate launch train_stage_2.py --config configs/train/stage2.yaml
The following values were not passed to `accelerate launch` and had defaults used instead:
`--num_processes` was set…
-
```bash
pretrained_model=/XXX/bge-large-zh-v1.5
raw_data=/XXX/toy_finetune_data.jsonl
after_data=/XXX/toy_finetune_data_minedHN.jsonl
python3 -m FlagEmbedding.baai_general_embedding.finetune.hn…
```
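For context, the idea behind this mining step is roughly the following (a conceptual sketch using `FlagModel`, not the actual implementation of the finetune module above; the toy data is made up):
```python
# Sketch of hard-negative mining: embed queries and corpus, rank
# passages by similarity, and keep top-ranked non-positives as negatives.
import numpy as np
from FlagEmbedding import FlagModel

model = FlagModel("/XXX/bge-large-zh-v1.5")  # path from the script above
queries = ["how to bake bread"]
corpus = ["bread baking steps", "car engine repair", "sourdough guide"]
positives = {0: {0}}  # query index -> indices of known positive passages

q_emb = np.asarray(model.encode_queries(queries))
c_emb = np.asarray(model.encode(corpus))
scores = q_emb @ c_emb.T  # similarity matrix, queries x passages

for qi, row in enumerate(scores):
    ranked = np.argsort(-row)
    hard_negatives = [ci for ci in ranked if ci not in positives[qi]][:2]
    print(queries[qi], "->", [corpus[ci] for ci in hard_negatives])
```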
-
Hi, I'm interested in your work and am trying to reproduce it, but there are some details that need to be confirmed.
The first one is the implementation of the MVAE. The paper says:
> We copy the network archi…