-
### What happened?
When using llama.cpp models (e.g., granite-code and llama3) with Nvidia GPU acceleration (nvidia/cuda:12.6.1-devel-ubi9 and RTX 3080 10GB VRAM), the models occasionally return nons…
-
Thanks for your great work!
Could you kindly advise on how to support the models in the LLaMA series?
-
Example:
```
SUPPORT_BF16=0 CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6 python3 examples/llama3.py --download_model --shard 7 --size 8B
seed = 1730782018
0%| | 0/292 [00:00
```
-
I'm now trying to train llama3.1 with GRIT pipeline.
At first I directly changed ``--model_name_or_path`` and ran the training code (the training script I used is as follows):
```
#!/bin/bash
#SB…
```
-
## TODO Templates
- [x] Athene V2 (36e9ae2877beefca51101176202761f34bce0b8e)
- [x] ChatML (ccd461ac30c116110a7adda507ce56596fefb1ca)
- [x] LLaMa2 (9be35a3c689dde0dfce0f497d83c7e5c3b606dc3)
- [x] L…
-
Hi @lea-33,
how about introducing another LLM endpoint: [ollama](https://ollama.com/)? New vision models were published there recently, namely [llama3.2-vision](https://ollama.com/library/llama3.2…
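As a rough illustration of what such an endpoint could look like: the sketch below builds a request body for ollama's documented `/api/chat` REST endpoint. The model name, prompt, and image placeholder are example values, not part of the original proposal.

```python
import json

# Hedged sketch: wiring ollama up as an extra LLM endpoint.
# The payload shape follows ollama's /api/chat REST API; vision models such
# as llama3.2-vision accept base64-encoded images on the user message.

OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"  # default local ollama port

def build_chat_payload(model, prompt, images_b64=None):
    """Build the JSON body for a single-turn, non-streaming chat request."""
    message = {"role": "user", "content": prompt}
    if images_b64:
        # Only attach the images field when there is something to attach.
        message["images"] = images_b64
    return {"model": model, "messages": [message], "stream": False}

payload = build_chat_payload("llama3.2-vision", "Describe this image.", ["<base64-image>"])
print(json.dumps(payload, indent=2))
```

Sending this body as JSON to `OLLAMA_CHAT_URL` with any HTTP client would be the remaining step; the server must be running locally for that to work.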
-
I followed the [steps](https://github.com/meta-llama/llama3) of getting access to the models; I received a link. But I am getting this error after I ran:
`torchrun --nproc_per_node=1 example_chat_…
-
```
[ './andy.json' ]
Starting agent with profile: ./andy.json
Starting agent initialization with profile: ./andy.json
Initializing action manager...
Initializing prompter...
Using chat settings: { m…
```
-
### What is the issue?
Hi,
Tool support doesn't work as expected, I guess. When activated, it picks the right function to be called, but at the same time it no longer returns a normal response fo…
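For what it's worth, the behaviour described matches how the chat response is typically shaped when a tool is selected: the assistant message carries `tool_calls` and the text `content` is empty. A minimal sketch of handling both cases, assuming ollama's `/api/chat` response format (the weather tool is a made-up example):

```python
# Hedged sketch: distinguish a tool-call response from a plain text response.
# Response shape assumed from ollama's /api/chat API; not taken from the issue.

def handle_chat_response(response):
    """Return either the tool calls or the plain text from a chat response."""
    message = response.get("message", {})
    tool_calls = message.get("tool_calls")
    if tool_calls:
        # The model chose a function; content is often empty here, which is
        # the "no normal response" symptom described above.
        return {"type": "tool_calls", "calls": tool_calls}
    return {"type": "text", "content": message.get("content", "")}

# Simulated response where the model picked a tool instead of answering in text.
resp = {"message": {"role": "assistant", "content": "",
                    "tool_calls": [{"function": {"name": "get_weather",
                                                 "arguments": {"city": "Paris"}}}]}}
print(handle_chat_response(resp)["type"])  # tool_calls
```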
-
**Describe the Feature**
Hi @shahules786 @jjmachan, I'm back in Ragas business ^^
I've recently stumbled upon a `Failed to parse output. Returning None.` while trying to evaluate the faithfulness o…