-
Does tiktoken support **meta/llama2-70b**?
I want to find the token count of a prompt before passing it to a **meta/llama2-70b** model.
How can I do this with tiktoken?
```
import tiktoken
```
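For what it's worth, tiktoken only ships OpenAI encodings (e.g. `gpt2`, `cl100k_base`), so it has no encoding for **meta/llama2-70b**, which uses a SentencePiece vocabulary. A minimal sketch of counting tokens with the model's own Hugging Face tokenizer instead (the gated repo id `meta-llama/Llama-2-70b-hf` and the word-split fallback are assumptions for illustration):

```python
try:
    # Preferred: the model's own tokenizer (requires transformers and access
    # to the gated meta-llama repo; the repo id is an assumption).
    from transformers import AutoTokenizer

    _tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-hf")

    def count_tokens(prompt: str) -> int:
        return len(_tok.encode(prompt))
except Exception:
    # Fallback so the sketch stays runnable without transformers or model
    # access: a crude word-split stand-in, NOT real Llama-2 tokenization.
    def count_tokens(prompt: str) -> int:
        return len(prompt.split())

print(count_tokens("What color is the sky?"))
```

The fallback exists only so the sketch runs without model access; real counts come from the `transformers` branch.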
-
Implement a demo for the Llama-2 7B variant, following the code at `https://github.com/facebookresearch/llama/tree/main/llama` and using weights from `/mnt/MLPerf/tt_dnn-models/llama-2/llama-2-7b`.
This …
-
```
import ollama from 'ollama'
const message = { role: 'user', content: 'What color is the sky?' }
const response = await ollama.chat({ model: 'llama2', messages: [message], stream: true })
for await (const part of response) {
  process.stdout.write(part.message.content)
}
```
-
### System Info
Using an RTX 3090 and the Docker image produced by the QuickStart doc.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
##…
-
Can we have support for Mistral 7B / Llama 2 models? The current implementation only supports OpenAI models.
When I tried to use a Mistral model via vLLM, it broke in the vLLM invocation layer.
…
-
Hi everyone,
I am fine-tuning Llama 2, but the loss is declining very slowly, and I am a little confused about the reason. Prior to this, I fine-tuned Llama 1, and the loss dropped signif…
-
Hello,
While applying quantization to a Llama model, we first convert the weights downloaded from Meta using the Hugging Face converter, and then apply Hugging Face-compatible AWQ quantization.
Is there…
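A minimal sketch of the AWQ step described above, using the AutoAWQ package (package availability, checkpoint paths, and the exact `quant_config` values are assumptions; the Meta-to-HF conversion is typically done with the `convert_llama_weights_to_hf.py` script that ships with `transformers`):

```python
# Typical AWQ settings; the values are illustrative, not a recommendation.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

try:
    from awq import AutoAWQForCausalLM  # AutoAWQ
    from transformers import AutoTokenizer

    # "./llama-2-7b-hf" is a placeholder for the HF-converted checkpoint.
    model = AutoAWQForCausalLM.from_pretrained("./llama-2-7b-hf")
    tokenizer = AutoTokenizer.from_pretrained("./llama-2-7b-hf")
    model.quantize(tokenizer, quant_config=quant_config)
    model.save_quantized("./llama-2-7b-awq")
except Exception:
    # AutoAWQ/transformers missing, or no checkpoint at the placeholder path;
    # the quant_config above still documents the knobs involved.
    pass
```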
-
hi there,
is there any example (if possible) of using Llama 2 for image captioning?
thank you
-
Hi, I notice that your paper reports only 4W4A LLaMA1 results on the six zero-shot datasets. May I ask whether you have 4W4A LLaMA2 results on zero-shot tasks? Thanks.
XA23i updated 3 months ago
-
### Please check that this issue hasn't been reported before.
- [X] I searched previous [Bug Reports](https://github.com/OpenAccess-AI-Collective/axolotl/labels/bug) and didn't find any similar reports.
…