-
https://huggingface.co/TheBloke
-
**Is your feature request related to a problem? Please describe.**
Native Go models.
**Describe the solution you'd like**
I have ported llama2.c into native Go: https://github.com/nik…
-
### Prerequisites
- [X] I am running the latest code. Mention the version if possible as well.
- [X] I carefully followed the [README.md](https://github.com/ggerganov/llama.cpp/blob/master/README.…
-
### 🐛 Describe the bug
I followed the steps from https://github.com/pytorch/executorch/blob/main/examples/models/llama2/README.md.
The installation steps are as follows:
git clone https://git…
-
-
After downloading the llama2-7b-chat-truthX model from huggingface and running text.py, I get the following error. How can I resolve it?
-
1. Download weights from [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).
2. Combine the weights into a single safetensors file (a sketch follows this list).
3. Convert safetensors to llama2-7b…
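For step 2, here is a minimal sketch of merging the sharded HF checkpoint into one file. It assumes the weights were downloaded into a local `Llama-2-7b-chat-hf` directory containing `model-*.safetensors` shards; the directory name, glob pattern, and output filename are illustrative, not fixed by the original steps.

```python
from pathlib import Path
from safetensors.torch import load_file, save_file

# Illustrative path: adjust to wherever the HF weights were downloaded.
model_dir = Path("Llama-2-7b-chat-hf")

merged = {}
for shard in sorted(model_dir.glob("model-*.safetensors")):
    merged.update(load_file(shard))  # each shard maps tensor name -> tensor

# Write a single combined checkpoint for the conversion step that follows.
save_file(merged, "llama-2-7b-chat.safetensors")
```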
-
I was wondering how to understand this. I would expect llama2 70b to have lower throughput than llama2 7b.
Is the configuration different between the llama2 70b table and the llama2 7b table?
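For reference, a back-of-envelope check of why lower 70b throughput is the usual expectation on identical hardware and settings, using the common ~2·N FLOPs-per-decoded-token rule of thumb (the numbers are only illustrative; batch size, parallelism, or precision differences between the two tables could change the picture):

```python
# Rough rule of thumb: decoding one token costs about 2 * n_params FLOPs,
# so on the same hardware and batch size llama2 70b should be roughly
# 10x more expensive per token than llama2 7b.
params_7b = 7e9
params_70b = 70e9
flops_7b = 2 * params_7b    # ~1.4e10 FLOPs per token
flops_70b = 2 * params_70b  # ~1.4e11 FLOPs per token
print(flops_70b / flops_7b)  # -> 10.0, i.e. ~10x lower expected throughput
```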
-
Thanks to the authors for this work; it offers an approach to mitigating catastrophic forgetting in continual learning.
I ran the llama2 script provided in the codebase, and the results came out completely broken. What could be the reason? Are there any key points to watch during the experiments, or parameter settings that need adjusting? Could it be that the olora lamda parameter is set too small, causing too much forgetting? Below are my per-task results when tuning order2:
***** predict metrics **…
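On the lamda question above, a purely illustrative sketch (not the repo's actual API; `total_loss` and `orth_penalty` are made-up names) of how a weight like lamda typically scales an orthogonality penalty in an O-LoRA-style objective, which is why a very small value tends to protect past tasks poorly:

```python
import torch

def total_loss(task_loss: torch.Tensor,
               new_lora_A: torch.Tensor,   # (r_new, d): low-rank factor being trained
               old_lora_A: torch.Tensor,   # (r_old, d): frozen factors from earlier tasks
               lamda: float) -> torch.Tensor:
    # Penalize overlap between the old and new LoRA subspaces. With a very
    # small lamda this term barely constrains training, so the new task can
    # overwrite directions earlier tasks rely on (i.e. more forgetting).
    orth_penalty = (old_lora_A @ new_lora_A.T).abs().sum()
    return task_loss + lamda * orth_penalty
```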
-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue]…