-
I've trained xlora with the Mistral 7B base model and it works fine. However, when I switch the base model to Llama 2 7B, it raises an error.
This is my training code:
```
model = AutoModelForCausa…
```
-
Hello,
I can't run step 4 of the instructions available at https://github.com/pytorch/executorch/tree/main/examples/models/llama2
When I run point _2. Build llama runner._ I get an error…
-
https://huggingface.co/TheBloke
-
Hello, @b4rtaz!
I'm trying to run the model [nkpz/llama2-22b-chat-wizard-uncensored](https://huggingface.co/nkpz/llama2-22b-chat-wizard-uncensored) on a cluster composed of one Raspberry Pi 4B 8 GB and 7…
-
**Is your feature request related to a problem? Please describe.**
Native Go models.
**Describe the solution you'd like**
I have ported llama2.c into native Go: https://github.com/nik…
-
Do you have an equivalent simple C implementation of an LLM, but for inference of LLaMA models?
I am trying to build an FPGA accelerator for LLMs, and simple reference C code would be very helpful.
Thank…
-
Hi, thanks for the useful code! I have questions about the accuracy on commonsense reasoning tasks. In the README, the accuracy of LLaMA (for example) is
![image](https://github.com/user-atta…
-
The llama.dll that can be downloaded from the llama.cpp repo is mostly suitable for programming languages able to work with concepts that are rather difficult for a novice coder, like pointers, structure…
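That barrier is easy to see in a few lines. The sketch below calls a C shared library from Python via `ctypes`, using libc's `strlen` as a stand-in for llama.dll's actual exports (which are not shown in this post); the calling pattern for `ctypes.CDLL("llama.dll")` would be the same.

```python
import ctypes
import ctypes.util

# Stand-in for llama.dll: load the C standard library instead,
# since the binding pattern is identical.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# The caller must spell out the C signature: size_t strlen(const char *s);
# getting argtypes/restype wrong corrupts data silently -- exactly the
# kind of pointer-level detail the post refers to.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"llama"))  # 5
```

Declaring every signature by hand (and managing any structs and buffers the library expects) is what makes raw C ABIs awkward from high-level languages without a dedicated wrapper.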
-
Hello mlcommons team,
I want to run the "Automated command to run the benchmark via MLCommons CM" (from the example: https://github.com/mlcommons/inference/tree/master/language/llama2-70b) with a d…
-
I got only 9.7% for llama2-7B-chat on HumanEval using your script:
```python
{'pass@1': 0.0975609756097561}
```
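For context, HumanEval's pass@k is the unbiased estimator 1 − C(n−c, k)/C(n, k) over n generated samples of which c pass the tests; with a single sample per problem it reduces to the fraction of problems solved. A minimal sketch (the function name is mine, not necessarily what the script above uses):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k),
    where n samples were drawn and c of them passed the tests."""
    if n - c < k:
        return 1.0  # every size-k subset contains a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With one sample per problem, pass@1 is just the solve rate;
# 16 of HumanEval's 164 problems would give:
print(16 / 164)  # ≈ 0.0976
```

The reported 0.0975609756097561 equals 16/164 exactly, so it is consistent with 16 of HumanEval's 164 problems passing under single-sample decoding (an inference from the number, not something the script output states).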