-
I modified the codebase a little:
```python
"""code for zero shot instruction parsing"""
import torch
from peft import PeftModel
import transformers
import textwrap
from transformers import AutoMod…
-
- [ ] [SWE-bench/README.md at main · princeton-nlp/SWE-bench](https://github.com/princeton-nlp/SWE-bench/blob/main/README.md?plain=1)
# SWE-bench README
| [日本語 (Japanese)](https://github.com/p…
-
CPU: i5-1335U
RAM: 16GB
OS: Ubuntu 22.04.10
Kernel: 6.8.0-45
logs:
```txt
2024/09/30 15:34:06 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBL…
-
# Prerequisites
I am running the latest code. Development is very rapid so there are no tagged versions as of now.
I carefully followed the [README.md](https://github.com/abetlen/llama-cpp-python/b…
-
### Description of the bug:
- using generative/example/tiny_llama/convert_to_tflite.py to convert the model to `*.tflite` (no quantization)
- using text_generator_main.cc to load `tiny_llama_seq512_kv102…
-
# Expected Behavior
I tried to install llama via poetry and it didn't work.
# Current Behavior
It just printed some information that I don't understand; I tried checking and asked for help, and it …
-
So I fine-tuned a model on a custom dataset. The output should be in JSON format. All the keys are the same for each output, i.e. the structure of the response JSON is the same while the values need to be e…
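A minimal sketch of how such fixed-key output could be validated after generation. The key names here (`name`, `category`, `value`) are purely hypothetical placeholders; the real keys come from the custom fine-tuning dataset.

```python
import json

# Hypothetical fixed schema for illustration only; the actual keys
# are defined by the custom fine-tuning dataset.
EXPECTED_KEYS = {"name", "category", "value"}

def parse_response(raw: str) -> dict:
    """Parse a model response and check it matches the fixed-key schema."""
    data = json.loads(raw)
    missing = EXPECTED_KEYS - data.keys()
    extra = data.keys() - EXPECTED_KEYS
    if missing or extra:
        raise ValueError(f"schema mismatch: missing={missing}, extra={extra}")
    return data

print(parse_response('{"name": "widget", "category": "tool", "value": 3}'))
```

Rejecting responses that fail this check (and optionally retrying generation) is one common way to enforce a stable structure when the values vary per example.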
-
Hi,
I tried training Llama 3.1 with run_mntp.py but get an obscure error:
`AttributeError: 'LlamaBiModel' object has no attribute 'rotary_emb'`
What is that about?
-
# Alternative title
How to make a tokenizer behave like the Llama tokenizer
## Background
The Llama tokenizer considers byte_fallback tokens **not special**. When it decodes, it doesn't remove these…
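To make the distinction concrete, here is a minimal, self-contained sketch of byte-fallback decoding, assuming the `<0xNN>` token format that SentencePiece-style tokenizers use. Because these tokens are not special, a decoder must keep them and merge consecutive byte tokens back into UTF-8 rather than stripping them:

```python
import re

# Illustration of byte_fallback decoding (assumed "<0xNN>" token format,
# as produced by SentencePiece-style byte fallback).
def decode_byte_fallback(tokens):
    out = bytearray()
    for tok in tokens:
        m = re.fullmatch(r"<0x([0-9A-Fa-f]{2})>", tok)
        if m:
            # Byte token: keep it and emit the raw byte, don't drop it.
            out += bytes([int(m.group(1), 16)])
        else:
            out += tok.encode("utf-8")
    return out.decode("utf-8", errors="replace")

# "猫" (U+732B) is UTF-8 E7 8C AB; the byte tokens decode back to it.
print(decode_byte_fallback(["Hi", " ", "<0xE7>", "<0x8C>", "<0xAB>"]))  # → Hi 猫
```

If byte tokens were treated as special and removed during decoding (as `skip_special_tokens=True` would do), the multi-byte character would be lost entirely.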
-
Thank you for sharing the model and how to install everything. Worked flawlessly! 🚀
I was wondering if you could provide more guidance on prompts.
Prompts that work well with other popular models lik…