-
I want to propose the use of [semantic versioning](https://semver.org/) for this project. It would allow users to depend on the latest release of any major version without the risk of breaking their i…
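To illustrate the guarantee semver gives consumers, here is a minimal sketch of the compatibility rule (the helper name is hypothetical, not part of this project): an upgrade is safe when the major version is unchanged and the rest is greater or equal.

```python
# Hypothetical helper sketching the semver compatibility rule:
# same major version + non-decreasing minor/patch => safe upgrade.
def is_safe_upgrade(current: str, candidate: str) -> bool:
    cur = tuple(int(p) for p in current.split("."))
    cand = tuple(int(p) for p in candidate.split("."))
    return cand[0] == cur[0] and cand >= cur

print(is_safe_upgrade("1.4.2", "1.5.0"))  # → True  (minor bump, safe)
print(is_safe_upgrade("1.4.2", "2.0.0"))  # → False (major bump, may break)
```

This is exactly the contract package managers encode with caret ranges like `^1.4.2`.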
-
@ggerganov [retweeted](https://twitter.com/Vermeille_/status/1675664118500454400) the "Stay on topic with Classifier-Free Guidance" paper that came out showing that "Classifier-Free Guidance (CFG)"...…
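For readers unfamiliar with the paper, CFG applied to language models mixes the conditional and unconditional next-token logits with a guidance scale. A minimal sketch of that common formulation (values are illustrative, not from the paper):

```python
import numpy as np

def cfg_logits(cond: np.ndarray, uncond: np.ndarray, scale: float) -> np.ndarray:
    # scale = 1.0 reproduces the conditional logits unchanged;
    # scale > 1.0 pushes the distribution toward the conditioning text.
    return uncond + scale * (cond - uncond)

cond = np.array([2.0, 0.5, -1.0])    # logits with the prompt
uncond = np.array([1.0, 1.0, 0.0])   # logits without the prompt
print(cfg_logits(cond, uncond, 1.5))
```

Note this requires two forward passes per token (with and without the conditioning prompt), which is the main runtime cost of the technique.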
-
Unable to build on Linux. Same error observed on Twitter - https://twitter.com/jarredsumner/status/1767348346023477343
```
error: failed to run custom build command for `mozjs_sys v0.68.2 (https:/…
```
-
After installing mojo (working) and llama2 as described, running `mojo llama2.mojo` on Ubuntu 22.04 with 16 cores, I get:
```
llama2.mojo $ mojo llama2.mojo
num hardware threads: 16 SIMD vec…
```
-
```
starlette.websockets.WebSocketDisconnect: 1001
INFO:Loading TheBloke_Llama-2-13B-chat-GGML...
INFO:llama.cpp weights detected: models/TheBloke_Llama-2-13B-chat-GGML/llama-2-13b-chat.ggmlv3.q6_K.bin…
```
-
# Expected Behavior
I'm trying to fine-tune a model on an AWS g5 instance with an NVIDIA A10 GPU, but I'm getting a segfault.
# Current Behavior
The same files are processed on macOS (M1) and it works fi…
-
```
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 7
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: …
```
-
Tested with commit 019ba1dcd0c7775a5ac0f7442634a330eb0173cc.
The model at https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca/tree/main was converted and quantized to q8_0 from scratch.
In case of mistral openorc…
-
I am comparing the tokenization of the codellama repository with the infill example of this repository.
The first example prompt from the codellama repository consists of the strings:
- Prefix: …
-
Edge 112.0.1722.54-1 stable fails to launch on Fedora 38 installed from the Fedora Flathub Selection Repo, possibly a problem on their part.
```
flatpak run com.microsoft.Edge --verbose
Stub sandbox…
```