-
### System Info
version: 1.0.12
platform: windows
python: 3.11.4
graphics card: nvidia rtx 4090 24gb
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
…
-
Can you make a Phi-3.5-MoE version available to software such as LM Studio (by converting it to GGUF or similar)?
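For reference, a hedged sketch of the usual GGUF conversion flow using llama.cpp's converter script. The script name, flags, and whether the PhiMoE architecture is supported all depend on the llama.cpp version you have, and the local model path here is a placeholder:

```shell
# Hedged sketch: convert a Hugging Face checkpoint to GGUF with llama.cpp.
# Requires a recent llama.cpp checkout; older versions may lack MoE support
# for this architecture.
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# path/to/Phi-3.5-MoE-instruct is a placeholder for the downloaded HF model dir.
python llama.cpp/convert_hf_to_gguf.py path/to/Phi-3.5-MoE-instruct \
    --outfile phi-3.5-moe.gguf \
    --outtype q8_0
```

The resulting `.gguf` file can then be loaded by GGUF-aware frontends such as LM Studio, assuming the frontend's bundled llama.cpp build also supports the model architecture.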
-
https://rosepinetheme.com/
-
Fresh installation (NVIDIA + CUDA 12.1) with one_click_install.bat in llama_index_pq.
If I try chat, I get:
F:\AI\prompt_quill\prompt_quill\llama_index_pq\installer_files\env\Lib\site-packages\trans…
-
I'm trying to use it in a Next.js API action, but I'm getting this error:
```
- error ./node_modules/@llama-node/llama-cpp/@llama-node/llama-cpp.darwin-arm64.node
Module parse failed: Unexpected character '…
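```

Webpack cannot parse native `.node` binaries, so they must be kept out of the bundle and loaded by Node at runtime. A common workaround, sketched below under the assumption of a Next.js 13/14 setup (the option was later renamed `serverExternalPackages` in Next 15), is to mark the addon packages as server externals:

```javascript
// next.config.js — hedged sketch: keep the native llama-node addon out of
// the webpack bundle so Node loads the .node binary directly at runtime.
/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    // Next 13/14 option; renamed to top-level `serverExternalPackages` in Next 15.
    serverComponentsExternalPackages: ['llama-node', '@llama-node/llama-cpp'],
  },
  webpack: (config) => {
    // Also tell webpack to require() these at runtime instead of parsing them.
    config.externals.push('llama-node', '@llama-node/llama-cpp');
    return config;
  },
};

module.exports = nextConfig;
```

With the packages externalized, the `.darwin-arm64.node` file is resolved by Node's own loader on the server, which avoids the "Module parse failed" error.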
-
**Is your enhancement related to a problem? Please describe.**
Currently, the installation process does not allow for specifying the CUDA version; the code is hardcoded to use the llama-box binary wi…
-
https://huggingface.co/microsoft/Phi-3-medium-128k-instruct
https://huggingface.co/microsoft/Phi-3-medium-4k-instruct
https://huggingface.co/microsoft/Phi-3-small-8k-instruct
https://huggingf…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
Not sure if this RNN counts as an LLM, but if so it would be nice to have it; let me know what needs to be done for packaging.
https://www.rwkv.com/
-
Windows build failed while llama-cpp-py works