-
Hello Artem. First, I want to thank you for the great app! It's exactly what I've been searching for over the past week! It's cool that we can import GGUF models and that you let us customize some settings!
My device: iPhone 15PM…
-
Hi all, not sure if this is the right place to discuss / ask:
Could a SoC project be proposed to train or fine-tune an LLM on Scala code?
Most LLMs are primarily trained on Python code, just because …
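To make the idea a bit more concrete, here is a rough sketch of what the fine-tuning side might look like, using Hugging Face transformers with a LoRA adapter via peft. The base model name, the local corpus of .scala files, and the hyperparameters are all placeholders, not project decisions:
```python
# Minimal LoRA fine-tuning sketch for a code LLM on Scala sources.
# Assumptions: base model, data path, and hyperparameters are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "bigcode/starcoderbase-1b"  # assumed base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.pad_token or tok.eos_token  # some code tokenizers lack a pad token

model = get_peft_model(
    AutoModelForCausalLM.from_pretrained(base),
    LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
)

# Treat every .scala file under a local folder as one training document.
ds = load_dataset("text", data_files={"train": "scala_corpus/**/*.scala"})
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=512),
            batched=True, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="scala-lora",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=ds["train"],
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```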
-
In the Mistral Colab notebook, Mistral 7b Instruct is missing the **-bnb-4bit** suffix in the 4-bit model list:
```
# 4bit pre quantized models we support for 4x faster downloading + no OOMs.
…
```
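If I'm reading the naming convention right, the fix is just appending the suffix to that entry; the exact repo id below is my assumption of the intended name:
```python
fourbit_models = [
    "unsloth/mistral-7b-instruct-bnb-4bit",  # assumed intended name; was listed without -bnb-4bit
    # … other entries unchanged
]
```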
-
I modified the code to support the Codellama-34b model, but when using lwc and let simultaneously, the following error occurred:
```
Traceback (most recent call last):
  File "main.py", line 380, i…
```
-
Hi, I'm facing issues with this plugin.
I am using `lazy.vim` and lazy installed the plugin properly.
But when I type `:Gen` it shows this error: `E492: Not an editor command: Gen`
Details:
- N…
-
Running in a Docker container. After the output below, all subsequent API requests returned 'Internal server error'.
```
INFO 09-28 06:39:29 llm_engine.py:613] Avg prompt throughput: 0.0 tokens/s, Avg generation…
```
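A minimal way to confirm the failure once the engine reaches this state, assuming the container exposes vLLM's OpenAI-compatible endpoint on the default port; the model name is a placeholder:
```python
# Probe the vLLM OpenAI-compatible server after the error state is reached.
# URL and model name are assumptions based on vLLM's default serving setup.
import requests

resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={"model": "mistralai/Mistral-7B-Instruct-v0.1",
          "prompt": "Hello", "max_tokens": 16},
    timeout=30,
)
print(resp.status_code)  # 500 once the engine is wedged
print(resp.text)         # 'Internal server error'
```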
-
llama.cpp version llama-b2050-bin-win-avx2-x64
version: 2050 (19122117)
Windows 10
Running on AMD 3900x CPU
Command: `server --threads 23 --ctx-size 16384 --mlock --model models\phind-codellama-…
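For reference, this is roughly how I probe the server started by that command; `/completion` and the default `127.0.0.1:8080` bind are llama.cpp server defaults, though the port may differ if it was set in the truncated part of the command line:
```python
# Probe llama.cpp's built-in HTTP server (default bind: 127.0.0.1:8080).
import requests

resp = requests.post(
    "http://127.0.0.1:8080/completion",
    json={"prompt": "def quicksort(", "n_predict": 64},
    timeout=60,
)
print(resp.json()["content"])
```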
-
Operating System: Windows
GPU: NVIDIA with 6GB memory
Description:
While switching between Mistral 7B and Codellama 7B, I noticed a decrease in the available GPU memory for layers offloaded to the GP…
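One way to quantify the drop between switches, assuming a CUDA-capable PyTorch install is available on the machine:
```python
# Report free vs. total VRAM; run before and after switching models to
# measure the decrease in memory available for offloaded layers.
import torch

free, total = torch.cuda.mem_get_info()
print(f"free: {free / 2**30:.2f} GiB / total: {total / 2**30:.2f} GiB")
```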
-
### What happened?
from: https://github.com/danielmiessler/fabric/issues/272#issuecomment-2028275846
> Our installation issues have dropped massively since switching to pipx
Still does not inst…
-
Hello
I have been testing llama.cpp on Ubuntu 22.04 with ROCm 5.6. It took me about three months to set up multi-GPU: one RX 6900, two RX 6800s, and one RX 6700, all running together on PCIe x1 Gen1.
![image](…
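As an aside, a quick sanity check that all four cards are visible to the ROCm runtime; this uses a ROCm build of PyTorch, which exposes HIP devices through the torch.cuda API (llama.cpp itself doesn't need this):
```python
# Enumerate HIP devices as seen by a ROCm build of PyTorch.
import torch

for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))
```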