-
from llama_cpp import Llama

llm = Llama(
    model_path="C:\\Users\\ArabTech\\Desktop\\4\\phi-3.5-mini-instruct-q4_k_m.gguf",
    n_gpu_layers=-1,
    verbose=True,
)
output = llm(
    "Q:…
-
I know AMD support was added even though you don't have one. Is it possible to implement support for Intel's iGPU?
-
I've got a 9900X. It detects AMD Granite Ridge, but when I select "use this GPU" there's a runtime error that causes the script to panic and exit. My second GPU is a Radeon RX 6700XT.
panic: runtime err…
-
### Rules
- [X] I made myself familiar with the Readme, FAQ and Troubleshooting.
- [X] I understand that, if insufficient information is provided, my issue will be closed without an answer.
###…
-
I am using a laptop with an i7-1260P and Intel Iris Xe. I followed the SR-IOV instructions. When I install the Intel driver from Intel's website and restart, Windows cannot use the GPU and shows Code 43. I am using Pop!_OS wi…
-
-
**Is your feature request related to a problem? Please describe.**
I'm currently using frigate in an unprivileged LXC, and so I cannot see iGPU stats. I was wondering if iGPU stats could be included …
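Not Frigate's own stats code, just a minimal sketch for checking from inside the container whether the DRM device nodes are visible and readable at all; if they are not, no GPU stats tool has anything to query (the paths assume the usual /dev/dri layout):

import glob
import os

# From inside the unprivileged LXC: list the DRM nodes and whether we can read them.
nodes = sorted(glob.glob("/dev/dri/*"))
for node in nodes:
    print(node, "readable" if os.access(node, os.R_OK) else "no access")
if not nodes:
    print("/dev/dri is empty: the iGPU is not passed into the container")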
-
I have confirmed the device ID using nvidia-smi, and nvtop seems to pick up the correct device when it is configured via the UI, yet when configured like below it shows absolutely nothing, prioritising the inte…
-
### Problem Description
If the iGPU is enabled it shows up as /sys/class/drm/card0. rocm-smi does not report it, but it does try to write commands to it when it should be writing to /sys/class/…
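A minimal diagnostic sketch, assuming a standard Linux sysfs layout, to see which DRM card index the iGPU actually occupies (vendor 0x1002 is AMD, 0x8086 is Intel); this is not part of ROCm, just a quick check:

import glob
import os

# Print each /sys/class/drm/cardN together with its PCI vendor ID so the
# iGPU/dGPU index mismatch described above can be seen directly.
for card in sorted(glob.glob("/sys/class/drm/card[0-9]*")):
    vendor_path = os.path.join(card, "device", "vendor")
    if not os.path.isfile(vendor_path):
        continue  # connector entries like card0-HDMI-A-1 have no vendor file
    with open(vendor_path) as f:
        print(card, f.read().strip())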
-
Hello,
I can successfully run Ollama on the iGPU of a Core i7-13700K, but the performance is half that of the CPU. Is it possible to use the CPU and iGPU together so that we can get 1.5x the performance of the CPU o…
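Not an Ollama setting as such, but in the underlying llama.cpp stack a CPU/GPU split is expressed by offloading only part of the layers; a minimal llama-cpp-python sketch, where the model path and layer count are placeholders rather than values from this report:

from llama_cpp import Llama

# Offload roughly half of the layers to the iGPU and keep the rest on the CPU.
llm = Llama(
    model_path="model.gguf",  # placeholder path
    n_gpu_layers=16,          # assumed split point; tune per model and hardware
)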