-
torch.__version__ = 2.1.0+cu121
Compiling cuda extensions with
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Sep__8_19:17:24_PDT_2023
Cuda com…
-
### What is the issue?
My card is a W7900 and my ROCm driver is 6.3. I found that the llama-cpp server started by Ollama always runs without the -fa flag.
I checked the code and found:
…
-
**Is your feature request related to a problem? Please describe.**
I am trying to run ldview from a Python script in the nvidia/cuda:12.5.0-runtime-ubuntu22.04 Docker image on an NVIDIA Tesla T4 in AKS. I hav…
-
Does this code support CUDA 12?
I recently updated CUDA and I'm no longer able to run my code.
Thank you
-
### Contact Details
_No response_
### What happened?
I just downloaded [Meta-Llama-3.1-8B-Instruct.Q5_K_M.llamafile](https://huggingface.co/Mozilla/Meta-Llama-3.1-8B-Instruct-llamafile/blob/m…
-
### Voice Changer Version
voice-changer-windows-nvidia-b2309
### Operational System
windows 10
### GPU
gtx 1660
### CUDA Version
12.6
### Read carefully and check the options
- [X] If you use…
-
Hello, author. What versions of CUDA and the NVIDIA driver are you using? TensorFlow 1.14.0 officially supports CUDA 10.0, and torch 1.11.0 officially supports CUDA 10.2, but the 10.2 version does not support te…
-
Hi, I'm getting an error on my Ada RTX 4000 machine, which supports BF16 and runs Stable Diffusion just fine. I get the error on the quantized FLUX update.
Running with no model specified, or dev, or s…
-
Creating a copy of a device array is not trivial, and it should be. A couple of current workarounds are:
```python
# Variant 1
@numba.vectorize(['float32(float32)'], target='cuda')
def copy(x):
…
```
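For readers without a CUDA device at hand, the trick in Variant 1 is that mapping the identity function over a buffer forces allocation of a fresh output array, which is effectively a copy. A minimal CPU-side sketch of the same idea, using only the standard library (the names `identity` and `copy_via_kernel` are illustrative, not part of Numba's API):

```python
from array import array

def identity(x):
    # elementwise "kernel": returns its input unchanged
    return x

def copy_via_kernel(buf):
    # allocate a fresh buffer of the same typecode and apply the kernel
    # to every element -- the same thing the vectorized CUDA copy does,
    # just serially on the host
    return array(buf.typecode, (identity(v) for v in buf))

src = array('f', [1.0, 2.0, 3.0])
dst = copy_via_kernel(src)
# dst holds the same values in separate storage, so mutating one
# does not affect the other
```

On the GPU, the `@numba.vectorize(..., target='cuda')` decorator turns the same one-line identity function into a device kernel, so the "copy" happens entirely in device memory without a round trip through the host.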
-
Hello,
It appears that this does not support CUDA. Do you plan on supporting it in the future like the regular piper?