jmalfara opened this issue 1 year ago
Commenting out #L7071 stops this error, but I'm still curious as to what instruction wasn't supported 🤔
This actually amounts to burying one's head in the sand: you've only eliminated the error message, but the error itself still exists. You can set `CUDA_ARCH_FLAG=all` in the Makefile to solve this problem.
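As for which instruction: one plausible candidate on Maxwell is `__dp4a`, the 4-way int8 dot product that ggml's quantized kernels lean on; it only exists on compute capability 6.1 and newer, while the 860M is sm_50. A minimal illustrative sketch of the guard pattern (not the actual ggml-cuda.cu code):

```cuda
// Illustrative sketch of per-architecture gating in device code. __dp4a
// (4-way int8 dot product with accumulate) exists only on sm_61+; Maxwell
// (sm_50) takes the #else branch, so a build without a fallback there has
// nothing valid to run on the 860M.
__global__ void int8_dot(const int * a, const int * b, int * out) {
#if __CUDA_ARCH__ >= 610
    *out = __dp4a(*a, *b, 0);   // hardware int8 dot product, sm_61 and newer
#else
    // pre-sm_61 fallback: unpack the four int8 lanes and accumulate manually
    int acc = 0;
    for (int i = 0; i < 4; ++i) {
        const int av = (signed char)((*a >> (8 * i)) & 0xff);
        const int bv = (signed char)((*b >> (8 * i)) & 0xff);
        acc += av * bv;
    }
    *out = acc;
#endif
}
```

Note that `CUDA_ARCH_FLAG=all` only helps if the kernels actually provide a pre-sm_61 fallback; if the `#else` branch just aborts, the error persists even with every architecture compiled in, which would match the report below.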
I have an NVIDIA GeForce GTX 860M, and I've suddenly been having the same issue since the last pull; it worked fine before the update. I forced CUDA_ARCH_FLAG to all and the error persists.
EDIT: yep, falling back to commit fa8dbdc [1.4.0] and the GPU works perfectly.
> > Commenting out #L7071 stops this error, but I'm still curious as to what instruction wasn't supported 🤔
>
> This actually amounts to burying one's head in the sand: you've only eliminated the error message, but the error itself still exists. You can set `CUDA_ARCH_FLAG=all` in the Makefile to solve this problem.
I didn't notice the `exit(1)`. That makes way more sense compared to a print line causing a crash...
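For anyone else tracing this: the block in question is a diagnostic print followed by a hard `exit(1)`, so deleting only the print silences the message while the process still dies. A hedged sketch of the pattern — `check_arch` and `MIN_SUPPORTED_CC` are hypothetical names, not the actual whisper.cpp identifiers:

```cpp
#include <cstdio>
#include <cstdlib>

// Sketches the shape of the check, not the exact ggml-cuda.cu lines.
static void check_arch(int compute_capability) {
    const int MIN_SUPPORTED_CC = 610;  // illustrative threshold (e.g. dp4a needs sm_61)
    if (compute_capability < MIN_SUPPORTED_CC) {
        fprintf(stderr, "ggml-cuda: unsupported CUDA architecture %d\n", compute_capability);
        exit(1);  // <-- this is what kills the process; removing only the fprintf just hides the message
    }
}
```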
> I have an NVIDIA GeForce GTX 860M, and I've suddenly been having the same issue since the last pull; it worked fine before the update. I forced CUDA_ARCH_FLAG to all and the error persists.
>
> EDIT: yep, falling back to commit fa8dbdc [1.4.0] and the GPU works perfectly.
Forcing CUDA_ARCH_FLAG still results in the problem for me as well. What's interesting in my case is that fa8dbdc [1.4.0] doesn't work on my machine with Docker, even CPU-only: there are no errors, it just exits. I'll continue to investigate, but at least I'm not the only one who saw this issue.
I don't mean to poke, but this is still an issue. For context, I am using an NVIDIA GeForce GTX 860M.
Does it work if you apply this patch?
```diff
diff --git a/ggml-cuda.cu b/ggml-cuda.cu
index b420330..9da239a 100644
--- a/ggml-cuda.cu
+++ b/ggml-cuda.cu
@@ -96,7 +96,7 @@
 // - 7B quantum model: +100-200 MB
 // - 13B quantum model: +200-400 MB
 //
-//#define GGML_CUDA_FORCE_MMQ
+#define GGML_CUDA_FORCE_MMQ
 // TODO: improve this to be correct for more hardware
 // for example, currently fails for GeForce GTX 1660 which is TURING arch (> VOLTA) but does not have tensor cores
```
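For context on what the patch toggles: `GGML_CUDA_FORCE_MMQ` forces the quantized integer (MMQ) matmul kernels instead of letting the heuristic pick the cuBLAS/tensor-core path on newer GPUs. A simplified sketch of that kind of dispatch (the function name is illustrative, not the exact ggml-cuda.cu logic):

```cpp
// Illustrative dispatch sketch: GGML_CUDA_FORCE_MMQ overrides the tensor-core
// heuristic so the quantized MMQ kernels are always used.
static bool should_use_mul_mat_q(int compute_capability) {
    const int CC_VOLTA = 700;      // sm_70, first architecture with tensor cores
#ifdef GGML_CUDA_FORCE_MMQ
    (void) compute_capability;
    return true;                   // the patch above: always take the MMQ path
#else
    // default heuristic: prefer the cuBLAS/tensor-core path on Volta and newer
    return compute_capability < CC_VOLTA;
#endif
}
```

Since the 860M has no tensor cores it should already be on the MMQ path by default, which may explain why the patch makes no difference.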
> Does it work if you apply this patch?
The first commit with this issue is f96e1c5b7865e01fece99f69286d922d949a260d (#1422). That patch doesn't help.
It's an old card, I know, but hopefully there is something that can be done.
https://github.com/ggerganov/whisper.cpp/blob/master/ggml-cuda.cu#L7069-#L7071
There seems to be an issue with Maxwell cards not supporting some type of function in CUDA. I'm not sure exactly what instruction is not supported, but maybe someone can provide some insight?
In this sample I manually disabled the tensor cores by forcing GGML_CUDA_FORCE_MMQ, but the issue still exists.
An important thing to note is that I compiled the library on a device with a 3070. That could likely be the root cause.
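The cross-compile theory is easy to check: a 3070 is Ampere (sm_86) while a Maxwell 860M is sm_50, and device code built only for the host GPU's architecture won't run on the older card. A small standalone checker (a hypothetical helper, not part of whisper.cpp) to see what the runtime reports:

```cpp
// cc_check.cu - print each GPU's compute capability
// build: nvcc -o cc_check cc_check.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "no CUDA devices found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, i) != cudaSuccess) {
            fprintf(stderr, "cudaGetDeviceProperties failed for device %d\n", i);
            continue;
        }
        // a GTX 860M (Maxwell) should report 5.0; a 3070 (Ampere) reports 8.6
        printf("device %d: %s, compute capability %d.%d\n", i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```

Compare the reported capability against the architectures the binary was actually built for (i.e. whatever `CUDA_ARCH_FLAG` expanded to at compile time).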