-
**Problem**
Lacking support for VNNI (AVX-512 Vector Neural Network Instructions).
**Success Criteria**
**Additional context**
-
### Describe the feature request
Wasm Relaxed SIMD includes integer dot product instructions, which will map to VNNI instructions on X86-64 platforms with AVX-VNNI (on ARM maybe SDOT, but I haven't t…
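For reference, here is a scalar sketch (my own illustration, not part of the request) of one 32-bit lane of the dot-product-accumulate that VNNI's `vpdpbusd` performs and that the relaxed integer dot product is expected to lower to:

```rust
/// One 32-bit lane of a VNNI-style dot product with accumulation:
/// acc += a0*b0 + a1*b1 + a2*b2 + a3*b3, with each product widened to i32.
/// x86 vpdpbusd pairs unsigned bytes with signed bytes; the Wasm relaxed
/// variant limits one operand to 7 bits so either signedness interpretation works.
fn dot_accumulate_lane(acc: i32, a: [u8; 4], b: [i8; 4]) -> i32 {
    let mut sum = acc;
    for k in 0..4 {
        sum = sum.wrapping_add(i32::from(a[k]) * i32::from(b[k]));
    }
    sum
}

fn main() {
    // 1*5 + 2*(-6) + 3*7 + 4*(-8) = -18, accumulated onto 10.
    let lane = dot_accumulate_lane(10, [1, 2, 3, 4], [5, -6, 7, -8]);
    assert_eq!(lane, -8);
    println!("lane = {lane}");
}
```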
-
I've been working on securing the user input, escaping invalid characters; however, I've encountered a few prompts which cause llama-cli to halt abruptly:
```
.\llama-cli.exe --model "..\..\..\mod…
```
-
### What happened?
I use `-i -if` and the flags are ignored, and it exits with "input is empty".
llama_new_context_with_model: graph nodes = 2246
llama_new_context_with_model: graph splits = 1
co…
-
Some Intel Xeon server CPUs (for example _Xeon Platinum 8171M_ or _Xeon Platinum 8272CL_) support the VNNI instructions. Is this something that could be used for better performance, or is it not suited fo…
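A minimal sketch of how a runtime dispatch on this could look (my illustration, not from the question; the feature string is the one I understand `is_x86_feature_detected!` to accept, assuming a CPU that actually exposes AVX-512 VNNI):

```rust
fn main() {
    // Runtime check before choosing a kernel: AVX-512 VNNI adds vpdpbusd
    // and friends, which fuse the multiply/add steps of int8 dot products.
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx512vnni") {
            println!("AVX-512 VNNI available: dispatch to the VNNI int8 kernels");
        } else {
            println!("No AVX-512 VNNI: fall back to AVX2/scalar kernels");
        }
    }

    #[cfg(not(target_arch = "x86_64"))]
    println!("not an x86-64 build");
}
```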
-
### Background and motivation
There is already support for the AVX-VNNI hardware instruction set with 128-/256-bit vectors, and it would be good to have the same support for 512-bit vectors. (ve…
-
The WASM [Relaxed SIMD](https://github.com/WebAssembly/relaxed-simd) instructions were stabilized in [Rust v1.82](https://blog.rust-lang.org/2024/10/17/Rust-1.82.0.html#stabilized-apis).
This inclu…
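A sketch of how the relaxed dot product can be called from Rust (the intrinsic name and target-feature flags below are my reading of `core::arch::wasm32` and should be checked against the stdarch docs; shown here as an assumption, not the library's confirmed API):

```rust
// Build for wasm with relaxed SIMD enabled, e.g.:
//   RUSTFLAGS="-C target-feature=+simd128,+relaxed-simd" \
//   cargo build --target wasm32-wasip1 --release
#[cfg(target_arch = "wasm32")]
use core::arch::wasm32::*;

/// acc (4 x i32 lanes) += dot(a, b), where a and b hold 16 x i8 lanes.
/// Engines may lower this to AVX-VNNI vpdpbusd on x86-64 or SDOT on AArch64.
#[cfg(target_arch = "wasm32")]
#[target_feature(enable = "relaxed-simd")]
pub fn dot_i8_accumulate(a: v128, b: v128, acc: v128) -> v128 {
    i32x4_relaxed_dot_i8x16_i7x16_add_s(a, b, acc)
}

fn main() {
    // Nothing to run natively; the function above only compiles for wasm32.
}
```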
-
I am trying to build ANTsR on relatively old hardware that doesn't support AVX-512 (the CPU is a Xeon v2). During `R CMD INSTALL ITKR` it fails at this step:
```
[ 22%] Building C object Modules/…
```
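Before digging into the build flags, it can help to confirm what the host actually reports; a quick Linux-only sketch (illustrative, not part of the ITKR build):

```rust
use std::fs;

fn main() {
    // Check /proc/cpuinfo for any avx512* flag. An Ivy Bridge-era Xeon v2
    // reports AVX but no AVX-512, so AVX-512 code paths need to be disabled
    // for this host.
    let cpuinfo = fs::read_to_string("/proc/cpuinfo").unwrap_or_default();
    let flags_line = cpuinfo
        .lines()
        .find(|l| l.starts_with("flags"))
        .unwrap_or("");
    let has_avx512 = flags_line
        .split_whitespace()
        .any(|f| f.starts_with("avx512"));
    println!("AVX-512 reported by this CPU: {has_avx512}");
}
```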
-
I am running ollama on an i7-14700K, which supports AVX2 and AVX-VNNI, and a GeForce RTX 1060.
After reading #2205, I enabled `OLLAMA_DEBUG=1` to check whether ollama utilizes the AVX2 of this CPU. But unlike th…
-
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf 2.17
### Custom code
Yes
### OS platform and d…