-
I am writing machine learning software that needs to compute “Y = exp(a⋅X)”.
Sample code:
```c++
#include <cmath>
#include <cstddef>

void func(float a[]) {
    for (std::size_t i = 0; i != 16; i++) {
    …
```
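The truncated sample above can be fleshed out as a minimal sketch. Assuming `a` is a per-element coefficient array (as the `float a[]` signature suggests), and with hypothetical `X`/`Y` buffers and length `n`, an elementwise `Y = exp(a⋅X)` might look like:

```cpp
#include <cmath>
#include <cstddef>

// Elementwise Y[i] = exp(a[i] * X[i]).
// The names X, Y, and the length parameter n are assumptions;
// the original snippet is truncated, so this is illustrative only.
void exp_scale(const float a[], const float X[], float Y[], std::size_t n) {
    for (std::size_t i = 0; i != n; i++) {
        Y[i] = std::exp(a[i] * X[i]);
    }
}
```

A compiler should be able to auto-vectorize this loop when the trip count is a known constant such as 16, which is presumably what the original question is about.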
-
### What happened?
```
You are a helpful assistant
> what is 2+2+2+2
44444444444444444444444444444444444444444444444444444444444444444444444444444444444444444
>
```
When I run llama-cli with…
-
- Debugger version (can be found on the right-hand side of the debugger's menu bar): Jun 19, 2018
- Operating system version and Service Pack (including 32 or 64 bits): Win10
- Brief description of t…
-
~/llama-node/packages/llama-cpp$ node example/mycode.ts
llama.cpp: loading model from /llama-node/packages/llama-cpp/ggml-vic7b-uncensored-q5_1.bin
llama_model_load_internal: format = ggjt v2 (…
-
### Your current environment
```text
Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A…
```
-
To reproduce:
```
$ git clone -q https://github.com/libjxl/libjxl
$ cd libjxl
$ bash deps.sh
[…]
$ mkdir build
$ cd build
$ cmake .. -G Ninja
-- The C compiler identification is GNU 13.2.1
…
```
-
https://github.com/xianyi/OpenBLAS#normal-compile says this:
> Simply invoking `make` (or `gmake` on BSD) will detect the CPU automatically. To set a specific target CPU, use `make TARGET=xxx`, e.g…
-
Hi,
we are trying to quantize our ONNX models to int8 to run on CPU using https://onnxruntime.ai/docs/performance/model-optimizations/quantization.html#quantization-on-gpu
We are using dynamic …
-
(Splitting up the related issue #255)
For Thinkpads with i7-1165G7 (so far confirmed P14s Gen 2, P15s Gen2), the `lenovo_fix.py --debug` script reports the following issue:
```
[D] MCHBAR PACKA…
```
-
Many intrinsics have wrong instruction assertions; e.g., all the AVX512 `mm512_mask_blend_epi16*` variants check for `vmovdqu16` instead of `vpblend*`, and on AArch64 `vget_high_p64` tests for `ldr` instead of `ext…