-
### brief
**NOTE**: on the default platform, which is the x86_64 (k8) toolchain, compiling and linking work.
I wonder whether this is a bug or just a misconfiguration in how I am using this repo?
### envi…
-
Do you have to install the cuBLAS/cuDNN libraries for CUDA 11, or will it also work with the CUDA 12 versions?
`nvidia-smi` says my GPU supports CUDA 12.1, so can I get away with using the CUDA 12 …
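Not part of the original question, but it may help to note that `nvidia-smi` reports the highest CUDA version the installed driver supports, not the toolkit or libraries actually present. A quick way to check what is really installed (standard commands; exact output varies by system):

```
# Highest CUDA version the driver supports (what nvidia-smi reports)
nvidia-smi
# CUDA toolkit actually installed, if any
nvcc --version
# Which cuBLAS/cuDNN libraries the dynamic linker can see
ldconfig -p | grep -E 'libcublas|libcudnn'
```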
-
After many attempts, I finally got it installed. Environment: Win10, Python 3.11, torch 2.4.1, CUDA 12.4.
***Use CMD***
PowerShell fails; I'm not sure why.
Clone the repository locally, then open cmd and change into the repository directory.
Run
git checkout apex_no_distributed
then run
pip install -v --no-cache-dir ./
and it finally installed successfully.
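Putting the steps above together, the full sequence from a CMD prompt looks roughly like this (the repository URL is not given above, so `<repository-url>` and `<repository-directory>` are placeholders):

```
git clone <repository-url>
cd <repository-directory>
git checkout apex_no_distributed
pip install -v --no-cache-dir ./
```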
-
### What is the issue?
OS: Ubuntu 24.04 LTS
GPU: Nvidia Tesla P40 (24G)
I installed ollama without Docker and it was able to utilise my GPU without any issues.
I then deployed ollama using the f…
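The deployment details are cut off above; for reference, a typical Docker invocation for GPU access (per the ollama Docker instructions) looks like this, assuming the NVIDIA Container Toolkit is installed on the host:

```
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```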
-
### What is the issue?
I have deployed ollama using the docker image 0.3.10. Loading "big" models fails.
llama3.1 and other "small" models (e.g. codestral) fit into one GPU and work fine. llama3.1…
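Not part of the original report, but a quick way to see how a loaded model is actually being placed across GPUs and CPU (assuming `nvidia-smi` and the `ollama` CLI are reachable from the host or inside the container):

```
# Per-GPU memory use while the model is loading
nvidia-smi
# Running models and how much of each is offloaded to GPU vs CPU
ollama ps
```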
-
A recent PR (https://github.com/LLNL/axom/pull/156) adds preliminary support for ``slic`` macros when ``axom`` is configured with ``cuda`` support.
Specifically, it converts all calls to ``SLIC_AS…
-
Please make sure that this is a build/installation issue. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bui…
-
I ran `jetson-container build ollama` and got an error:
Step 8/17 : ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/nvidia/compat:${LD_LIBRARY_PATH} CMAKE_CUDA_ARCHIT…
-
**Description**
wgpu does not run when Wayland is detected (i.e. when `WAYLAND_DISPLAY` is not empty). Things sometimes work when using the gl backend; for example, ruffle is fine with this, but the examp…
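Not part of the original report, but two workarounds that can help narrow this down, assuming the examples honour the `WGPU_BACKEND` environment variable and that the windowing layer falls back to X11/XWayland when `WAYLAND_DISPLAY` is empty (the example name below is illustrative):

```
# Force the GL backend
WGPU_BACKEND=gl cargo run --example hello-triangle

# Force the X11/XWayland path instead of native Wayland
WAYLAND_DISPLAY= cargo run --example hello-triangle
```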
-
How do I run it using only the CPU, since my device does not have a GPU that supports CUDA/NVIDIA drivers?