-
Tried this out today and it fails on my (ARM) AGX Orin on Ubuntu 22.04 / JetPack 6 because CTranslate2 is missing CUDA support. Docker logs:
```
Traceback (most recent call last):
File "/usr/lib/python3.10/…
```
-
### Add Link
https://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html
### Describe the bug
Running the text classification tutorial line by line results in the following:
/pyt…
-
### Describe the bug
Installed ucx-1.16 and everything was working fine. The recognized devices/transports are in line with expectations. Installed OFED (MLNX_OFED_LINUX-24.04-0.6.6.0-rhel8.9-x86_6…
-
```
#0 0x00007bc0622c6554 in std::_Rb_tree_increment(std::_Rb_tree_node_base const*) () from /lib/x86_64-linux-gnu/libstdc++.so.6
No symbol table info available.
#1 0x00007bc05573e59a in cub::Cac…
```
-
Hello, I am trying to run this on CUDA 12.4, with PyTorch built from source against it, but I get the following error:
```
Traceback (most recent call last):
File "/mnt/generative-recommenders/train.py…
```
-
### Your current environment
```text
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC ve…
```
-
Hi there, thanks for this package, it's really helpful!
On a cluster with multiple GPUs, I have my model on device `cuda:1`.
When calculating FID with a passed `gen` function, new samples are …
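Assuming the truncated report describes a device mismatch (batches returned by the user-supplied `gen` function landing on a different device than the model held on `cuda:1`), a minimal sketch of moving each generated batch onto the metric's device before accumulation — `generate_on` and the lambda generator are hypothetical names for illustration, not part of the package:

```python
import torch

def generate_on(device, gen, n):
    # Hypothetical wrapper: call the user-supplied gen() and move its
    # output onto the device that holds the model/metric (e.g. cuda:1),
    # so FID statistics are accumulated on a single device.
    samples = gen(n)
    return samples.to(device)

# CPU demo (assumption: gen returns a batch tensor of images)
device = torch.device("cpu")
batch = generate_on(device, lambda n: torch.rand(n, 3, 32, 32), 4)
print(batch.device.type, tuple(batch.shape))  # → cpu (4, 3, 32, 32)
```

The same pattern works for `cuda:1` in place of `cpu`, provided `gen` returns a tensor rather than a NumPy array.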
-
I saw an error message when trying to do supervised fine-tuning with 4x A100 GPUs. So the free version cannot be used on multiple GPUs?
RuntimeError: Error: More than 1 GPUs have a lot of VRAM usa…
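If the free tier is limited to a single GPU, one common workaround (an assumption — it only helps if the trainer honors standard CUDA environment variables rather than enumerating devices itself) is to expose just one GPU to the process:

```shell
# Expose only GPU 0 to the training process; the other A100s become
# invisible to CUDA, so single-GPU checks should pass.
export CUDA_VISIBLE_DEVICES=0
echo "Visible GPUs: $CUDA_VISIBLE_DEVICES"
# then launch the fine-tuning script from this same shell
```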
-
### Checklist
- [X] I added a descriptive title.
- [X] I searched through [existing issues](https://github.com/ContinuumIO/anaconda-issues/issues) and couldn't find a solution or duplicate issue.
…
-
**PS > npx --no node-llama-cpp download --cuda**
Repo: ggerganov/llama.cpp
Release: b3197
CUDA: enabled
✔ Removed existing llama.cpp directory
Cloning llama.cpp
Clone ggerganov/llama.cpp (loca…