-
### Your current environment
PyTorch version: 2.1.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (U…
-
### Describe the issue
When running my ONNX model in C++ on the CPU, everything works perfectly. However, when running it with the CUDA provider it throws this error:
```
2024-06-13 16:21:29.9651844…
```
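Not part of the original report: a minimal Python sketch of the equivalent session setup with the CUDA execution provider, for reference; the model path, input name, and input shape are placeholders, not values from the report.
```
import numpy as np
import onnxruntime as ort

# Prefer the CUDA execution provider, falling back to CPU if it is unavailable.
# "model.onnx" and the (1, 3, 224, 224) float input are placeholders.
sess = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

input_name = sess.get_inputs()[0].name
dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)

outputs = sess.run(None, {input_name: dummy})
print(sess.get_providers(), outputs[0].shape)
```
Listing `CPUExecutionProvider` after the CUDA provider keeps a fallback so nodes the CUDA provider cannot handle are still assigned somewhere instead of failing the whole session.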
-
### System Info
- CPU: x86
- Memory: over 300G
- GPU: 8 x V100
- No IB, no NVLink; NCCL uses sockets for communication (a sketch of typical settings for this follows the driver listing)
Driver:
```
+--------------------------------------------------------------…
```
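Not from the original report: a small sketch of the NCCL environment settings such a setup (no IB, no peer-to-peer) usually implies; the `eth0` interface name and the torchrun launch are assumptions.
```
import os

# NCCL reads these before the process group is created.
# NCCL_IB_DISABLE=1 turns off the InfiniBand transport (none is present here).
# NCCL_P2P_DISABLE=1 turns off NVLink/PCIe peer-to-peer between the V100s.
# NCCL_SOCKET_IFNAME picks the NIC used for socket traffic ("eth0" is an assumption).
os.environ.setdefault("NCCL_IB_DISABLE", "1")
os.environ.setdefault("NCCL_P2P_DISABLE", "1")
os.environ.setdefault("NCCL_SOCKET_IFNAME", "eth0")

import torch
import torch.distributed as dist

# Assumes a torchrun launch, which provides RANK/WORLD_SIZE/LOCAL_RANK/MASTER_ADDR.
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
dist.init_process_group(backend="nccl")
```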
-
Is there any work related to CUDA 12 support planned soon?
Type of Issue
Question
-
**Describe the bug**
The spec is somewhat vague about the behavior of `is_compatible`:
> A kernel that is defined in the application is compatible with a device unless:
> • It uses optional fe…
-
### Describe the issue
It runs fine on the GPU, but it crashes when it terminates:
terminate called after throwing an instance of 'onnxruntime::OnnxRuntimeException'
what(): /onnxruntim…
-
Hello, I am using an NVIDIA Jetson device, which has unified CPU/GPU memory, and I'm trying to eliminate unneeded CPU/GPU memory copies. I noticed there are `kDLCUDAManaged` and `kDLCUDAHost` device ty…
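Not from the original question: a small Python sketch of how a tensor's DLPack device type can be inspected; the numeric codes are the `DLDeviceType` values from `dlpack.h` and are stated here as assumptions.
```
import torch

# DLDeviceType codes from dlpack.h (assumed values):
#   kDLCPU = 1, kDLCUDA = 2, kDLCUDAHost = 3 (pinned host), kDLCUDAManaged = 13 (unified)
KDL_CUDA_HOST = 3
KDL_CUDA_MANAGED = 13

t = torch.arange(8, device="cuda")

# __dlpack_device__() is the standard DLPack protocol query; an ordinary device
# tensor reports kDLCUDA (2), not the managed or host-pinned variants.
dev_type, dev_id = t.__dlpack_device__()
print(dev_type, dev_id)
```
Whether the other side of a DLPack exchange actually honors `kDLCUDAManaged` or `kDLCUDAHost` for zero-copy import depends on the consumer library.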
-
### 🐛 Describe the bug
I use a CUDA graph to capture send/recv, but the capture fails.
The code is run with `torchrun --nnodes=1 --nproc_per_node=2 test.py`
```
import os
import torch
import torch.distr…
```
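The snippet above is cut off; a self-contained sketch of the kind of capture being described (two ranks, NCCL backend, launched with the same torchrun command) might look like the following. It reproduces the attempt rather than a known-working configuration, and the buffer size is arbitrary.
```
import os
import torch
import torch.distributed as dist

def main():
    # torchrun provides RANK/LOCAL_RANK/WORLD_SIZE/MASTER_ADDR for us.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()

    buf = torch.ones(1 << 20, device="cuda")

    # Warm-up outside the graph so the NCCL communicator exists before capture.
    if rank == 0:
        dist.send(buf, dst=1)
    else:
        dist.recv(buf, src=0)
    torch.cuda.synchronize()

    # Attempt to capture the point-to-point send/recv into a CUDA graph.
    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):
        if rank == 0:
            dist.send(buf, dst=1)
        else:
            dist.recv(buf, src=0)

    g.replay()
    torch.cuda.synchronize()
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```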
-
[output_and_error.log](https://github.com/togethercomputer/OpenChatKit/files/13232768/output_and_error.log)
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce…
-
Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC ve…