-
### Describe the issue
I am trying to run inference with an ONNX model on a Jetson AGX Orin with JetPack 6.1; CUDA is 12.6 and cuDNN is 9.3. The website says onnxruntime 1.19.0 supports cuDNN 9.x, but w…
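One thing worth checking in setups like this is that onnxruntime's GPU builds are linked against a specific cuDNN *major* version, so only the major number has to match the installed library. A minimal sketch of that matching rule (the helper `cudnn_major_matches` is hypothetical, not part of onnxruntime):

```python
# Hypothetical helper (NOT an onnxruntime API): model the compatibility rule
# that an onnxruntime GPU build linked against cuDNN N.x accepts any installed
# cuDNN with the same major version N.
def cudnn_major_matches(installed_version: str, build_major: int) -> bool:
    """Return True if the installed cuDNN's major version matches the
    major version the onnxruntime build was linked against."""
    installed_major = int(installed_version.split(".")[0])
    return installed_major == build_major

# JetPack 6.1 ships cuDNN 9.3, so a build targeting cuDNN 9.x should accept it,
# while a build linked against cuDNN 8.x would not.
print(cudnn_major_matches("9.3", 9))  # cuDNN 9.3 against a cuDNN-9 build
print(cudnn_major_matches("9.3", 8))  # cuDNN 9.3 against a cuDNN-8 build
```

In practice the same check can be done by inspecting which shared library (`libcudnn.so.8` vs `libcudnn.so.9`) the installed onnxruntime wheel actually loads.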
-
### 🐛 Describe the bug
The [`AOTIModelPackageLoader::run`](https://github.com/pytorch/pytorch/blob/b4cc5d38b416c8e74a6ba8f537a75571a3cdd563/torch/csrc/inductor/aoti_package/model_package_loader.cpp#L…
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
- [X] 3. Please note that if the bug-related issue y…
-
The Makefile is generated by:
```shell
cmake -S . -B build -D GPU_RUNTIME=CUDA
cmake --build build
```
result of running `make GPU_RUNTIME=CUDA`:
```bash
$:~/Code/rocm-examples/HIP-Basic/hello_world_cud…
-
### 🐛 Describe the bug
Execute Triton XPU tutorial:
```
https://raw.githubusercontent.com/intel/intel-xpu-backend-for-triton/refs/heads/main/python/tutorials/01-vector-add.py
python 01-vector-add.…
-
**Describe the bug**
When running with CUDA after launching Julia under Nsight Systems, the program quits, but a profiling report is still generated.
**To reproduce**
The Minimal Working Example (…
-
### Describe the bug
Hey! I am learning to use SYCL, but I encountered a small issue when using `sycl::atomic_ref::exchange`. Things work fine on the CPU, but when I switched to the GPU even a very simple te…
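For readers unfamiliar with the operation being debugged: `exchange` atomically stores a new value and returns whatever was stored before. A toy Python model of that contract (this is an illustration only, not SYCL; a lock stands in for the hardware atomics):

```python
import threading

class AtomicCell:
    """Toy model of the exchange contract of sycl::atomic_ref:
    swap in a new value and return the previous one, atomically."""

    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def exchange(self, new_value):
        # The read of the old value and the write of the new one happen
        # under one lock, so no other thread can observe an intermediate state.
        with self._lock:
            old, self._value = self._value, new_value
            return old

cell = AtomicCell(0)
print(cell.exchange(42))  # prints 0: the value stored before the swap
print(cell.exchange(7))   # prints 42
```

On a GPU the interesting failure modes usually come from the memory scope and memory order arguments of `atomic_ref`, which this CPU-side model does not capture.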
-
### Your current environment
The output of `python collect_env.py`
```text
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch…
-
Nice project!
This issue (post?) records the obstacles and solutions I encountered during the build process. I hope the maintainers can update the script after seeing this to make the build proc…
-
### 🐛 Describe the bug
Hi, we are trying to enable Llama on Intel HPU, but found a graph break in `torch.cuda.current_device()`. We have a GPU migration tool package that replaces `torch.cuda.curre…
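The substitution idea behind such a migration tool can be sketched generically (the real tool's internals are not shown; the modules and names below are stand-ins built with `types.SimpleNamespace`): a call site written against the CUDA-style API is redirected by patching the attribute before the code is traced or compiled.

```python
# Generic attribute-patching sketch, NOT the actual GPU migration tool.
# Two fake backend namespaces model torch.cuda and its HPU counterpart.
import types

cuda = types.SimpleNamespace(current_device=lambda: "cuda:0")
hpu = types.SimpleNamespace(current_device=lambda: "hpu:0")

def model_step(backend):
    # Call site written against the CUDA-style API.
    return backend.current_device()

print(model_step(cuda))  # prints "cuda:0"

# Redirect: replace the CUDA entry point with the HPU equivalent,
# so existing call sites transparently hit the new backend.
cuda.current_device = hpu.current_device
print(model_step(cuda))  # prints "hpu:0"
```

Whether the patched call still graph-breaks then depends on how the compiler's tracer treats the replacement function, which is presumably the crux of this issue.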