-
Hello! I am using an x86 machine running Windows 11 and trying to compile and install Triton on it, so I followed the official instructions:
```
git clone https://github.com/openai/triton.git;
cd triton/py…
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
- [X] 3. Please note that if the bug-related iss…
-
### Your current environment
```text
Collecting environment information...
/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:611: UserWarning: Can't initialize NVML
warni…
-
Thank you for the great resource!
Can this be installed purely via pip, as Python packages or pre-built wheels, without requiring conda? That would make local installs much less messy.
-
After following the 'Building with a custom LLVM' instructions, I built and installed Triton successfully and could run a simple vector_add Triton program.
Then I added one print statement in the Trit…
-
I tried to run the grouped GEMM tutorial on Hopper/H100:
https://github.com/openai/triton/blob/main/python/tutorials/11-grouped-gemm.py
I realize this is an experimental tutorial, but I hit this sa…
-
## Description
In addition to having typesense call OpenAI or Google Cloud ML APIs, or using the built-in ONNX runtime, it would be _wonderful_ to allow typesense to call custom model serving APIs.…
-
H100 cards have been released for several months, but there is little kernel support for their Float8 Tensor Cores.
-
Currently, code generation in Triton uses Python f-strings, which have to deal with all the escaping of special characters like `{` and `}`.
Is it possible to use some template engine like [Jinj…
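To illustrate the pain point: in an f-string, every literal brace in the generated code must be doubled, which gets noisy when the target language itself is brace-heavy. A minimal sketch (the `add_one` function and the C-like target are made up for illustration, not taken from Triton's codegen) comparing an f-string with the stdlib `string.Template`, whose `$`-placeholders sidestep brace escaping entirely:

```python
from string import Template

name = "add_one"
body = "return x + 1;"

# f-string: literal { and } in the output must be written as {{ and }}
generated = f"int {name}(int x) {{ {body} }}"
print(generated)  # int add_one(int x) { return x + 1; }

# string.Template: $-placeholders, so braces pass through unescaped
tmpl = Template("int $name(int x) { $body }")
print(tmpl.substitute(name=name, body=body))
```

A full template engine like Jinja2 adds loops and conditionals on top of this, at the cost of a third-party dependency; `string.Template` covers only simple substitution but ships with the standard library.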
-
I am trying to use DeepSpeed Inference with Diffusers on a T4 GPU, but there seems to be a Triton error.
Reported the bug on DeepSpeed for better tracking: https://github.com/microsoft/DeepSpeed/issue…