danikhan632 opened this issue 1 year ago
You may have missed the community recording, but I believe this topic was discussed; you can find the minutes and recording here: https://github.com/openai/triton/blob/main/docs/meetups/07-18-2023.md
From the notes: "currently the plan is to not upstream them directly but have a staging state and different backends can be integrated through a plugin mechanism where Triton provides a layer at the Triton IR layer that is generic and other backends can plug into that"
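To give a rough feel for what "plug into the Triton-IR layer" could mean, here is a purely hypothetical sketch; `TritonBackend`, `register_backend`, `lower`, and `launch` are illustrative names I made up, not Triton's actual plugin API:

```python
# Hypothetical sketch only: none of these names are real Triton APIs.
# The point is just that a backend consumes generic Triton-IR and owns
# everything below it, instead of being compiled into the main repo.
from abc import ABC, abstractmethod

class TritonBackend(ABC):
    """A third-party backend that lowers Triton-IR to device code."""

    name: str  # e.g. "metal" for Apple silicon, "rocm" for AMD

    @abstractmethod
    def lower(self, ttir: str) -> bytes:
        """Translate a (stable) Triton-IR module into a device binary."""

    @abstractmethod
    def launch(self, binary: bytes, grid: tuple, *args) -> None:
        """Load the binary and launch it with the given grid and args."""

_BACKENDS: dict[str, TritonBackend] = {}

def register_backend(backend: TritonBackend) -> None:
    """The compiler would dispatch here after emitting Triton-IR,
    rather than hard-coding the NVIDIA (PTX) lowering path."""
    _BACKENDS[backend.name] = backend
```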
Ah, I guess I missed the part about the backend plugins, though I'm a bit curious what that would look like; hopefully it can be mapped out soon.
Hi!
The plug-in interface is already functional, though it may be slightly broken with the H100 rebase.
I would advise you to develop your backend on a fork of Triton that is sufficiently behind main. Triton-IR is stable and you should always be able to interface with it.
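For example, you should be able to dump the Triton-IR your backend would consume. A minimal sketch, assuming a 2.x-era Triton where launching a `@triton.jit` kernel returns its `CompiledKernel` and the IR stages are exposed through its `asm` dict:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n, BLOCK: tl.constexpr):
    pid = tl.program_id(axis=0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n
    x = tl.load(x_ptr + offs, mask=mask)
    y = tl.load(y_ptr + offs, mask=mask)
    tl.store(out_ptr + offs, x + y, mask=mask)

n = 1024
x = torch.rand(n, device="cuda")
y = torch.rand(n, device="cuda")
out = torch.empty_like(x)

# Launching compiles the kernel; keep the handle to inspect its IR.
compiled = add_kernel[(triton.cdiv(n, 128),)](x, y, out, n, BLOCK=128)
print(compiled.asm["ttir"])  # the Triton-IR a plugin backend would take over from
```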
Sounds good. I'll use a fork of Triton a bit behind main and look into rebasing once everything on my end is stable. Thanks for the info!
@danikhan632 have you had any success? Would be really nice to see Triton on M1!
I'm working on an Apple silicon backend, and the project has had quite an architecture shift. From my testing, I can't get ROCm to work either; NVIDIA GPUs seem to be the only ones working. It kind of looks like this new structure is pretty baked in around NVIDIA. Will the structure get a bit more abstract soon?
I'm guessing the repo is a bit stressed dealing with Hopper support, but no pressure; I just don't want to see the M1 support I've been working on go down the drain.