linkedin / Liger-Kernel

Efficient Triton Kernels for LLM Training
BSD 2-Clause "Simplified" License

[fun] llama.triton #119

Open ByronHsu opened 2 weeks ago

ByronHsu commented 2 weeks ago

🚀 The feature, motivation and pitch

@thomwolf and I have an idea to implement Llama from scratch in pure Triton, inspired by Karpathy. Liger Kernel already contains most of the required kernels except matmul. We would love to hear from anyone interested! It can be added under our example/ folder!

Alternatives

No response

Additional context

No response

ziliangpeng commented 2 weeks ago

omw

thevasudevgupta commented 2 weeks ago

I implemented GPT-2 in Triton a few days back. I guess Llama would be similar; you'd just need to implement a few Llama-specific layers.

Sharing it in case someone wants starting code!

vigneshbp commented 2 weeks ago

@thevasudevgupta Could you please share the code so that I can look into it directly?

thevasudevgupta commented 2 weeks ago

Oh, I forgot to link it. Sorry!

https://github.com/thevasudevgupta/gpt-triton

kerthcet commented 2 weeks ago

Do you guys think a triton based inference engine would be a good path?

ByronHsu commented 2 weeks ago

@kerthcet No, we want to focus on training here. Triton-based inference already has plenty of options, like vLLM.

ghostway0 commented 2 weeks ago

About the mm kernel: I wrote something like that, if that interests anyone.

If the TODOs there are fixed, I think a PR would make sense?
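For anyone joining the thread: the missing matmul kernel follows the standard blocked scheme Triton mm kernels use, where each program instance computes one output tile and accumulates partial products over the K dimension. Here is a minimal NumPy sketch of that tiling (block sizes and the function name are illustrative, not from Liger Kernel or the linked repo), which runs on CPU so the structure is easy to check:

```python
import numpy as np

def blocked_matmul(a, b, block_m=4, block_n=4, block_k=4):
    """Tiled matmul illustrating the blocking a Triton mm kernel uses:
    each (pid_m, pid_n) "program" computes one BLOCK_M x BLOCK_N output
    tile by accumulating A/B tiles over the K dimension in BLOCK_K steps."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((m, n), dtype=a.dtype)
    # The two outer loops stand in for Triton's 2D grid of program ids.
    for i0 in range(0, m, block_m):
        for j0 in range(0, n, block_n):
            rows = min(block_m, m - i0)
            cols = min(block_n, n - j0)
            acc = np.zeros((rows, cols), dtype=a.dtype)
            # Inner loop mirrors the kernel's accumulation over K tiles
            # (tl.load of A and B tiles followed by tl.dot into acc).
            for k0 in range(0, k, block_k):
                a_tile = a[i0:i0 + block_m, k0:k0 + block_k]
                b_tile = b[k0:k0 + block_k, j0:j0 + block_n]
                acc += a_tile @ b_tile
            c[i0:i0 + block_m, j0:j0 + block_n] = acc
    return c
```

In an actual Triton kernel the two outer loops disappear into the launch grid, and the tile loads/stores become masked `tl.load`/`tl.store` calls, but the accumulation structure is the same.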