OpenAccess-AI-Collective / axolotl

Go ahead and axolotl questions
https://openaccess-ai-collective.github.io/axolotl/
Apache License 2.0

[BOUNTY] Optimized Triton Kernels for full fine tunes #1038

Open winglian opened 5 months ago

winglian commented 5 months ago

🔖 Feature description

We've seen marketing from Unsloth claiming that optimized Triton kernels for various operations can significantly improve both the speed and memory efficiency of fine-tuning, for LoRA adapters as well as full fine-tunes. However, only the LoRA Triton kernels are open source. We are awarding a bounty of up to $600 each for optimized Triton kernels that are compatible with FlashAttention 2.0 for the following model architectures:

  1. Llama - $350
  2. Mistral (w/ Sliding Window Attention) - $250 (should be tackled after Llama, since the only change from Llama is SWA)
  3. Mixtral MoE - $600

[!IMPORTANT] EDIT: bounty has been doubled to $700, $500, and $1200 respectively thanks to a bounty match

To be eligible for a bounty, the submission to axolotl must be open-sourced under Apache 2.0, support single- and multi-GPU fine-tuning, include unit tests, and support both regular full fine-tuning and full fine-tuning with multipack. Kernels should include the forward and backward passes for the MLP and attention modules. For Mixtral, additional required kernels are sparse and grouped permute_and_compute, as well as kernels for expert gating.
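
For anyone wondering what a kernel in a submission might look like, below is a minimal, illustrative sketch of a fused SwiGLU activation forward in Triton. All names here (`swiglu_fwd_kernel`, `swiglu_forward`) are hypothetical, and a real submission would also need the matching backward kernel plus coverage of the full MLP and attention modules.

```python
# Minimal sketch (illustrative only): a fused SwiGLU activation forward in Triton.
# A real submission would pair this with a backward kernel and cover the full
# MLP/attention modules; all names here are hypothetical. Inputs are assumed
# contiguous CUDA tensors of the same shape.
import torch
import triton
import triton.language as tl


@triton.jit
def swiglu_fwd_kernel(gate_ptr, up_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    gate = tl.load(gate_ptr + offsets, mask=mask).to(tl.float32)
    up = tl.load(up_ptr + offsets, mask=mask).to(tl.float32)
    out = gate * tl.sigmoid(gate) * up  # silu(gate) * up
    tl.store(out_ptr + offsets, out.to(out_ptr.dtype.element_ty), mask=mask)


def swiglu_forward(gate: torch.Tensor, up: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(gate)
    n = gate.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    swiglu_fwd_kernel[grid](gate, up, out, n, BLOCK_SIZE=1024)
    return out
```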

[!IMPORTANT] EDIT 2024-01-03: Optimized is defined as at least a 15% time improvement and a 25% memory improvement over the current Flash Attention implementation
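
As a rough illustration of how this criterion could be checked, here is a sketch of a training-step benchmark that records wall-clock time and peak GPU memory; `make_model`, `train_step`, and `batches` are hypothetical placeholders for whatever axolotl config and training loop is actually used.

```python
# Sketch of how the time/memory criterion might be verified; `make_model`,
# `train_step`, and `batches` are hypothetical placeholders for the real setup.
import time
import torch


def benchmark(make_model, train_step, batches, warmup: int = 3):
    model = make_model()
    for batch in batches[:warmup]:          # warm up kernels / autotuning
        train_step(model, batch)
    torch.cuda.synchronize()
    torch.cuda.reset_peak_memory_stats()
    start = time.perf_counter()
    for batch in batches[warmup:]:
        train_step(model, batch)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    peak_mem_gib = torch.cuda.max_memory_allocated() / 2**30
    return elapsed, peak_mem_gib


# t_base, m_base = benchmark(make_flash_attn_model, train_step, batches)
# t_new, m_new = benchmark(make_triton_model, train_step, batches)
# print(f"time: {1 - t_new / t_base:.1%} faster, memory: {1 - m_new / m_base:.1%} lower")
```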

hamelsmu commented 5 months ago

Parlance Labs is matching @winglian's bounty. So it's

  1. Llama $700
  2. Mistral $500
  3. Mixtral MoE $1200

casper-hansen commented 5 months ago

For those looking for inspiration to claim the $1200 for Mixtral:

soumith commented 5 months ago

@winglian I suggest you put a targeted speedup, on what qualifies for "optimized". Who knows, maybe torch.compile used the right way can generate your definition of "optimized" :) and someone from the PyTorch community can attempt something like that (similar to the gpt-fast work we've been doing for inference)
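
For reference, a minimal sketch of that torch.compile route on a Hugging Face causal LM (the model id and settings below are just examples, not a prescribed baseline):

```python
# Sketch only: compiling a HF causal-LM with torch.compile as an alternative
# baseline. The model id and settings are illustrative, not prescribed.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.bfloat16
).cuda()

# "max-autotune" lets Inductor generate and tune its own Triton kernels.
model = torch.compile(model, mode="max-autotune")
```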

winglian commented 5 months ago

> @winglian I suggest you put a targeted speedup, on what qualifies for "optimized". Who knows, maybe torch.compile used the right way can generate your definition of "optimized" :) and someone from the PyTorch community can attempt something like that (similar to the gpt-fast work we've been doing for inference)

Thanks, I've added additional clarification on that to the original post.

kostum123 commented 5 months ago

I'm not saying this task is easy or that the goals are simple, but if the training-time speedups and VRAM reductions that Unsloth promises in its paid plans are real (let's assume they are), the bar for earning the reward is still low. Why not aim for a higher set of requirements, say at least half the speedup and memory savings Unsloth promises: a 25% time improvement and a 30% memory improvement over the current Flash Attention implementation?

jedreky commented 5 months ago

Hi, I think this is a great initiative! When you talk about the "current flash attention implementation", could you perhaps specify the exact tech stack and version that you have in mind? In fact, it might also be useful to specify the desired hardware. I think this would make the rules of the competition really clear-cut.

Itssshikhar commented 5 months ago

Hi! I think this is a good opportunity for those trying to get deep into LLMs. It would be really helpful if you could explain, at a high level, how to get started. Thanks

Mistobaan commented 5 months ago

Also, for those who land directly on this page, it's worth specifying that this is about the Triton language, not the Triton Inference Server. Is there a preferred GPU architecture to target (A100 / H100)? Are there benchmarks of the current kernel speed? (We should create those to establish a baseline.)
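
On the baseline question, one low-effort option is Triton's built-in benchmarking helper; a sketch below, measuring the FlashAttention 2 forward with arbitrary example shapes:

```python
# Sketch of a kernel-level baseline measurement using Triton's benchmarking
# helper; the shapes and dtype are arbitrary examples, not a prescribed config.
import torch
import triton
from flash_attn import flash_attn_func

# (batch, seqlen, n_heads, head_dim) -- example shape only
q, k, v = (
    torch.randn(4, 2048, 32, 128, dtype=torch.float16, device="cuda")
    for _ in range(3)
)
ms = triton.testing.do_bench(lambda: flash_attn_func(q, k, v, causal=True))
print(f"flash-attn forward: {ms:.3f} ms")
```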

casper-hansen commented 5 months ago

> For Mixtral, additional required kernels are sparse and grouped permute_and_compute, as well as kernels for expert gating.

Here is my answer specific to Mixtral. Solutions that achieve a speedup on both A100 and H100 should be accepted. You would have to implement a sparse kernel on A100 and a grouped kernel on H100.
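
For readers unfamiliar with the term, here is a plain-PyTorch sketch of what permute_and_compute refers to: group tokens by their routed expert, run one GEMM per expert group, and scatter the results back. An optimized submission would fuse these steps in Triton; the code below is only a reference illustration with made-up shapes and names.

```python
# Plain-PyTorch sketch of MoE "permute and compute" (not an optimized kernel):
# sort tokens by their routed expert, run one matmul per expert group, and
# scatter the results back to the original token order. Shapes are illustrative.
import torch


def permute_and_compute(x, expert_ids, expert_weights):
    # x: (num_tokens, hidden), expert_ids: (num_tokens,) from the router,
    # expert_weights: (num_experts, hidden, out)
    order = torch.argsort(expert_ids)               # permute: group tokens by expert
    x_sorted = x[order]
    counts = torch.bincount(expert_ids, minlength=expert_weights.shape[0])
    out_sorted = torch.empty(x.shape[0], expert_weights.shape[2],
                             dtype=x.dtype, device=x.device)
    start = 0
    for e, n in enumerate(counts.tolist()):         # compute: one GEMM per expert
        if n:
            out_sorted[start:start + n] = x_sorted[start:start + n] @ expert_weights[e]
        start += n
    out = torch.empty_like(out_sorted)              # un-permute back to token order
    out[order] = out_sorted
    return out
```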

I think @winglian should provide a baseline axolotl config. Perhaps one each for short-context and long-context datasets.

casper-hansen commented 5 months ago

Triton kernel for expert computation in MoE, compatible with float16 and bfloat16. Speedup of 2.3-5x depending on batch size. You would just need to make it compatible with axolotl and implement the backward pass.

If this can be implemented in axolotl for Mixtral, you could likely claim the $1200.

https://github.com/vllm-project/vllm/pull/2453
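
On the "implement the backward pass" part: one common way to wire a fused forward kernel into training is a torch.autograd.Function wrapper. Below is a sketch using the SiLU-and-mul activation as the example; `fused_silu_and_mul` is a stand-in emulated with plain PyTorch so the snippet runs as-is, not the actual kernel from the linked PR.

```python
# Sketch of wiring a fused forward into autograd. `fused_silu_and_mul` is a
# placeholder emulated with plain PyTorch; a real submission would call the
# Triton kernel here and supply a matching (or hand-derived) backward.
import torch


def fused_silu_and_mul(gate, up):
    return torch.nn.functional.silu(gate) * up      # placeholder for the Triton kernel


class SiluAndMul(torch.autograd.Function):
    @staticmethod
    def forward(ctx, gate, up):
        ctx.save_for_backward(gate, up)
        return fused_silu_and_mul(gate, up)

    @staticmethod
    def backward(ctx, grad_out):
        gate, up = ctx.saved_tensors
        sig = torch.sigmoid(gate)
        silu = gate * sig
        d_silu = sig * (1 + gate * (1 - sig))        # d/dgate of silu(gate)
        return grad_out * up * d_silu, grad_out * silu
```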

kno10 commented 2 months ago

@unslothai @danielhanchen would you open-source your kernels to claim the bounty?