intel / intel-xpu-backend-for-triton

OpenAI Triton backend for Intel® GPUs
MIT License

Enable the SLPVectorizer on Triton side with no scheduling #2714

Closed: chengjunlu closed this 1 week ago

chengjunlu commented 1 week ago

The IGCVectorizer doesn't support the flash attention kernel so far. Enabling the SLPVectorizer on the Triton side, with no scheduling within a basic block, gives better performance for now.
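For context, a toy sketch of the transformation SLP (superword-level parallelism) vectorization performs: isomorphic, independent scalar operations in a straight-line basic block are packed into a single vector operation. This is only a conceptual illustration in Python, not the actual LLVM pass or the code in this PR.

```python
# Toy illustration of SLP vectorization (not the LLVM pass itself):
# four independent, isomorphic scalar adds in one basic block are
# packed into a single 4-wide vector add.

def scalar_adds(xs, ys):
    # What the compiler sees before SLP: four independent scalar adds.
    a0 = xs[0] + ys[0]
    a1 = xs[1] + ys[1]
    a2 = xs[2] + ys[2]
    a3 = xs[3] + ys[3]
    return [a0, a1, a2, a3]

def slp_vectorized_adds(xs, ys):
    # After SLP: conceptually one vector add over all four lanes.
    return [x + y for x, y in zip(xs, ys)]

print(scalar_adds([1, 2, 3, 4], [10, 20, 30, 40]))          # [11, 22, 33, 44]
print(slp_vectorized_adds([1, 2, 3, 4], [10, 20, 30, 40]))  # [11, 22, 33, 44]
```

The pass is only legal when the packed operations are independent; the "no scheduling" variant mentioned above restricts how the vectorizer may reorder instructions inside a basic block to find such packs.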

etiotto commented 1 week ago

@chengjunlu did you close this PR intentionally? If so what is the alternative ?

chengjunlu commented 1 week ago

> @chengjunlu did you close this PR intentionally? If so what is the alternative ?

The changes in this PR are not ready; they are only functional for flash attention. I'd like to work on the new Triton-side vectorizer changes in parallel with the IGCVectorizer enhancement.