intel / intel-xpu-backend-for-triton

OpenAI Triton backend for Intel® GPUs
MIT License

Unskipping test_flash_attention.py breaks regular runs #1109

Open leshikus opened 4 months ago

leshikus commented 4 months ago

The commit https://github.com/intel/intel-xpu-backend-for-triton/commit/e47fd9537aeb313cd048393ac50d71b25993ec56 enables several tests, which breaks the conda basekit run.

See the log at https://github.com/intel/intel-xpu-backend-for-triton/actions/runs/9056251421/job/24878516034

whitneywhtsang commented 4 months ago

Those test_flash_attention tests are enabled by 1e4fb6497c1e2c55b738a842d05f14ae995e07c9 and not e47fd9537aeb313cd048393ac50d71b25993ec56. @leshikus Are you sure the failures are caused by e47fd9537aeb313cd048393ac50d71b25993ec56?

leshikus commented 4 months ago

Whitney, you are probably correct. Both commits landed on the same day; I have just started noticing the failures.

leshikus commented 4 months ago

@whitneywhtsang thank you for the help with the owner

etiotto commented 4 months ago

@leshikus these tests pass without the conda env, so can you explain what difference the conda env introduces?

leshikus commented 3 months ago

@etiotto the conda env means a different environment: different libraries and different library paths. In some cases, when a compiler is invoked, problems can arise from different header resolution, but this particular problem looks more like a linking problem to me.
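To illustrate the kind of linking problem a conda env can introduce: the dynamic loader walks the library search path in order, so an env that prepends its own `lib` directory can shadow a system copy of the same library. The following is a minimal, self-contained sketch of that first-match-wins resolution; the directory layout and `libfoo.so` name are hypothetical, not taken from the issue.

```shell
#!/bin/sh
# Sketch (illustrative, not from the issue): first-match-wins library
# resolution along a colon-separated search path, as the loader does
# with LD_LIBRARY_PATH. A conda env prepending its lib dir shadows the
# system copy of a library with the same name.
set -eu

tmp=$(mktemp -d)
mkdir -p "$tmp/conda/lib" "$tmp/usr/lib"
# Two copies of the same library name in different "environments".
touch "$tmp/conda/lib/libfoo.so" "$tmp/usr/lib/libfoo.so"

# resolve LIBNAME SEARCHPATH: print the first match along the path.
resolve() {
  oldifs=$IFS
  IFS=:
  for d in $2; do
    if [ -e "$d/$1" ]; then
      IFS=$oldifs
      echo "$d/$1"
      return 0
    fi
  done
  IFS=$oldifs
  return 1
}

# With the conda-style path prepended, the conda copy wins.
resolve libfoo.so "$tmp/conda/lib:$tmp/usr/lib"
# Without it, the system copy is found instead.
resolve libfoo.so "$tmp/usr/lib"
```

Running `ldd` on the failing binary inside and outside the env and diffing the output is one way to check whether this kind of shadowing is happening in practice.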

vlad-penkin commented 1 month ago

@leshikus can we close this ticket?