intel / intel-xpu-backend-for-triton

OpenAI Triton backend for Intel® GPUs
MIT License

[benchmarks] Enable TorchBench benchmark suite #199

Closed vlad-penkin closed 6 months ago

etiotto commented 7 months ago

@vlad-penkin can we clarify in the description what this work item entails? I see that in Triton (https://github.com/openai/triton/blob/main/.github/workflows/torch-inductor-tests.yml) we have this:

    ./.github/workflows/torch-inductor/scripts/install_torchinductor.sh torchbench
    ./.github/workflows/torch-inductor/scripts/run_torchinductor_perf.sh torchbench

Is that what you have in mind for this work item?

@alexbaden your opinion please.

vlad-penkin commented 7 months ago

@etiotto this is the first part of the issue's scope; the second is to start running this benchmark suite regularly.

@alexbaden could you please update this issue and provide more details in the description?

ESI-SYD commented 7 months ago

Hi, I think we may need to address some blocking issues first to enable the E2E tests in llvm-target. You can find some explanation in https://github.com/intel/intel-xpu-backend-for-triton/issues/224

tdeng5 commented 7 months ago

The PyTorch team will provide a workable version that matches the current Triton master API.

alexbaden commented 7 months ago

When?

alexbaden commented 7 months ago

@tdeng5 I am reassigning this issue to you since your team has been working closely with IPEX on benchmark enablement. If you can produce a list of benchmarks that do not run and create an issue for each, I would be happy to help resolve the discovered issues once all benchmark suites have been run.

tdeng5 commented 7 months ago

The TorchBench benchmark suite can run now; we are filing issues to track the failed models.

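The triage workflow described above (run every model, collect the failures, file one issue per failure) can be sketched roughly as follows. This is a minimal illustration, not the team's actual harness: the `benchmarks/dynamo/torchbench.py` driver path mirrors the upstream PyTorch benchmark runner, and the model list and `--device xpu` invocation are assumptions.

```python
import subprocess

# Hypothetical subset of models; the real TorchBench suite has many more.
MODELS = ["resnet50", "hf_Bert", "timm_vision_transformer"]

def run_model(name, runner=("python", "benchmarks/dynamo/torchbench.py")):
    """Run one TorchBench model on the XPU device; True on clean exit.

    The runner path is an assumption modeled on the upstream PyTorch
    benchmark driver; adjust it for the local checkout.
    """
    cmd = [*runner, "--only", name, "--device", "xpu", "--performance"]
    try:
        return subprocess.run(cmd, capture_output=True, timeout=1800).returncode == 0
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return False

def triage(results):
    """Split a {model: passed} mapping into sorted pass/fail lists.

    The failed list is what would back the per-model issues mentioned above.
    """
    passed = sorted(m for m, ok in results.items() if ok)
    failed = sorted(m for m, ok in results.items() if not ok)
    return passed, failed

if __name__ == "__main__":
    results = {m: run_model(m) for m in MODELS}
    passed, failed = triage(results)
    print(f"passed: {passed}")
    print(f"failed (file an issue for each): {failed}")
```

Keeping the triage step as a pure function makes it easy to test the pass/fail bookkeeping without launching any benchmarks.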