Closed: vlad-penkin closed this issue 6 months ago.
@etiotto this is the first part of the issue scope; the second part is to start running this benchmark suite regularly.
@alexbaden could you please update this issue and provide more details in description?
Hi, I think we should first address some blocking issues to enable E2E testing in llvm-target. You can find some background in https://github.com/intel/intel-xpu-backend-for-triton/issues/224
The PyTorch team will provide a working version that matches the current Triton master API.
When?
@tdeng5 I am reassigning this issue to you and your team, since you have been working closely with IPEX on benchmark enabling. If you can produce a list of the benchmarks that do not run and create an issue for each, I would be happy to help resolve the discovered issues once all benchmark suites have been run.
The TorchBench benchmark suite can run now; we are filing issues to track the failing models.
@vlad-penkin can we clarify in the description what this work item entails? I see that in Triton (https://github.com/openai/triton/blob/main/.github/workflows/torch-inductor-tests.yml) we have this:
Is that what you have in mind for this work item?
@alexbaden your opinion please.