stellaraccident closed this issue 6 months ago
@stellaraccident in meeting chat:
If you are actively working on one, please click the "target" hover to the right of any task to create an issue. Then assign yourself. Note the issue in PRs. Discuss prioritization on the tracking issue and details of the op on the op issue.
The list of ops in the issue description is the same as the one @saienduri gathered (in increasing order of appearance, i.e. later ops more important) from parity-bench (I think using https://github.com/nod-ai/SHARK-Turbine/blob/main/tests/generated/running_tests.md) and shared with me and Avinash yesterday.
Someone on the call today also mentioned pixel_shuffle as an important op to support.
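For anyone picking up pixel_shuffle: as I understand it, `torch.aten.pixel_shuffle` rearranges an `(C*r*r, H, W)` tensor into `(C, H*r, W*r)`. A reference sketch of those semantics in NumPy (the function name here is just illustrative, not part of any codebase):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Reference semantics of pixel_shuffle with upscale factor r.

    Maps an array of shape (C*r*r, H, W) to (C, H*r, W*r), so that
    out[c, h*r + i, w*r + j] == x[c*r*r + i*r + j, h, w].
    """
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    # Split the channel dim into (c, i, j), move the sub-pixel axes
    # next to their spatial axes, then merge.
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # (c, h, i, w, j)
    return x.reshape(c, h * r, w * r)
```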
@stellaraccident I'm not sure I've done what you had in mind - I created a new issue by clicking "New issue" (top right of this page, for me at least). I don't seem to have permission to assign myself to it, though.
You should now have an invite to the organization and I think I added you to a team such that you have write access to the repo.
@stellaraccident I plan to implement the torch.aten.replication_pad2d op and created the following issue for it. But I cannot link the op to the created issue - maybe I missed something. https://github.com/nod-ai/SHARK-Turbine/issues/286
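For reference while implementing: `torch.aten.replication_pad2d` pads the last two dims by replicating edge values, with padding given in PyTorch's `(left, right, top, bottom)` order. A NumPy sketch of those semantics (assuming that argument convention; name is illustrative only):

```python
import numpy as np

def replication_pad2d(x, pad):
    """Reference semantics of replication_pad2d on a (C, H, W) array.

    pad is (left, right, top, bottom), following PyTorch's convention
    for 2-D padding ops.
    """
    left, right, top, bottom = pad
    # mode='edge' replicates the border values, matching "replication" padding.
    return np.pad(x, ((0, 0), (top, bottom), (left, right)), mode="edge")
```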
Opened an issue tracker for the op torch.aten.diag_embed. @stellaraccident Can you help link it from the list above? https://github.com/nod-ai/SHARK-Turbine/issues/288
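In case it helps with the lowering: as I understand it, `torch.aten.diag_embed` (in its default form) takes the last dimension of the input and embeds it as the diagonal of a new trailing square matrix dimension. A NumPy sketch of that default behavior (offset 0, last two dims; illustrative only):

```python
import numpy as np

def diag_embed(x):
    """Reference semantics of diag_embed with default arguments.

    An input of shape (..., N) becomes (..., N, N), with x placed on
    the diagonal of the trailing NxN matrix and zeros elsewhere.
    """
    n = x.shape[-1]
    out = np.zeros(x.shape + (n,), dtype=x.dtype)
    idx = np.arange(n)
    out[..., idx, idx] = x  # scatter onto the diagonal
    return out
```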
@stellaraccident , I had taken up torch.aten.acos (https://github.com/nod-ai/SHARK-Turbine/issues/293). But, @schnkmwt pointed out that https://github.com/frederik-h has taken that up as https://github.com/llvm/torch-mlir/issues/2604 . So, maybe link the op to that issue. I can take up a different one: reflection_pad1d. Please link https://github.com/nod-ai/SHARK-Turbine/issues/293 to that.
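For anyone else looking at reflection_pad1d: unlike replication padding, it mirrors interior values without repeating the edge element itself, with padding given as `(left, right)`. A NumPy sketch of those semantics on a `(C, W)` input (assuming that convention; name is illustrative only):

```python
import numpy as np

def reflection_pad1d(x, pad):
    """Reference semantics of reflection_pad1d on a (C, W) array.

    pad is (left, right). mode='reflect' mirrors values around the
    border without duplicating the edge element, matching reflection
    padding (so each pad amount must be < W).
    """
    left, right = pad
    return np.pad(x, ((0, 0), (left, right)), mode="reflect")
```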
@kumardeepakamd I think this op is already being implemented: https://github.com/llvm/torch-mlir/issues/2604
For ALL new contributors, let's use this to track your newly implemented ops. [tracking] TorchToLinalg and ONNX Op Support #215
Closing this issue temporarily so we can focus on #215.
Tracking model burndown for Gen-AI models and variants that we seek to be serving via Turbine.
[ ] Dynamic shaped llama2
[ ] SHARK Model Porting
[ ] Priority op requests
[ ] #210
[x] #110 @AmosLewis
[ ] Ops to lower to linalg in llama_test once https://github.com/nod-ai/SHARK-Turbine/pull/212 lands:
[ ] torch.aten.empty_strided
[ ] torch.aten.mean.dim
[ ] torch.aten.expand
[ ] torch.aten._softmax
[ ] torch.aten.silu
[ ] General torch-mlir op support
[ ] ONNX op support https://github.com/nod-ai/SHARK-Turbine/issues/215