AlexandreEichenberger opened 2 years ago
Working on compress
Working on NonMaxSuppression
I am working on SpaceToDepth (#926) and DepthToSpace (#927).
FYI, here are some of the benchmarks we are focusing on that have ops that are not yet working.
High priority: ops from models in the ONNX Model Zoo.
High priority: support models compiled down to their lowest-level components (e.g., RNNs not exported as high-level ONNX ops), without crashing.
Medium priority: Hugging Face GBERTQnA.
A list of the ops currently unsupported but present in the Model Zoo is at the end of issue #128.
I am going to look at CategoryMapper (#941).
Working on OneHot to support multiple types.
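For reference, the OneHot semantics this work targets can be sketched in plain Python (hypothetical helper name, innermost axis only; the point is that the off/on values may be of any type, and negative indices wrap):

```python
def one_hot(indices, depth, off_value, on_value):
    # Sketch of ONNX OneHot along the innermost axis: each index becomes
    # a row of length `depth` filled with off_value, with on_value at the
    # index position. Negative indices wrap around by adding depth.
    rows = []
    for idx in indices:
        if idx < 0:
            idx += depth
        rows.append([on_value if j == idx else off_value for j in range(depth)])
    return rows

# Works for ints and floats alike:
print(one_hot([1, -1], 3, 0, 1))  # [[0, 1, 0], [0, 0, 1]]
```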
Working on Hardmax to support BiDAF. PR #950 (merged).
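As a reminder of the semantics: Hardmax writes 1 at the position of the first maximum along the reduction axis and 0 everywhere else. A minimal sketch in plain Python (hypothetical helper, 1-D case only):

```python
def hardmax_1d(row):
    # 1.0 at the first occurrence of the maximum, 0.0 elsewhere.
    k = row.index(max(row))
    return [1.0 if i == k else 0.0 for i in range(len(row))]

print(hardmax_1d([1.0, 3.0, 2.0]))  # [0.0, 1.0, 0.0]
```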
working on Resize.
Working on IsNaN op
Working on ScatterElements (needed by fasterrcnn-10.onnx and maskrcnn-10.onnx). PR is https://github.com/onnx/onnx-mlir/pull/1352
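For anyone reviewing: ScatterElements copies the input and overwrites the positions named by `indices` with the corresponding `updates`, element by element. A minimal 1-D sketch (hypothetical helper, no `reduction` attribute):

```python
def scatter_elements_1d(data, indices, updates):
    # out starts as a copy of data; then out[indices[i]] = updates[i].
    out = list(data)
    for idx, upd in zip(indices, updates):
        out[idx] = upd
    return out

print(scatter_elements_1d([1, 2, 3, 4], [3, 1], [9, 8]))  # [1, 8, 3, 9]
```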
Scatter is deprecated, so we map it to ScatterElements. PR is https://github.com/onnx/onnx-mlir/pull/1337
Working on ScatterND. PR is https://github.com/onnx/onnx-mlir/pull/1370
Implemented GatherElements. PR is https://github.com/onnx/onnx-mlir/pull/1375.
Working on GatherND. PR is https://github.com/onnx/onnx-mlir/pull/1382.
The status of implemented ops is now listed here: https://github.com/onnx/onnx-mlir/blob/main/docs/SupportedONNXOps-cpu.md
Hi, thank you for your excellent work!
I am quite new to MLIR, so please forgive me if these questions are naive. I see that ArgMax is supported in onnx-mlir but ArgMin is not. Is there any particular issue with ArgMin? If not, could I open a PR for ArgMin based on ArgMax, with only small modifications?
@airMeng please go ahead with a PR for ArgMin. Thank you!
Hi, can I work on the Celu op?
Could somebody please add support for the QuantizeLinear/DequantizeLinear ops, for quantized networks?
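For reference, the scalar semantics in the ONNX spec are simple: QuantizeLinear saturates round(x / scale) + zero_point to the output type's range, and DequantizeLinear inverts it up to rounding error. A sketch for the uint8 case (hypothetical helpers; Python's `round()` happens to match the spec's round-half-to-even):

```python
def quantize_linear(x, scale, zero_point, lo=0, hi=255):
    # uint8 case: round half-to-even, add the zero point, then saturate.
    q = round(x / scale) + zero_point
    return max(lo, min(hi, q))

def dequantize_linear(q, scale, zero_point):
    return (q - zero_point) * scale

q = quantize_linear(0.25, 0.25, 128)
print(q, dequantize_linear(q, 0.25, 128))  # 129 0.25
```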
Is anyone working on extending the decompose-onnx pass (or a similar pass) to support more onnx.Custom ops? In particular, I am trying to compile a basic backwards graph and get an InPlaceAccumulatorV2 custom op after converting ONNX to MLIR with onnx-mlir --EmitONNXIR.
@srcarroll CustomOp is a non-standard op that @chentong319 added to easily convert a custom op into a function call. While Tong knows best, my recollection (and I may be wrong here) is that custom ops are mainly generated within onnx-mlir, not parsed in from an ONNX protobuf.
Tong is away for a bit; if you want to add support for more custom ops, we would certainly be interested in taking in the changes. There are also ONNX functions; maybe those could help too.
@AlexandreEichenberger thanks for the response. I'd be happy to add support, but I can't find any information on the definition of InPlaceAccumulatorV2. Do you know where I can find that?
Could you also point me to the ONNX functions you are referring to, and how to emit them? Thanks.
I could not find info about your "InPlaceAccumulatorV2"; it does not appear in the ONNX specs or in what I could find about the ONNX Runtime extensions, though I may have overlooked something in ORT, as I am not very familiar with it. How did you create the ONNX graph?
As far as creating custom ONNX functions goes, the ORT documentation describes the preferred way to make new custom ops: https://onnxruntime.ai/docs/reference/operators/add-custom-op.html
The ONNX specs also have a section on ONNX functions: https://onnx.ai/onnx/intro/concepts.html#functions
Idea: post a quick comment here to claim the operations that you are currently working on, so that we do not duplicate work. You can also add requests for new ops.