webonnx / wonnx

A WebGPU-accelerated ONNX inference run-time written 100% in Rust, ready for native and the web

Support for ai.onnx.ml opset #156

Open redseal798 opened 1 year ago

redseal798 commented 1 year ago

Is your feature request related to a problem? Please describe.

Support ONNX ml operators: https://github.com/onnx/onnx/blob/main/docs/Operators-ml.md

Describe the solution you'd like

Models that use ONNX ML operators should work, for example ONNX models exported from sklearn pipelines.

Describe alternatives you've considered

N/A

Additional context

pixelspark commented 1 year ago

While we encourage anyone interested to work on this (and I'd be happy to review PRs), this is unfortunately not currently in scope, as it is quite a bit of work.

I'm not sure our architecture fits this either. The ML operators appear to be quite a bit more high-level than the ONNX operators we currently implement, and it may not be feasible or practical to support them in the current architecture (where each operator is translated 1:1 to a GPU shader, and the shaders are executed in a predefined order with no room for CPU work in between).

redseal798 commented 1 year ago

Thanks for the feedback!

SludgePhD commented 1 year ago

FWIW, tf2onnx seems to add an ai.onnx.ml opset import regardless of whether those operators are actually used by the network; I've had to manually patch the generated networks to remove the import to get them to work. It's possible that other frameworks have the same problem.

schell commented 12 months ago

> I've had to manually patch the generated networks to remove the import to get them to work.

@SludgePhD what method did you use to do this?

SludgePhD commented 11 months ago

@schell There is no good tooling for this (in fact, as far as I can tell, there is no official tooling for manipulating ONNX files at all), so I had to write a Python script:

import onnx

original_model = onnx.load(model_path)

# Assumes exactly two opset imports, with the ai.onnx.ml entry
# last; pop() removes the last import from the list.
if len(original_model.opset_import) == 2:
    print("opset imports before: ", original_model.opset_import)
    original_model.opset_import.pop()
    print("opset imports after: ", original_model.opset_import)

onnx.save(original_model, model_path)  # write the patched model back

(it's not very robust)

schell commented 11 months ago

Thank you nonetheless, @SludgePhD :)

pixelspark commented 11 months ago

From the above, I'm not sure whether wonnx throws an error when just the import is present. If so, we can actually make wonnx a bit more lenient and let it ignore the import as long as no ops from that set are used (unknown ops lead to an error anyway, even ones from the 'basic' set of ops).

schell commented 11 months ago

That would be nice @pixelspark :)

schell commented 9 months ago

@pixelspark how would one figure out whether any ai.onnx.ml ops are used in a model? I'm currently writing a CLI tool that would do this and remove the opset import if possible.

schell commented 9 months ago

@pixelspark sorry, never mind - I figured it out between the wonnx source and the protobuf definitions :) 👍
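
For reference, the check likely boils down to each node's domain field in the protobuf: ai.onnx.ml operators carry the "ai.onnx.ml" domain, while default operators use "" (or "ai.onnx"). A minimal sketch of the check using the official onnx Python package (the path is illustrative):

import onnx

model = onnx.load("model.onnx")  # illustrative path

# Each node records the operator-set domain its op comes from.
ml_ops = [n.op_type for n in model.graph.node if n.domain == "ai.onnx.ml"]
print("ai.onnx.ml ops used:", ml_ops or "none")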

Mek101 commented 6 months ago

> I'm currently writing a CLI tool that would do this and remove the opset import if possible.

Did that end up anywhere?

schell commented 6 months ago

Sorry @Mek101 - no. As it was part of my day job and just a quick spike, I didn't push the code up anywhere. But it did end up "working": the CLI tool traverses the graph looking for uses of the opset, and if none are found it removes the import and saves the graph.
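
A minimal sketch of such a tool, building on the domain check above (this is not schell's actual code; it uses the official onnx Python package and only inspects the top-level graph, so models containing If/Loop subgraphs would need a recursive walk):

import onnx

def strip_ml_opset(in_path, out_path):
    model = onnx.load(in_path)

    # If any node in the top-level graph uses the ai.onnx.ml domain,
    # the import is genuinely needed; leave the model alone.
    if any(n.domain == "ai.onnx.ml" for n in model.graph.node):
        return False

    # Otherwise drop the ai.onnx.ml entries from the opset imports.
    for i in reversed(range(len(model.opset_import))):
        if model.opset_import[i].domain == "ai.onnx.ml":
            del model.opset_import[i]

    onnx.save(model, out_path)
    return True

Called as, e.g., strip_ml_opset("model.onnx", "model.stripped.onnx"), it returns True if the import was removed and the stripped model was written out.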