Closed: inexorabletash closed this pull request 8 months ago
I don't know the context around init-steps. Are there plans to add them? Or was it an idea that turned out not to be needed?
Either way - can either merge this and follow up, or I can revise - either way, this is not blocking anything else.
@inexorabletash
> Are there plans to add them? Or was it an idea that turned out not to be needed?
I suppose the latter. I dug through the commits and found it was introduced by @zolkis in https://github.com/webmachinelearning/webnn/commit/30ba12ece0937263df81c1dd90f06214bdfbf802. As far as I recall, it was intended to run activation initialization steps supplied by the different activation creation callers, but it turns out no callers use it. @zolkis, please correct me if I am wrong.
> Either way - can either merge this and follow up, or I can revise - either way, this is not blocking anything else.
This PR proposes to remove the "re-throw" because the "create an MLActivation" steps don't throw. I think we need to ensure that all "create an MLActivation" steps won't throw. So, if "init-steps" is useful and may throw, the "re-throw" needs to be kept. Otherwise, the "init-steps" should be removed, which makes the "create an MLActivation" steps not throw at all, and then the "re-throw" can be removed. WDYT?
Makes sense. I removed the steps in c6ee63af45c248942129279ed97de1d0a4878f6d. If we decide to keep or re-add them, we can abandon this PR and instead do one that ensures all invocations re-throw, and makes it explicit that init-steps themselves may throw and that methods invoking MLActivation creation must re-throw.
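To illustrate the distinction being discussed, here is a minimal TypeScript sketch (not the actual WebNN spec or implementation; all names are illustrative). It models "create an MLActivation" as a factory: without init-steps, creation only assigns internal slots and is infallible, so callers need no "re-throw" step; with optional init-steps, creation can throw and every caller would need re-throw boilerplate.

```typescript
// Illustrative stand-in for the spec's MLActivation; fields are assumptions.
interface MLActivation {
  name: string;
  options: Record<string, unknown>;
}

// Models the current "create an MLActivation" steps: purely assigns slots,
// so it cannot throw, and callers need no "if these steps throw, re-throw".
function createActivation(
  name: string,
  options: Record<string, unknown> = {}
): MLActivation {
  return { name, options };
}

// Models the removed variant: if optional init-steps existed and could
// throw, the exception would propagate and every caller would need an
// explicit re-throw step in its own algorithm.
function createActivationWithInit(
  name: string,
  options: Record<string, unknown>,
  initSteps?: (activation: MLActivation) => void
): MLActivation {
  const activation: MLActivation = { name, options };
  initSteps?.(activation); // may throw; caller must handle or re-throw
  return activation;
}

const relu = createActivation("relu");
console.log(relu.name); // "relu"
```

Since no caller ever supplied init-steps, the second form is dead weight, which is why dropping both the init-steps and the re-throw leaves the spec simpler without changing behavior.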
Back then we wanted to keep optional init steps around until we figured out how we would end up specifying the other builder algorithms. Since the usage has now become clearer, it's indeed time to simplify. LGTM.
The "create an MLActivation" steps don't throw, so there's no need to re-throw.
Also, nothing specifies init-steps, so drop those, and simplify how the steps handle options, to align with how things are done for MLOperands. Discussed in https://github.com/webmachinelearning/webnn/pull/591#issuecomment-1970013201