@zolkis we should focus this issue on updates to the algorithms outside the MLCommandEncoder interface when possible. That is, prepare the core API to allow MLCommandEncoder to be plugged in.
We have identified WebGPU interoperability as a feature at risk for the initial CR publication in https://github.com/webmachinelearning/webnn/issues/240. We agreed this feature needs more implementation experience, and that the MLCommandEncoder interface spec (and possibly also its normative WebGPU interface dependencies) needs to undergo a round of updates based on that implementation experience. I recommend not investing too much time in redesigning MLCommandEncoder before we have adequate implementation experience.
Same question as https://github.com/webmachinelearning/webnn/issues/264#issuecomment-1910593270
It's my understanding that the MLCommandEncoder proposal has been superseded by MLBuffer (https://github.com/webmachinelearning/webnn/issues/482), and we should consider removing it (https://github.com/webmachinelearning/webnn/issues/528).
Can we close this issue?
Per our discussion https://www.w3.org/2024/01/25-webmachinelearning-minutes.html#t09, the group wants to clarify the interaction between WebNN and WebGPU timelines as a priority, using MLBuffer as a prototype. That is tracked in a separate issue, so closing this one.
@zolkis, feel free to salvage any bits from this issue as needed.
A few purely API-design-related comments on MLCommandEncoder. (Context: I am trying to update some of the algorithms and specify missing ones, and I stumbled upon these questions.)
Graph initialization
- `MLCommandEncoder.initializeGraph()` is synchronous (it looks highly parallel, on a different timeline)? If it is indeed synchronous, has no side effects, and is done exactly once per graph, why not include it in the `createCommandEncoder()` constructor, or in the current case, the factory method? To clarify this, we need an algorithm, or a note that informally describes graph initialization (per context type, eventually).
- The `initializeGraph()` method takes a graph as an argument. One might think that a single `MLCommandEncoder` object could be used for handling multiple graphs, but from the text the intent seems to be that the command encoder covers the execution phases of a single graph. For another graph, one would use a different `MLCommandEncoder` object (even if the implementation binds it to a singleton).
- Proposal (question): collate the command encoder factory with graph initialization, binding the `MLCommandEncoder` to that graph (and context). That might even be an async method; see the sketch after this list.
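To make the proposal concrete, a minimal sketch of how such a collated, async factory could look from script. A `createCommandEncoder(graph)` that takes the graph and initializes it internally is hypothetical and appears in no draft; `gpuDevice`, `builder`, and `output` are assumed to exist from earlier WebGPU/WebNN setup:

```ts
// Hypothetical shape: the factory takes the graph, performs the one-time
// graph initialization (e.g. weight preprocessing) on the context's own
// timeline, and resolves once the encoder is bound to graph and context.
const context = await navigator.ml.createContext(gpuDevice);
const graph = await builder.build({ output });
const encoder = await context.createCommandEncoder(graph); // init folded in
```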
Command encoder steps
Assuming the change above, the interface becomes:
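(Sketch in TypeScript notation; the exact IDL is not reproduced in this thread, so this shape is my assumption based on the then-current draft names.)

```ts
// With initialization folded into the now graph-bound factory, the
// encoder surface could shrink to encoding and finishing:
interface MLCommandEncoder {
  dispatch(inputs: MLNamedGPUResources, outputs: MLNamedGPUResources): void;
  finish(descriptor?: GPUCommandBufferDescriptor): GPUCommandBuffer;
}
```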
It is not clear why `dispatch()` and `finish()` need to be separated. I guess it is to clarify when the optional descriptor may be used? We need some examples to shed light on this. How are scripts supposed to manage the command encoding? May `dispatch()` be called multiple times before `finish()`? The intent is unclear, since AFAIK a single dispatch will manage all the parallel operations involved in queuing graph execution. I understand it follows the DirectML API flow, but I contend that the usage in this API can be simpler, because it is not a generic GPU API but a specific application of one. In principle, we should encapsulate what we can and expose only the control points scripts need. Actually, I even question why it is not enough (in the WebNN spec) to specify a single async method on MLContext.
The implementation may still follow the workflow of initializing the graph, dispatching, and finishing, but why expose that to WebML scripts?
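For illustration, here is how the two alternatives might read from script. The one-shot method name `computeOnGpu` is invented for this sketch and appears in no draft:

```ts
// Alternative 1: explicit encoding, following the then-current draft flow.
// The spec does not say whether multiple dispatch() calls per finish()
// are intended to be legal; this sketch assumes a single dispatch.
encoder.dispatch(inputs, outputs);
const commandBuffer = encoder.finish();

// Alternative 2 (hypothetical): a single async entry point on MLContext
// that encapsulates initialize/dispatch/finish internally and hands back
// the resulting GPU command buffer.
const commandBuffer2 = await context.computeOnGpu(graph, inputs, outputs);
```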
Web ML should also specify how to further use the resulting `GPUCommandBuffer` (for example, as sketched below).
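The natural usage would be standard WebGPU queue submission, with the command buffer batched alongside other GPU work (`gpuDevice` is the interop device assumed above):

```ts
// finish() yields a GPUCommandBuffer; scripts submit it on the WebGPU
// default queue like any other command buffer.
gpuDevice.queue.submit([commandBuffer]);
```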
In summary, I would appreciate more explanation, examples, and script use cases for this (graph initialization and graph execution on GPU).
I am aware of this relevant comment and of the (still open) discussion in #264, both somewhat related to this.