nopeslide opened this issue 3 years ago
It's a good idea to support threading, or something else that enables parallel execution. But the difficulty is that different platforms use different libraries; not every platform supports pthread, for example. How can we solve this?
Why should we solve this? We can provide a reference implementation with pthread, and if someone wants to use something else, they have to implement it themselves. How should we know what other people are going to use? The only thing we can do is abstract it a little and handle the threading in one place, so it is easier to port.
What I meant is that I hope there will be an option to disable pthreads and run the code sequentially, or something like that, so that this lib is ready to use even when pthread is missing.
Ah sorry, I misunderstood you there.
I would extend the `inference` function to take two function pointers: one for issuing a thread and one for waiting for all threads to finish. That way you can pass your own thread implementation; if these pointers are NULL, they are skipped and everything runs sequentially.
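A minimal sketch of what such hooks could look like (the names and signatures here are just illustrative, not the actual cONNXr API):

```c
#include <stddef.h>

/* Illustrative hook types: one to issue a job on a worker thread,
 * one to wait for all issued jobs to finish. */
typedef void (*issue_fn)(void (*job)(void *), void *arg);
typedef void (*join_fn)(void);

/* If the caller passes a NULL hook, the job just runs inline,
 * so the library still works on platforms without any threading. */
static void run_job(issue_fn issue, void (*job)(void *), void *arg)
{
    if (issue != NULL)
        issue(job, arg);   /* hand off to the caller's threading backend */
    else
        job(arg);          /* sequential fallback */
}
```

A user whose platform has pthread could then supply a backend along these lines (again a sketch; no bounds checking, single-threaded issuer):

```c
#include <pthread.h>
#include <stddef.h>

#define MAX_JOBS 64

struct job { void (*fn)(void *); void *arg; };
static pthread_t  threads[MAX_JOBS];
static struct job jobs[MAX_JOBS];
static size_t     n_jobs;

/* Adapt void (*)(void *) to pthread's void *(*)(void *) start routine. */
static void *trampoline(void *p)
{
    struct job *j = p;
    j->fn(j->arg);
    return NULL;
}

static void pthread_issue(void (*fn)(void *), void *arg)
{
    jobs[n_jobs] = (struct job){ fn, arg };
    pthread_create(&threads[n_jobs], NULL, trampoline, &jobs[n_jobs]);
    n_jobs++;
}

static void pthread_wait(void)
{
    for (size_t i = 0; i < n_jobs; i++)
        pthread_join(threads[i], NULL);
    n_jobs = 0;
}
```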
@alrevuelta following your emoji reaction, you seem quite happy with this minimal approach? :D
I haven't looked much into parallelising the operators, but as long as it doesn't increase the complexity much I'm fine with it :D
@alrevuelta another option would be to actually build the dependency graph of the operators; this would also solve our static node_context array problem.
```diff
--- a/include/operators/operator.h
+++ b/include/operators/operator.h
@@ -20,6 +20,9 @@ struct node_context {
   Onnx__TensorProto **outputs;
   operator_executer executer;
   void *executer_context;
+  node_context **next;
+  node_context **prev;
+  bool threadsafe;
 };
```
With the `prepare_*` functions we now have the possibility to multi-thread operator execution. I propose a minimal change in the `node_context` to make this possible and would like to discuss it: instead of making the `executer_context` inside the `node_context` a list as discussed in #56, I would add next/prev pointers and a `threadsafe` flag to the `node_context`, as in the diff above. By doing so we can introduce new nodes into the inference which are explicitly marked as threadsafe and may be executed in parallel with the current node. The `prepare_*` functions can inject as many nodes as necessary and may even specify a different executer and context for each new node. When a node that is not threadsafe is encountered, the inference function needs to wait for all previous jobs to finish before executing it.
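For illustration, the waiting logic could look roughly like this. Everything here is a sketch under the proposal above: `execute_node` is a hypothetical per-node job, the hook types are the illustrative ones from the earlier comment, and only the newly proposed `node_context` fields are shown.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hook types from the proposal: spawn a job / wait for all jobs. */
typedef void (*issue_fn)(void (*job)(void *), void *arg);
typedef void (*join_fn)(void);

struct node_context {
    /* ...existing fields from operator.h elided... */
    struct node_context **next;
    struct node_context **prev;
    bool threadsafe;
};

extern void execute_node(void *ctx); /* hypothetical: runs one node's executer */

/* Illustrative loop, not the actual cONNXr inference code: threadsafe
 * nodes are handed to the user's threading backend, and a node that is
 * not threadsafe forces a join of everything issued before it. */
static void run_nodes(struct node_context **nodes, size_t n,
                      issue_fn issue, join_fn join)
{
    for (size_t i = 0; i < n; i++) {
        if (!nodes[i]->threadsafe && join != NULL)
            join();                        /* barrier before an unsafe node */
        if (nodes[i]->threadsafe && issue != NULL)
            issue(execute_node, nodes[i]); /* may run in parallel */
        else
            execute_node(nodes[i]);        /* sequential fallback */
    }
    if (join != NULL)
        join();                            /* drain any remaining jobs */
}
```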