LabVIEW-DCAF / StandardEngine

The Standard Execution Engine plugin for DCAF.
http://forums.ni.com/t5/Distributed-Control-Automation/Standard-Engine-Documentation/gpm-p/3539201

replace main while loop with event loop #38

Closed · smithed closed this issue 9 years ago

smithed commented 9 years ago

Potentially not feasible, but it could make some things really convenient. Since event structures support priorities now, we can use a high-priority user event as the main "timing pulse". It could come from any source we specify, just as it does now (or even an external source), but the same structure could also accept other, lower-priority configuration events. For example, the idea of throwing objects to a parallel thread for reinitializing when they fail would be a pain to do in our current implementation -- we'd need yet another queue, yet another thing we have to check for emptiness, etc. But if we had a low-priority event for it, suddenly we have dataflow and an async event. This could also be used for state transitions (abort would be high priority as well, and could potentially be thrown by anyone).

Assuming determinism checks out, this could be basically free.
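
For the record, here's a rough Python sketch of the shape I'm imagining (purely illustrative -- the event names, priorities, and `post()` helper are made up, not anything in DCAF):

```python
# The main while loop becomes an event handler fed by one priority-ordered queue.
# A timer thread fires the high-priority "timing pulse"; anything else (config,
# abort, reinit results) comes in as ordinary events on the same queue.
import itertools, threading, time
from queue import PriorityQueue

HIGH, LOW = 0, 1              # smaller number = handled first
seq = itertools.count()       # tie-breaker keeps FIFO order within a priority
events = PriorityQueue()      # the single queue feeding the "event structure"

def post(priority, name, payload=None):
    events.put((priority, next(seq), name, payload))

def timing_source(period_s):
    # Stand-in for the timing pulse; could be any source we specify, even external.
    while True:
        time.sleep(period_s)
        post(HIGH, "tick")

threading.Thread(target=timing_source, args=(0.001,), daemon=True).start()

ticks = 0
while True:
    _, _, name, payload = events.get()   # blocks until something arrives
    if name == "tick":
        ticks += 1                       # run the engine's timed iteration here
        if ticks >= 5:
            post(HIGH, "abort")          # pretend someone requested a stop
    elif name == "abort":
        break                            # high-priority state transition
    elif name == "reinitialized":
        pass                             # low-priority: put object back on run queue
```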

Beazurt commented 9 years ago

Interesting idea. How does the priority of events work? I'm guessing the handler just handles the highest priority first if there are multiple in the buffer? If so, how would we handle reinitializing something? Wouldn't we still have to handle that code in a different loop so that a long initialization doesn't interrupt the periodic execution of the loop? This could definitely be an interesting thing to prototype at some point.

smithed commented 9 years ago

Yeah, so when you register for events (either dynamic or static), it's (as I understand it) basically making a funnel. We take 7 different event queues and combine them down into one queue, which is fed to the event structure sequentially. So when you fire a high-priority user event, it goes in front of every normal-priority event in the main queue.
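
To make the ordering concrete, here's a toy Python illustration of the funnel (the priorities and event names are invented for the example):

```python
# Several event sources get registered into one ordered stream; a high-priority
# user event fired later still jumps ahead of normal-priority events already queued.
from itertools import count
from queue import PriorityQueue

seq = count()                 # tie-breaker: FIFO within the same priority
funnel = PriorityQueue()      # the single queue behind the event structure

def fire(priority, name):
    funnel.put((priority, next(seq), name))

# A few normal-priority events are already waiting...
for n in ("value change A", "value change B", "value change C"):
    fire(1, n)
# ...then a high-priority user event is fired afterwards.
fire(0, "timing pulse")

while not funnel.empty():
    _, _, name = funnel.get()
    print(name)
# Prints "timing pulse" first, then the normal-priority events in their original order.
```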

We would still do the initialization in a different loop, but we'd be able to get information back to the control loop much more easily. To illustrate:

- In the current situation, we would have to create a new queue for getting our "reinitialized objects" back from the coprocessor that is doing the reinitializing. We'd have to check that queue every time (or at least whenever we know we have something outstanding), on top of the many other queue checks we already have.
- If we had an event loop, the structure would automatically wait for any high-priority events (timing source fired, abort/e-stop) and then handle other incoming events. We'd still have the coprocessor, so the event would just be "stick this on the run queue in this location", which would be very, very fast (probably faster than even checking the queue in our current situation).
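
A minimal sketch of that handoff, again in illustrative Python (the coprocessor function, event names, and priorities are assumptions, not engine code):

```python
# The coprocessor does the slow re-init in its own thread and just fires a
# low-priority completion event; the control loop sees it the next time it asks
# for an event, with no extra per-iteration queue polling.
import threading, time
from itertools import count
from queue import PriorityQueue

seq = count()
events = PriorityQueue()      # the same funnel that carries ticks and aborts

def coprocessor(obj):
    time.sleep(0.2)                                      # stand-in for a long init
    events.put((1, next(seq), ("reinitialized", obj)))   # low priority

threading.Thread(target=coprocessor, args=("module A",), daemon=True).start()

while True:
    _, _, (name, payload) = events.get()                 # blocks; no polling needed
    if name == "reinitialized":
        print("stick", payload, "on the run queue in this location")
        break
```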

Other cool stuff this would give us:

- Type-safe messages to the engine, as needed. I.e. whenever we need to add a new message, we can just make a new event; the structure remains the same.
- One such message could be forcing of parameters. Rather than checking our theoretical message queue every time, like we were discussing back in November, we could force parameters just by adding a user event.
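
For the type-safe messages point, a tiny Python analogue (ForceParameter/Abort and their fields are made up just to show the shape):

```python
# Each engine message is its own event with its own payload type, so adding a new
# message means adding a new event rather than widening a catch-all message queue.
from dataclasses import dataclass

@dataclass
class ForceParameter:          # hypothetical message: force a parameter to a value
    channel: str
    value: float

@dataclass
class Abort:                   # another message, with a different payload
    reason: str = ""

def handle(event):
    # The handler branches on the event type; the payload is already the right shape.
    if isinstance(event, ForceParameter):
        print(f"forcing {event.channel} to {event.value}")
    elif isinstance(event, Abort):
        print("aborting:", event.reason or "no reason given")

handle(ForceParameter("engine_speed", 1200.0))
handle(Abort("operator stop"))
```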

smithed commented 9 years ago

But yes, the next step is a prototype and a determinism check. I think it could be really sweet if it works, but if the LV event system is too nondeterministic then...

Beazurt commented 9 years ago

That does sound incredibly promising. If a user doesn't want to be interrupted by any of the features we provide, we could also just give them a mechanism to unsubscribe from that event/msg. It would be a pretty big change to the internals of the engine, but I don't think it would necessarily need to break anything else, which is also a plus.

smithed commented 9 years ago

Yes, unsubscription is awesome, and I don't even think it would be that bad from a code standpoint. Most of our stuff is already a synchronous, semi-event-based loop. We're just talking about adding (a) new features that would benefit, (b) an event for stopping/changing state, and (c) a new loop to handle the timing source. Probably like 2-4 hours once we've validated performance.

smithed commented 9 years ago

Note to self: since event queues are unbounded, we need to manage the timing event in situations where we finish late.

smithed commented 9 years ago

Determinism seems to be OK: about 50 usec for a 1000 usec tick on a 9068, which is about as good as we could get otherwise.

Regarding my last point -- just keep the timer thread at a slightly lower priority than the UI handler. We can also do the reverse and use a flush function, but it seems simpler to say that the main loop needs to run to completion before we check timing again.
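
For reference, a rough Python sketch of the flush alternative -- collapsing piled-up ticks instead of relying on priorities (the queue layout and event names are illustrative only):

```python
# If an iteration runs long, several ticks can accumulate in the unbounded event
# queue; flushing them down to one keeps the loop from trying to "catch up".
from collections import deque

events = deque()

def next_event():
    """Pop the next event; if it's a tick, drop any stale ticks queued behind it."""
    event = events.popleft()
    if event == "tick":
        while events and events[0] == "tick":
            events.popleft()      # discard piled-up ticks
    return event

# Simulate finishing late: three ticks arrived before we got back to the loop.
events.extend(["tick", "tick", "tick", "config change"])
print(next_event())   # "tick" (the pile-up collapses to one)
print(next_event())   # "config change"
```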

smithed commented 9 years ago

OK, actually, stuff gets weird when you add in other loops (to utilize the CPU). Not sure what's happening anymore.

Beazurt commented 9 years ago

Hmm, is the main event loop running at a higher priority than the other loops? Also, how does this affect our timing source implementation? Would we want to base it on user events, or convert a timing source as implemented now into the firing of a user event?

smithed commented 9 years ago

I just converted timing as-is into a loop, and I think that's probably right. It would technically use less CPU to make the timing source into its own loop, but meh.

I had 3 loops: the main loop at priority 65000, the timer loop at 64999, and the CPU loader at 1. Seems to have weird delays.

smithed commented 9 years ago

not deterministic, closing