natevw opened 10 years ago
Ah, just found https://github.com/signalspec/signalspec. Is that maybe the most up-to-date version/replacement of Components?
SignalSpec isn't as relevant to Component design as this Components doc. SignalSpec is a language useful for defining communication with hardware peripherals.
A lot of the ideas from Components are taken from Signalspec, and they're designed to interoperate, but they're separate projects.
Signalspec is a project I've been doing on-and-off for a couple of years now; it was originally for a forthcoming logic-analyzer-like device at Nonolith Labs, and has since expanded to cover both generating and parsing digital, RF, and analog signals. It's an entirely new DSL for modelling protocols using regex-like descriptions of state machines, while Components binds similar concepts to C, Rust, JS, Lua, etc. Signalspec is not (designed to be) Turing complete, nor can it talk to hardware on a microcontroller on its own. You might use a C component for LPC1830 I2C with a Signalspec description of the MMA84 accelerometer protocol, and then a JS component on top that POSTs to an HTTP endpoint when it detects freefall.
The messaging between components is close to CSP / Actors, but explicitly has hierarchical state, somewhat like Harel / UML Statecharts. For example, SPI bytes can only occur within a transaction, and transactions can only occur within an instance of the SPI component (whose existence behaves like an action, because it can have a start and an end, and has actions within it). Whereas in CSP the event trace would be flat, here events are nested. In Signalspec-like notation:

```
SPI {
  transaction {
    byte(0x12, 0x34);
    byte(0x56, 0x78);
  }
  transaction {
    byte(0x55, 0xAA);
  }
}
```
The issues you point out with timing and error handling are ones I've been thinking about as well.
One way it differs from Node and some actor systems is that messages aren't concurrent. It's a state machine, with each component only in one state at a time. In the JS bindings, this probably means queuing the commands, which makes your example a little more reasonable.
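A minimal sketch of what that queuing could look like in a JS binding (all of the names here are made up, not a real API):

```js
// Hypothetical JS binding: each component runs its state machine one
// action at a time, so commands issued while it's busy get queued.
function Component(runAction) {
  var queue = [];
  var busy = false;

  this.invoke = function (action, cb) {
    queue.push({ action: action, cb: cb });
    if (!busy) next();
  };

  function next() {
    var item = queue.shift();
    if (!item) { busy = false; return; }
    busy = true;
    runAction(item.action, function (err) {
      if (item.cb) item.cb(err);
      next(); // only start the next command once this one completes
    });
  }
}
```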
Signalspec handles timing by making the samples of the signal an explicit part of the language, like regexps do characters. If you want the wire high for two samples, you write two high samples: `#h; #h`, or use a `repeat` to match or generate a particular number or range of them. (There'll probably be `time(1s, #h)` sugar to use the known sample rate to generate the appropriate repeat count.) That works on logic analyzers and FPGAs, but on the other hand, microcontrollers don't have a "sample rate" short of the clock speed, and I'm not entirely sure what the best GPIO abstraction is for Components. The approach I've taken so far gives you `high` and `low` as states/actions that can be invoked by other components without any explicit timing semantics. The component manipulating the GPIO pin would need to use its own timer (which could be a virtual timer component using a queue to multiplex over a hardware timer), and trigger the events only on timer completion. Combined with "wait at least" semantics like `setTimeout`, this works for most of the things we use GPIOs for in a microcontroller: blink an LED, give a chip 1ms to wake up before talking to it, etc. Input is a little tougher, because it needs to distinguish between "what is the pin now?" and "tell me when it changes", and if remote, probably avoid trying to send every single transition.
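For instance, a sketch of the "give a chip 1ms to wake up" case (the `pin`, `spi`, and timing APIs here are hypothetical):

```js
// Hypothetical bindings: pin.high() / pin.low() invoke the GPIO component's
// states with no timing semantics of their own; the timing lives here,
// using setTimeout's "wait at least" semantics.
function wakeChip(pin, spi, cb) {
  pin.high();               // assert the wake line
  setTimeout(function () {  // give the chip *at least* 1ms to wake up
    pin.low();
    spi.transaction(cb);    // hypothetical: now it's safe to talk to it
  }, 1);
}
```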
Error handling is always the troublesome part of things that claim to be network transparent. I don't have a complete solution, but there are a few mitigating factors. Embedded systems have less that can go wrong than a typical server app, and in general, many failure modes are simply not recoverable. If a peripheral fails to initialize, that's a hardware problem (but you may want to keep other functionality working, or tell the network that the device is dead). Similarly with network connectivity: if the primary purpose of a device is to publish sensor data to the cloud and your network fails, there's not much to do besides wait for the network to come back up. Over USB, it's even less of an issue, because if that fails, the device probably loses power too.
There are some complications around actuators: e.g. if your messages are "open water valve [wait 30s] close water valve" and the network fails in the middle, that could be bad. It would be better to put the timing in a component on the MCU that could receive an "open water valve for 30s" command and use a local timer. There are certainly cases that require more complex error handling, though.
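A minimal sketch of that MCU-side component (the names are made up):

```js
// Hypothetical MCU-side valve component: the 30s lives in a local timer,
// so a network failure after the command arrives can't leave the valve open.
valve.on('openFor', function (ms) {
  valve.open();
  setTimeout(function () { valve.close(); }, ms);
});
```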
We avoid the issues regarding primitives from your link by simply not including dynamically-sized data structures in the protocol, making everything a series of events. While "full network transparency" would be nice, I'd settle for "mostly network transparent, good enough for prototyping", and would break the network failure modes out into APIs that explicitly include them, as "automatic network binding generation" for production use. The former would be great for cases like prototyping in JS for a Cortex M0 that couldn't fit a JS runtime, where the JS code runs on the PC, and yes, if you pull the USB cable, things will break. The network binding generation isn't even necessarily a new protocol; it could be CoAP, MQTT, REST, etc.
As for performance, my hope is to make as much of this boil away in Link Time Optimization as possible. Typical microcontroller programs can get inlined very aggressively, because recursion is rare. I've seen microcontroller firmware in which the only functions after optimization are `main()` and a few interrupt vectors, and that's not completely out of reach here. Even if you call back and forth between a few components in the process of handling an interrupt, LLVM can inline the tail calls into a linear code sequence, at least until you hit JS. (If you represent the states of a state machine as function pointers, then you get more functions, but that's almost just an optimization of using integer states and letting the compiler generate a jump table for a `switch(){}`. TBD if that's actually an optimization after the amount of intraprocedural optimization it prevents.)
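The inlining tradeoff above is about the compiled C/Rust components, but in JS terms the two representations look roughly like this (illustrative event names):

```js
// Integer states dispatched through a switch
// (a compiler can turn this into a jump table):
var IDLE = 0, RUNNING = 1;
var state = IDLE;
function onEvent(ev) {
  switch (state) {
    case IDLE:    if (ev === 'start') state = RUNNING; break;
    case RUNNING: if (ev === 'done')  state = IDLE;    break;
  }
}

// vs. each state as a function, with the current state held as
// a "function pointer":
function idle(ev)    { if (ev === 'start') current = running; }
function running(ev) { if (ev === 'done')  current = idle; }
var current = idle;
function dispatch(ev) { current(ev); }
```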
Both the `begin` and `end` parts of an action are optional. In cases like GPIO, where there's no `end` emitted, and nothing waiting for it, there's no code generated.
ARM Cortex-M has a cool feature called sleep-on-exit, in which the processor only runs interrupts and automatically goes to sleep when no interrupts are pending. Because there's no thread being interrupted, it doesn't need to spend time stacking and unstacking registers, so interrupts are really cheap, and it basically becomes a purely event-driven processor. It would sometimes have to drop into Thread mode for the C or Rust components that might want a stack like a traditional thread, and maybe for heavyweight things like the Lua runtime, but you could get pretty far just with interrupts.
How high up the stack are you intending this to go? How would, say, these non-sugar parts of the current SD Card driver implementation be encapsulated into a corresponding component?
I think this is an especially interesting situation because there's coordination required between both a GPIO pin and the SPI bus, and lots of silly signal timing stuff that could be optimized to, say, share the SPI bus more eagerly but only if certain preconditions were met, etc. (Not to mention, a really far-out case, where your trace routing and processor features could support e.g. veeeery careful periodic tying of the input line of an audio chip to the output line of the SD Card or other DMA-type tricks.)
Then what about even higher-level things like http and fs and whatnot? Would those be components too? (Maybe this is getting into #6 territory…)
Are Components an IDL for something of an Actor model or Communicating Sequential Processes?
If so, I wonder if this is a confusing way to start thinking about the situation:
Seems like what you're saying is "imagine writing a generic RPC wrapper around the node.js model" — objects that have a lot of async methods `obj.makeMeASandwich(function (refusal, food) {})` and emit events `obj.on('ranOutOfBaloney', …)`.

I don't feel this is a good foundation to build on. It is not a satisfying model — perhaps practices like de-Zalgo were the first canary of this. Object-oriented programming has this "encapsulated state" fetish, and slathering a layer of "you may expect a response to this FOIA request at Some Later Date™" on top gets messy. (In practice, I don't think node.js suffers a lot from this because objects are generally stream-oriented, not heavily shared, and events kind of happen to get fired in some sort of sane fashion.)
Imagine an object that represents some remote state, in node.js. Say it's a database or, to make it even more apropos, a GPIO pin that's on the other side of a network. Your interface is something like:

- `remotePin.set(bool, cb)` — sets the pin value to `bool`, calls `cb(err)` after something goes wrong or when it is so
- `remotePin.get(cb)` — calls `cb(err, bool)` after something goes wrong or we have the value locally
- `remotePin.on('change', …)` — emitted whenever the value changes

Seems simple enough, eh? Now what is the output of:
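Something like this, say (illustrative, using the `remotePin` interface above):

```js
remotePin.set(true,  function (err) { console.log('set(true) done:',  err); });
remotePin.set(false, function (err) { console.log('set(false) done:', err); });
remotePin.get(function (err, val) { console.log('get:', err, val); });
remotePin.on('change', function (val) { console.log('change:', val); });
```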
Is (the local implementation of) the `remotePin` interface allowed to optimize its communications, batching up the two sets and sending only one? In this case the two sets are in the same tick of the runloop, so that seems fair enough. Maybe the pin communication actually happens over some sort of polling cycle, where every 25ms you exchange what the value is and what it should be… do we emit the 'change' event based on local toggles or on the remote values seen? What if something actually needs to toggle on and then off? I guess it needs to wait for the callback of the first set, of course, but what is the result of a `get` started sometime after the first `set`? Depending on the communications channel, it might make sense to have the 'change' events fired based on the local calls to `set`, or maybe to have the remote end emit them after it sees the set, or maybe to have the local end emit them when it sees the value change…
I don't know if this is the best example to explain with, but hopefully it exposes some of the questions that are left unanswered by the model of "weakly ordered OOP".

And ugh, am I tempted to just delete this whole post, because I don't really have a counterexample of "look how clean it is if you specify your contract solely in terms of typed messages" or whatnot.
Also, I'm struck by the fact that the "best" way to turn a pin on locally looks like:
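Something like (using the `high` action mentioned above; the binding is hypothetical):

```js
pin.high(); // synchronous, cheap, can't meaningfully fail
```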
Whereas in some sort of "mailboxy communicating actor" Go/Erlang/Rust thing it might look like:
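Something like this, say (sketched in JS-style pseudocode rather than actual Go/Erlang/Rust; `pinMailbox` and its message names are made up):

```js
// Send the pin's process a message asking it to go high…
pinMailbox.send({ type: 'set', value: true });
// …and, if you care that it actually happened, wait for a reply message.
pinMailbox.receive('setDone', function (err) { /* now it's high, probably */ });
```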
Seems you have to optimize for one or the other (sync/cheap/reliable vs. async/expensive/unreliable). If you try to make remote look local you end up with e.g. these pros/cons. If you force yourself to always treat local as remote, you end up with inefficient callback hell type stuff.
To try to solve this across all sorts of languages with all sorts of concurrency solutions… that seems really challenging…
Do it :+1: