michaelpalumbo opened this issue 5 years ago
Creating a blinking LED is mostly a question of how the data will come in and how it should pulse. On the JavaScript side it can really be any shape (preferably something that looks like a light), with its emission or material colour changing based on the incoming values. Should be pretty simple to implement.
That's great to hear. We'll need to set up this secondary communication channel between the server and the client (as it will be useful for other things as well).
What if what the client received was instead an update on the blink rate and the length of the blink (aka pulse-width)? Pros and cons:
Pro:
Con:
I see three milestones for the LED design:
I'd vote for getting the simplest case working first: the LED displays the last value it received, and this value is clamped in the range -1..1, and either it displays absolute value or it shows a different colour for negative signals.
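That simplest case could be sketched in a few lines on the client. A minimal sketch (function and colour choices are illustrative, not from the MSVR codebase), combining both options: brightness shows absolute value, and hue distinguishes sign:

```javascript
// Simplest LED behaviour: display the most recent value, clamped to [-1, 1].
// Positive values light the LED green, negative values red, with brightness
// proportional to |value|. (Hypothetical helper, not current MSVR code.)
function ledColor(value) {
  const v = Math.max(-1, Math.min(1, value)); // clamp to [-1, 1]
  const brightness = Math.abs(v);
  return v >= 0
    ? { r: 0, g: brightness, b: 0 }  // positive: green
    : { r: brightness, g: 0, b: 0 }; // negative: red
}
```

The returned triple would then be applied to the LED mesh's emissive/material colour on each update.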
I don't think we need to send rate/pulse-width data packets just now. The socket latency on localhost should be low enough that just displaying the most recent value would work well for triggers and LFOs, and these are about the only signals whose representation can be fully unambiguous. An LED in VR can flicker at most at around 45 Hz (because of Nyquist), meaning it can only really show sub-audio waveforms (and special cases like white noise and DC, which happen to look the same at all frequencies).
For other kinds of signal there are a lot of things we can do, but it is a hard problem -- part of the general difficulty of representing the important features of high-resolution data (audio signals) with a low-resolution output (visuals). We could show analyses of higher frequencies (like average envelope, spectral centroid, etc.), as these analyses are also low frequency. But that might get expensive and I'd rather save it for later.

Even then, the problem is hard, as you don't know in advance what kind of signal you might be receiving, and thus what kinds of features make sense. It might be a unipolar signal (in which case an envelope tracking the average value might make sense) or it might be bipolar (in which case the average will not represent it well, but an RMS average would, so long as the signal has no DC offset... and so long as representing the envelope is more important than representing the sign). It might have hard edges like a square (in which case averaging might wrongly soften the edges). Also, it might be oscillating between +4 and +5, in which case clamping at +/-1 makes no sense.

We dealt with some of this in working out the streamline visualisations in Gibberwocky, which is a very similar problem, and we solved it by adapting the range to the signal. We show a kind of rolling oscilloscope of the most recent N values (a hundred or so) for any signals generated. We track the average min & max value of a signal to work out the vertical range of the streamline graph, extend the graph if the signal goes beyond those boundaries, and periodically shrink it if it has stayed in a smaller range for a while. (The values themselves are sent as a single snapshot of all active signals every 33ms, and the renderer takes care of storing them in a list to make the graph.)
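The adaptive-range idea above could be sketched like this (a hedged sketch, not the actual Gibberwocky code; class and method names are made up): keep a rolling window of recent samples, widen the displayed range as soon as a sample escapes it, and periodically relax it back toward the window's actual extremes.

```javascript
// Adaptive vertical range for a rolling oscilloscope, as described above.
class AdaptiveRange {
  constructor(windowSize = 100) {
    this.samples = [];            // rolling window of recent values
    this.windowSize = windowSize;
    this.min = -1;                // initial displayed range
    this.max = 1;
  }
  push(v) {
    this.samples.push(v);
    if (this.samples.length > this.windowSize) this.samples.shift();
    // extend the range immediately if the signal goes beyond it
    if (v < this.min) this.min = v;
    if (v > this.max) this.max = v;
  }
  // call occasionally (not per-sample) to shrink toward recent extremes
  shrink(factor = 0.1) {
    const lo = Math.min(...this.samples);
    const hi = Math.max(...this.samples);
    this.min += (lo - this.min) * factor;
    this.max += (hi - this.max) * factor;
  }
  // map a value into 0..1 for drawing the graph
  normalize(v) {
    return (v - this.min) / (this.max - this.min || 1);
  }
}
```

A signal oscillating between +4 and +5 would, after a few pushes, be drawn across the full height of its graph rather than being clamped flat at +1.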
So I would imagine in the long run that we could do something similar for MSVR: to adapt to the kind of signal received where we can, and show multiple properties (instantaneous value, RMS-smoothed envelope, spectral balance etc.) in mini-oscilloscopes.
For now, Max will need to take a periodic snapshot every 10ms or so of all visualized LEDs, and send them as an array to the renderer. We could use a buffer~ in Max to capture these snapshots out of gen~, just giving each LED a unique index in the buffer~. Then we'd send the buffer~ contents over a websocket directly to the renderer, as a separate channel of communication. (At this stage, I don't think it even needs to go through the server -- we only need to know what the unique indices are.) I believe the [ws] object in the @worldmaking/Max_Worldmaking_Package can send jitter matrices as Float32Array buffers, and we can use [jit.buffer~] to wrap a buffer~. Or I could add support to [ws] for sending a buffer~ directly.
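On the renderer side, receiving those snapshots might look something like the sketch below, assuming the snapshot arrives as a raw Float32Array buffer and each LED was given a fixed channel index in the buffer~ on the Max side. The port number and the `leds` registry are hypothetical:

```javascript
// Hedged sketch: consume per-LED snapshot buffers arriving over a websocket.
// Each LED reads the channel index it was assigned in the Max-side buffer~.
function makeSnapshotReceiver(leds /* array of objects with .setValue(v) */) {
  return function onMessage(event) {
    // event.data is an ArrayBuffer when ws.binaryType = "arraybuffer"
    const values = new Float32Array(event.data);
    for (let i = 0; i < values.length && i < leds.length; i++) {
      leds[i].setValue(values[i]);
    }
  };
}

// Usage (hypothetical port and registry):
// const ws = new WebSocket("ws://localhost:8084");
// ws.binaryType = "arraybuffer";
// ws.onmessage = makeSnapshotReceiver(ledRegistry);
```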
This is an example of the adaptive streamlines (oscilloscopes) in Gibberwocky, FYI.
[x] get the paths of all module outputs.
[x] create lookup table to map output paths to a buffer channel.
[x] script each module outlet to a [poke] gen op; assign a [constant nn] op, where nn is the buffer channel
[ ] ~send an update of the lookup table containing outlet paths and channel assignments to clients each time this table is modified.~ no longer necessary -- paths and values get interleaved and are sent as an array within a single js object.
[x] send the paths & values list out a websocket to the client, i.e. (messages received by a simple ws client in the browser):
[2019-08-09 15:22:19.675] {
"visualFeedback" : [ "lfo_1.sine", -0.511124014854431, "lfo_1.phasor", 0.33568000793457, "lfo_1.pulse", 0.0, "lfo_1.sine_index", 0.33538556098938, "lfo_1.saw", -0.307862997055054, "outs_1.left_(mono)", -0.0, "outs_1.right_(stereo)", 0.0, "outs_1.left", -0.0, "outs_1.right", 0.0 ]
}
[2019-08-09 15:22:20.175] {
"visualFeedback" : [ "lfo_1.sine", -0.511124014854431, "lfo_1.phasor", 0.33568000793457, "lfo_1.pulse", 0.0, "lfo_1.sine_index", 0.33538556098938, "lfo_1.saw", -0.307862997055054, "outs_1.left_(mono)", -0.0, "outs_1.right_(stereo)", 0.0, "outs_1.left", -0.0, "outs_1.right", 0.0 ]
}
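The snapshots above interleave path strings and values in a flat array. A minimal sketch of turning one such message into a path-to-value lookup on the client (the function name is illustrative):

```javascript
// Parse one visualFeedback snapshot: [path, value, path, value, ...]
// into a { path: value } lookup object.
function parseVisualFeedback(msg) {
  const flat = msg.visualFeedback;
  const values = {};
  for (let i = 0; i < flat.length; i += 2) {
    values[flat[i]] = flat[i + 1];
  }
  return values;
}
```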
[x] need a way to manage the buffer channels: if a module is deleted, either the next new node gets slotted into the freed channels (assuming the channel counts match), or all module outlets get shifted so that there are never gaps in the channel assignment. This will also require sending an update to the lookup table.
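The first option (recycling freed channels) could be sketched as a small free-list allocator; this is a hypothetical helper, not current MSVR code:

```javascript
// Assign buffer channels to outlet paths, reusing channels freed by
// deleted modules before allocating new ones, so gaps get filled.
class ChannelAllocator {
  constructor() {
    this.next = 0;    // next never-used channel
    this.free = [];   // channels returned by deleted modules
    this.byPath = {}; // outlet path -> channel
  }
  assign(path) {
    const ch = this.free.length ? this.free.shift() : this.next++;
    this.byPath[path] = ch;
    return ch;
  }
  release(path) {
    const ch = this.byPath[path];
    if (ch !== undefined) {
      delete this.byPath[path];
      this.free.push(ch); // slot will be reused by the next new node
    }
  }
}
```

The trade-off versus shifting all outlets down is that recycling keeps existing channel assignments stable (no lookup-table rewrite for untouched modules), at the cost of channel order no longer matching creation order.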
@grrrwaaa I've managed to script poke objects into [gen~ msvr_world], but the [jit.buffer~] object creates a new matrix plane for each buffer channel. This means that a patch of, say, 5 nodes could require 5-20 channels and as many matrix planes. You can see how this would very quickly get quite expensive on the jitter side. Not sure how to overcome this.
@zodsmar I've merged your work on instancing and my work on the max client into a new branch called develop. The idea here is that we work on features in their own branches, then merge them into develop for testing. Once verified working, we will periodically merge develop into master. Any questions?
From within develop you can now run the Max patch to get the data from gen~ for outlet visualizations. ALSO, and this is important: there were some errors in the scene JSONs that I fixed (particularly scene_rich.json and scene_simple.json).
Receiving data directly from Max to the server using port 8084 (port subject to change). Data comes in as -1...1 and is being mapped to 0...255 for RGBA values. Depending on how we do colours and emission, the mapping range might need to change, but for now this works. I get the path plus the mapped value; just need to apply the value to the outlets. Will do this once Project Ghost is finished being restructured.
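For reference, the -1...1 to 0...255 mapping described above is a linear rescale; a minimal sketch (the exact range may change once emission handling is settled, and the function name is made up):

```javascript
// Map a bipolar signal value in [-1, 1] to a byte in [0, 255] for RGBA.
function signalToByte(v) {
  const clamped = Math.max(-1, Math.min(1, v)); // guard against out-of-range input
  return Math.round((clamped + 1) * 0.5 * 255); // -1 -> 0, 0 -> 128, 1 -> 255
}
```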
@Zodsmar here's the current spec:
{
  "lfo_1__sine": -0.7605468034744263,
  "lfo_1__phasor": 0.6140456795692444,
  "lfo_1__pulse": 1,
  "lfo_2__sine": -0.7223623394966125,
  "lfo_2__phasor": 0.6223999857902527,
  "lfo_2__pulse": 1,
  "ffmvco_1__vco_1": 0.3347249925136566,
  "ffmvco_1__vco_2": -0.3240717351436615,
  "ffmvco_1__master": 0.9353091716766357,
  "vca_1__output": 0,
  "comparator_1__max": 0,
  "comparator_1__min": 0,
  "outs_1__left": 0,
  "outs_1__right": 0
}
this is nearly ready: just need to change the outlet colour, since the viz data is ready.
nearly ready.
Priority: pulsing LED. Requires solving many problems on the Max side and in how we channel that data. Not part of OT; it's a separate channel (it doesn't represent a change to the definition of the system). Like the stderr stream or the Max console, it isn't part of the Max patch, but it is still linked to it.
possible method is output from gen~: as an [out], or by writing into a [buffer~]?
possible uses:
possible implementation: