lichen-community-systems / Flocking

Flocking - Creative audio synthesis for the Web
GNU General Public License v2.0

Add support for OpenSoundControl to Flocking #48

Open colinbdclark opened 11 years ago

colinbdclark commented 11 years ago

This should, at first, be implemented for node-flocking. Basic OSC support should include:

  1. The ability to both send and receive OSC messages within a Node.js server
  2. The ability to automatically transform OSC messages to Flocking messages (e.g. calls to get()/set())

This will eventually also require improvements to Flocking in order to support a more declarative means for creating and naming synths within the environment.

From there, we can expand it to include client-side support via a WebSockets-based OSC proxy between a Node.js server and a browser.
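
As a rough illustration of the two items above, here's a minimal sketch (assuming node-flocking can be required as "flocking", a node-osc-style UDP server like the one discussed further down this thread, and a hypothetical "/synth/set" address convention):

var flock = require("flocking"),
    osc = require("node-osc");

// A placeholder synth with a single named sine oscillator.
var synth = flock.synth({
    synthDef: {
        id: "carrier",
        ugen: "flock.ugen.sinOsc",
        freq: 440
    }
});

// Listen for OSC messages on an arbitrary UDP port.
var server = new osc.Server(57121, "0.0.0.0");

server.on("message", function (msg) {
    // node-osc delivers each message as [address, value, value, ...],
    // e.g. ["/synth/set", "carrier.freq", 220].
    if (msg[0] === "/synth/set") {
        synth.set(msg[1], msg[2]);
    }
});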

MylesBorins commented 11 years ago

I think it is safe to assume we will need to rely on Node.js for this. With that being said, the majority of OSC functionality is available out of the box with node-osc. It is fairly trivial to make an OSC server or client to send / receive messages. As such, would it make sense to roll it directly into Flocking, or should this basic functionality exist more as a demo in the language, showing how the setup can be implemented?

We discussed the possibility of doing code generation based on a model... having a way to represent the OSC connections in JSON.

interface = {
    slider1: {
        "bind": that.polysynth.setsomething,
        "default": 25,
        "min": 0,
        "max": 127,
        "scale": "linear"
    }
}
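
One hypothetical way a model like this could be consumed (the names and the scaling behaviour here are illustrative, not an existing API) is to walk its entries, listen for an OSC address derived from each key, and scale incoming controller values before handing them to the bound setter:

// Hypothetical interpreter for the interface model sketched above.
function bindInterface (oscServer, model) {
    Object.keys(model).forEach(function (name) {
        var spec = model[name];

        oscServer.on("message", function (msg) {
            if (msg[0] !== "/" + name) {
                return;
            }

            // Normalize the incoming controller value from [min, max]
            // to 0..1; "linear" is the only scale handled in this sketch.
            var raw = msg[1],
                scaled = (raw - spec.min) / (spec.max - spec.min);

            spec.bind(scaled);
        });
    });
}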

This is somewhat similar to the way in which interfaces / OSC bindings are generated for Faust... an example of which you can see here:

https://code.google.com/p/faust-lv2/source/browse/effects/flanger.dsp?r=c21aa977815c1a51c0599157c797c50174d70b2a&spec=svnaf0292aff58f332ef1d07c2957f0dd015fdbd5a6

colinbdclark commented 11 years ago

Hey,

Awesome, I'm glad you're starting to think through this. There's a lot here; let's try to break it down and see what we come up with.

It might not be quite as trivial as you might imagine to integrate OSC with Flocking, but you're right that it isn't rocket science. I guess the first step is to pick a library that enables both sending and receiving of OSC from within a Node.js instance.

But once you've got that working, the first question, which I don't fully have the answer to, is "what should someone actually be able to do in Flocking via an OSC message?" We'll want to create a binding between OSC messages and actions within Flocking. From what I know of OSC semantics--I'm still reading about it--you represent actions based on a hierarchy of containers and methods. SuperCollider, for example, seems to maintain a flat hierarchy and prefixes specific methods with their target (e.g. "/s_new" to create a new synth, or "/n_free" to free a node).

Off the top of my head, here are a few things I can imagine someone would want to do via OSC in Flocking:

  1. Create a new, named synth (the argument to the message would be the synthdef JSON)
  2. Set and get input values on a named synth (two arguments: the synth's name and the payload to get/set)
  3. Send OSC messages to another process (e.g. SuperCollider or Max) via standard Flocking semantics (i.e. get and set), transforming the results into whatever native messages that software requires (via a set of pluggable JSON->OSC translators)

If you've got the time, it wouldn't hurt to compile a small list of the actions supported by a few software packages, just so we can compare. I'm also quite interested in how the Monome's OSC binding support works.

I like your idea for declarative user interfaces, and I think your sketch JSON looks like a good start. Can you elaborate a bit more, though, on what you see as the relationship between declarative user interfaces and OSC support?

As I think aloud here, it's looking pretty clear that to do this we'll need support in the Flocking environment for giving synths unique names so that they can be addressed via OSC and your declarative UI structure. This is discussed briefly in gh-27, and I will aim to add that feature ASAP.

MylesBorins commented 11 years ago

So as for a library, we could use thealphanerd/node-osc, an npm-tracked OSC library that I maintain. It gives you the majority of features offered by other environments (such as pd / max or supercollider), and if we find there are features we don't have... well, I'll write 'em. Anyways, node-osc gives you the ability to make a server / client, as well as to package and send messages in segments or in one shot.

From my experience with OSC... the best way to think about it is as a combination of an address and one or more values. A simple message could be thought of as:

/this/is/an/address iifss 20 20 2.2 foo bar

In node-osc (as with max or pd), the second part of this message format (the type tags) is omitted and automagically generated for you. So a simple setup of a client (defining an IP address and port to send to) would be represented as:

var osc = require('node-osc');

// Send messages to localhost on port 3333.
var client = new osc.Client('127.0.0.1', 3333);
client.send('/oscAddress', 1, 1, 2, 3, 5, 8);

Setting up a server to receive messages and register a callback would look like this:

var osc = require('node-osc');

// Listen on port 3333 on all network interfaces.
var oscServer = new osc.Server(3333, '0.0.0.0');

oscServer.on("message", function (msg, rinfo) {
    console.log("Message:");
    console.log(msg);
});

Each message is an array, the first item of which is the address, the rest of the array is populated with the values.

This library could potentially be improved. It has a couple of dependencies to handle the lower-level encoding (specifically, packing the message into a UDP binary payload). It was also written quite some time ago, so the code could probably be refactored / improved.

So with making a server and a client already taken care of, we need to start thinking about what functionality we would like to control using OSC. The idea of creating / destroying synth defs via OSC had not occurred to me at first, although I think that is a very interesting idea... and I think that the way the monome operates would be a great model to examine.

The monome has a main server that is always running at a specified port (or a Bonjour service). This server can be polled with a specific address, and then responds with the current state of the system (list of monomes / ports). As for the devices, they start their own server and listen on a specific port, and send on a different port. They have messages for updating the LEDs, and simple messages they send when interacted with. As well, there are a handful of messages that can be sent to retrieve state information.

My idea regarding an "interface" file hinges on the concept that any synth def will have a limited number of "connections" that will need to be made. By representing it with a model, we save the user from having to write extensive bindings / callbacks, which can be both messy and frustrating.

I had the pleasure of talking with Yan over at Grame this week, and he was very interested in trying to make a consistent JSON model to represent interfaces between Faust and Flocking. This would allow for some very interesting collaboration / interaction between the two languages. He also informed me that Faust has a JavaScript compiler, allowing you to compile any Faust code into Web Audio API JS... but that is a discussion for another thread.

So, with that being said... I think the only thing missing from the above representation is the idea of using OSC to control other platforms... which I think is a difficult problem. OSC is not exactly a standard in the sense that every platform uses the exact same message structure. In order to get the kind of functionality you are suggesting, we would need to do one of two things:

1) Create a standard set of message types that can be replicated in different languages

2) Make custom bindings for every project in every language

Obviously the former is the only option. This is also something we get for free if we generate code from models and collaborate with Faust to keep the model representation and OSC implementation consistent between Flocking and Faust. This is especially true considering Faust can compile to a variety of platforms with OSC support baked in.

What type of use cases were you thinking of whereby Flocking semantics would control other environments? Would this be more about creating callbacks that could control other environments? If that is the case, it shouldn't be too hard to interpret the "interface" model in reverse. One interpretation could create a server that listens for messages to control the synth; the other could create a client that sends messages whenever those same elements are modified. Those messages should be absolutely identical. It may turn out that in order to leverage those messages in another environment, individuals will need to create their own means of parsing them... although that is not to say that the parsing code itself could not be generated by Faust.
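
For example, here's a very rough sketch of interpreting one model entry in both directions, reusing the node-osc server and client shown above; the changed() hook stands in for whatever change-notification mechanism Flocking might eventually provide, and doesn't exist today:

// Interpreting one model entry in both directions.
function bindBothWays (oscServer, oscClient, name, spec) {
    // Server interpretation: listen for the message and drive the synth.
    oscServer.on("message", function (msg) {
        if (msg[0] === "/" + name) {
            spec.bind(msg[1]);
        }
    });

    // Client interpretation: send the identical message whenever the
    // same element is modified locally.
    spec.changed = function (value) {
        oscClient.send("/" + name, value);
    };
}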

Thoughts?

drart commented 11 years ago

Hey Guys,

Check this out:

http://www.eecs.umich.edu/nime2012/Proceedings/papers/299_Final_Manuscript.pdf

This paper presents an idea, and some implementation, for a standard way for OSC identities to register and exchange messages with each other. The problem with making a standard set of messages is that OSC devolves back into MIDI.

I can see one main use case being running Flocking headless on a Raspberry Pi and controlling it from some other source. Colin and I had talked about that possibility. In that case there would need to be some sort of control protocol.

colinbdclark commented 11 years ago

@drart, thanks for the article. This looks really helpful. I think your point about the fact that generalized, cross-application messaging tends to devolve back into MIDI is an important one. My sense is that, philosophically, any attempt to devise a standard taxonomy of messages across all applications will fail due to differences and an aversion to change--different synthesis environments genuinely have unique designs, priorities, and approaches, and thus different messaging styles. Instead, we should take an approach similar to the one we use in Infusion: support transformation to and from different messaging formats. It's a bit like the difference between translating between two languages and using Esperanto, if that makes any sense.

My personal preference would be to try to model OSC messaging in Flocking as closely to the semantics of REST as possible--the universal actions of get, set, create, and delete. Coupled with an open, extensible address space, this should give us sufficient tools to model most synthesis actions in Flocking.

James McCartney has a nice slide deck on the design of OSC support in SuperCollider where he describes his motivation for not taking this hierarchical approach, but I suspect that it can be implemented performantly with a bit of cleverness. Most importantly, it will provide an approach that is consistent with real-world web idioms such as REST and also with the Infusion ChangeApplier idiom. For example, we'd get HTTP-based messaging (i.e. via AJAX from a web-based UI) essentially for free, which would be very useful for simple applications.

With this approach, I imagine the main job of integrating OSC with Flocking will be to define "message transformations" that convert incoming OSC messages to Flocking method calls (and eventually, vice versa). So, given an incoming OSC message such as, say, this pseudo-message:

"/enviro/synths/myAwesomeSynth/set", ["carrier.freq", 440, "mod.mul", 1.0]

We would transform this to an analogous set() call on a synth named myAwesomeSynth like this:

flock.enviro.shared.set("myAwesomeSynth", { "carrier.freq": 440, "mod.mul": 1.0 });
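
A sketch of that incoming transformation might look like this; the address scheme is the one proposed above, and the flock.enviro.shared.set() signature is taken straight from the pseudo-code rather than from an existing API:

// Transform an incoming OSC message into a Flocking set() call.
function transformIncomingSet (address, args) {
    // e.g. address = "/enviro/synths/myAwesomeSynth/set"
    var segs = address.split("/"),
        synthName = segs[3],
        action = segs[4];

    if (action !== "set") {
        return;
    }

    // Fold the flat [path, value, path, value, ...] argument list
    // into a change object.
    var change = {};
    for (var i = 0; i < args.length; i += 2) {
        change[args[i]] = args[i + 1];
    }

    flock.enviro.shared.set(synthName, change);
}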

You can imagine that there would be a similar outbound process for, say, controlling a synth running on the SuperCollider server using Flocking's API. Given a call like this:

var superColliderSynth = flock.synth.sc3Proxy("synthName", server);
superColliderSynth.set({ "carrier.freq": 440, "mod.mul": 1.0 });

We would transform these actions into the appropriate SuperCollider-compatible OSC messages (in pseudo form again here):

"/s_new", "synthName" // Let's say this is node 200 running on the SC server. "/n_set" [200, "carrierFreq", 440, "modMul", 1.0]

To start, this could be factored as a simple MessageTransformer object with pluggable "incoming" and "outbound" strategies defined as functions.
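
In that spirit, a minimal sketch of such a MessageTransformer (all names hypothetical) could simply pair the two strategies sketched above:

// A MessageTransformer with pluggable strategies.
function messageTransformer (options) {
    return {
        incoming: options.incoming,   // OSC message -> Flocking call
        outbound: options.outbound    // Flocking call -> OSC message
    };
}

var scTransformer = messageTransformer({
    incoming: transformIncomingSet,
    outbound: scSetMessage
});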

colinbdclark commented 11 years ago

@TheAlphaNerd, I was also thinking that it might be helpful to think through some of your ideas for model-based bindings between UI controls and synths using some specific, concrete examples.

Can you create a couple of example Flocking synths (or borrow a few from the demo playground) and sketch out some user interfaces for them, showing what the JSON UI model that binds the two of them might look like, specifically? That should help make the discussion more tangible and allow us to see how this intersects with more general OSC support, too.

colinbdclark commented 10 years ago

Preliminary support for OSC within Flocking applications running on Node.js is now available here:

https://github.com/colinbdclark/flocking-osc

This currently only supports OSC over serial, but additional transports will be added (including UDP, TCP, and especially Web Sockets) in the future.

This also includes an early version of the "synth input mapper," which is responsible for performing arbitrary transformations between OSC addresses (or, properly, any path-like form) and Synth inputs. This includes the ability to define ugen-level scaling and transformation of incoming values. This will ultimately be harmonized with Infusion's new Model Relay features.

An example of how flocking-osc can be used is available here:

https://github.com/colinbdclark/flocking-osc-fm-synth/blob/master/lib/app.js

In particular, an example of how to configure the input mapper is located here:

https://github.com/colinbdclark/flocking-osc-fm-synth/blob/master/lib/app.js