h5bp / lazyweb-requests

Get projects and ideas built by the community
https://github.com/h5bp/lazyweb-requests/issues

A standardized framework for building and including Web Audio API instrument/effect "patches" #82

Closed mattdiamond closed 5 years ago

mattdiamond commented 12 years ago

Title sums it up... I'm envisioning a javascript plugin that allows the user to easily include different instruments/effects with a consistent API (e.g. Instrument objects with Play methods that take a MIDI note and duration as parameters). These "patches" would be built on top of the Web Audio API.
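
Roughly the shape I'm imagining - a sketch only, where every name (Instrument, buildVoice, play) is a placeholder rather than an existing API, and modern Web Audio node names are assumed:

var context = new AudioContext();

function Instrument(buildVoice) {
    this.output = context.createGain();
    this.buildVoice = buildVoice; // factory returning a source node for a given frequency
}

// Convert a MIDI note number to a frequency in Hz
function midiToFrequency(note) {
    return 440 * Math.pow(2, (note - 69) / 12);
}

Instrument.prototype.play = function (midiNote, duration) {
    var voice = this.buildVoice(midiToFrequency(midiNote));
    voice.connect(this.output);
    voice.start(context.currentTime);
    voice.stop(context.currentTime + duration);
};

// Usage: a trivial sine "patch"
var sine = new Instrument(function (frequency) {
    var osc = context.createOscillator();
    osc.frequency.value = frequency;
    return osc;
});
sine.output.connect(context.destination);
sine.play(69, 0.5); // A440 for half a second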

I'm also thinking that the plugin repo would encourage instrument/effect pull requests from developers, or perhaps create a separate project for gathering such submissions. Basically a way to have the community develop awesome web audio sounds that developers can easily include in their projects.

Thoughts?

paulirish commented 12 years ago

you should @ ping the various folks that currently have audio effect repos on github and get their brainwaves

mattdiamond commented 12 years ago

@jussi-kalliokoski @cwilso @oampo @corbanbrook any thoughts?

oampo commented 12 years ago

A few quick thoughts on this. Firstly, this sounds like a good idea - I have something very similar in Audiolet where you connect up a load of nodes in a reusable way. It's almost identical to the concept of subpatches in Max/PD, Synths in SuperCollider, etc, etc.

If possible I think the API should follow the Web Audio API, so ideally it would be possible to Effect.connect(anotherNode), do Instrument.noteOn(when), and have AudioParams which you can connect other nodes to - basically something which passes the duck test. I think this is what @mattdiamond was getting at here: http://lists.w3.org/Archives/Public/public-audio/2012JulSep/0197.html.

I also think it would be nice to try to set up a (fairly loose) standard for parameter naming and values, so that as far as possible you can use "patches" as drop-in replacements for each other. From my experience with SuperCollider this is often a bit of a nightmare: some people have pan running from -1 (left) to 1 (right), whereas others use 0 (left) to 1 (right); some Synths have a freq, others have a frequency; and so on. Obviously this is not enforceable, but it would be nice to at least have a style guide.
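
To make the duck test concrete, here's a rough sketch - the factory name and the parameter conventions are only suggestions, not an existing API:

function makeVibratoPatch(context) {
    var input = context.createGain();
    var output = context.createGain();
    var delay = context.createDelay();
    var lfo = context.createOscillator();
    var depth = context.createGain();

    delay.delayTime.value = 0.005;
    depth.gain.value = 0.002;
    lfo.frequency.value = 5;

    lfo.connect(depth);
    depth.connect(delay.delayTime); // modulating an AudioParam, exactly as with native nodes
    input.connect(delay);
    delay.connect(output);
    lfo.start();

    return {
        // style-guide candidates: always "frequency" (never "freq"),
        // always expose real AudioParams so other nodes can connect to them
        frequency: lfo.frequency,
        depth: depth.gain,
        input: input,
        connect: function (destination) { output.connect(destination); }
    };
}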

mattdiamond commented 12 years ago

I like the concept of creating duck-typed aggregate nodes containing a subsystem of internal nodes, but I have a feeling that the Web Audio API guys might insist on leaving that kind of functionality to higher-level libraries. I'm inclined to agree, but the problem is that a lot of the Web Audio nodes are already somewhat high-level (like DelayNode, Oscillator, etc.) and combining these elementary nodes with other complex aggregate nodes might be useful. The solution will probably be to create a library that completely abstracts away the low-level node graph from the user, and simply creates wrapper functions for instantiating oscillators, delays, etc. I suppose we already have several high-level audio libraries, though I feel like most of these libraries create their own graph structures/nodes and only use the web audio api for raw output. Is that still how Audiolet works?

I'm also on board with the idea of creating a plug-and-play standard for web audio patches... basically I'm trying to figure out what we need first: the standard, a high-level library, or both.

Edit: I'm assuming when you were discussing duck-typed nodes, you were talking about being able to drop in aggregate nodes alongside native nodes... I was just pointing out that I don't think that's currently possible, and I'm not sure if the spec will move in a direction that allows that.

oampo commented 12 years ago

Yeah, I agree - I'd imagine that the way to go will be to have a library which wraps all of the Web Audio nodes in something which looks pretty much identical, but which you can connect aggregate nodes to seamlessly.

Yes, currently Audiolet just uses the Web Audio API for output. When it looks like cross-browser support is on its way for the Web Audio API I'll almost certainly migrate Audiolet so it works as a kind of wrapper library like you describe. I'd be interested to try migrating a small subsection over now to see how feasible this is - if I get a free couple of days I'll have a proper look at it.

cwilso commented 12 years ago

Re: Matt's duck-typing edit two comments up - no, it's not currently possible (although you can do something somewhat similar by exposing .input and .output nodes on an aggregate node, i.e. just have extra gain nodes as connectors).

This ends up being a pretty complex design problem - because it's also related to VST plugin API, with registration etc. I'm definitely interested, but it's still a little way off before I can get to it.

mattdiamond commented 12 years ago

Thanks for the input, guys... I'm wondering if I should try to pull Chris Rogers in on this discussion, but I don't know his Github username (and I think he might be on vacation anyway).

cwilso commented 12 years ago

He's on vacation until next week.


nick-thompson commented 12 years ago

Hey all, new guy here. I wanted to drop my ideas and get involved because this is right up my alley and very similar to an idea I had not too long ago.

With the direction this is headed I think it would be worth it to give a little extra push in the direction of modern DAWs (Digital Audio Workstations) and adopt a structure where you load samples or instruments to their own module, and then route them through a channel. A channel, in this case, would essentially be a list (of arbitrary size) of filter and/or effect modules, the last of which connects out to the main output (context.destination). The idea would be that you can route multiple samples through the same channel to apply the same filter chain with minimal effort. I think this would be pretty extensible as well in that you could define a simple interface for new filter/effect modules that just expose the first and last node in the sequence for connecting to peer modules.
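
Here's a rough sketch of the routing I mean - Channel and the firstNode/lastNode module interface are made-up names:

function Channel(context) {
    this.context = context;
    this.input = context.createGain();
    this.modules = []; // ordered list of filter/effect modules
    this.input.connect(context.destination); // an empty chain passes straight through
}

// Each module only has to expose its first and last node for peer connections.
Channel.prototype.add = function (module) {
    var last = this.modules.length
        ? this.modules[this.modules.length - 1].lastNode
        : this.input;
    last.disconnect();
    last.connect(module.firstNode);
    module.lastNode.connect(this.context.destination);
    this.modules.push(module);
};

// Route any number of samples through the same filter chain:
// sampleSource.connect(channel.input);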

Doing this essentially lays the groundwork for an in-browser DAW which would be crazy. This is a very abstracted idea so there might be smaller details that I'm omitting that would make this difficult... but I wanted to throw the idea out there for discussion. What do you think?

F1LT3R commented 12 years ago

Probably worth writing a format for storing Web Audio graphs, plus JavaScript code to construct and de-construct them automatically, and declaring a default interface for treating a graph's inputs and outputs as a single unit.

cwilso commented 12 years ago

We have one; it's called Javascript. :)

Sorry, that was glib on purpose. The tough part, though, is that if you start trying to define a declarative format for Web Audio graphs, you'll end up limiting the capabilities of your format, likely to the point that it won't be anywhere near as useful as just providing a JS library to instantiate your graph.

Now, packaging inputs/outputs, I think is a good idea.

automata commented 12 years ago

We are studying ways to use Web Audio API inside @forresto's Meemoo and I'd like to share with you some thoughts that can go along with this "packaging inputs/outputs" idea.

Meemoo uses a standard way to transmit messages through window.postMessage between the iframe nodes that compose what we call a "Meemoo app". With that in mind, we are thinking about how to include an external Web app (like @mattdiamond's drone demo) as a module inside a Meemoo app, getting the buffers it generates and sending them to another module, like a low-pass filter, and then on to the audio output.

Initially, we imagined a way to intercept buffers inside the external app and send them through window.postMessage. It could create some sync problems because each iframe has its own clock, but it's worth a shot.

At the same time, we are experimenting with MIDI-like message transmission through window.postMessage. g200kg is working on WebMIDILink, and we are trying to use MIDIMessage objects based on MIDI Bridge to stay closer to the MIDI draft spec from @jussi-kalliokoski and @cwilso. It sounds like a simple way to make MIDI-compatible Web apps. Here is an example of two external modules sharing MIDI messages through window.postMessage inside Meemoo. It would be so interesting to have a standard way to also send and receive audio signals between Web Audio API apps. And maybe a JSON representation of audio patches that could be loaded inside a host app (something that reminds me of VST plugins).
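
For example, the kind of MIDI-like message passing we're playing with - the message shape and the 'synth' iframe id are purely illustrative:

// sender: a note-on for middle C at full velocity, packed like a raw MIDI message
var synthFrame = document.getElementById('synth').contentWindow;
synthFrame.postMessage({ midi: [0x90, 60, 127] }, '*');

// receiver, inside the synth iframe
window.addEventListener('message', function (event) {
    var data = event.data;
    if (data && data.midi && (data.midi[0] & 0xf0) === 0x90) {
        // note-on: midi[1] is the note number, midi[2] the velocity
        playNote(data.midi[1], data.midi[2]); // playNote is whatever the synth provides
    }
}, false);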

Hoping to contribute to this conversation by adding these random experiences.

forresto commented 12 years ago

Meemoo's focus is on a visual patching environment for the web. WebAudio synths will be able to trigger animations, all instantly visually-hackable in the browser.

As of now Meemoo modules are all iframes, but in the master branch I'm implementing "native" modules (almost nailed down) which will be able to access variables from other modules directly. WebMIDILink messages work fine between iframes, but there is no use trying to pass/sync audio buffers via postMessage, especially with my "native" node work.

I'll be looking at @cwilso's Web Audio API Playground for inspiration on how to parse my graph format. I could use some help here, so if anybody wants to bounce ideas with us, get in touch.

F1LT3R commented 12 years ago

@cwilso agreed, but I am thinking about how the effects can be maintained in a single library, i.e. upload the JSON of your graph to a server, and the effect can be built from the parsed JSON. In that sense I think a shared definition of the effects' input and output interfaces would be useful.
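
e.g. a host could rebuild an effect from uploaded JSON along these lines - the JSON shape here is made up, not a proposal:

// assumes an existing AudioContext named context
function buildEffect(context, graph) {
    var nodes = {};
    graph.nodes.forEach(function (spec) {
        var node = context[spec.create](); // e.g. "createGain", "createDelay"
        Object.keys(spec.params || {}).forEach(function (name) {
            node[name].value = spec.params[name];
        });
        nodes[spec.id] = node;
    });
    graph.connections.forEach(function (pair) {
        nodes[pair[0]].connect(nodes[pair[1]]);
    });
    // the shared interface: a single input and a single output per effect
    return { input: nodes[graph.input], output: nodes[graph.output] };
}

var slapback = buildEffect(context, {
    nodes: [
        { id: 'in',  create: 'createGain' },
        { id: 'd',   create: 'createDelay', params: { delayTime: 0.3 } },
        { id: 'out', create: 'createGain' }
    ],
    connections: [['in', 'd'], ['d', 'out'], ['in', 'out']],
    input: 'in',
    output: 'out'
});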

forresto commented 12 years ago

Read the OP again, and I think that Meemoo's future plans could fit this really well.

mattdiamond commented 12 years ago

While I like the idea of creating some kind of universal JSON patch format, I'm not sure how well this format would handle the complexities of a multi-layered patch... one that not only contains a number of native nodes, but also aggregate nodes operating via closures and dynamic processes. I can see JSON working well to encapsulate the general schema of a node graph, but as for the inner workings of custom nodes, it might get weird.

Perhaps JSON is up to the task. I'm just worried about the serializability of functions in JS.

Edit: After re-reading some earlier comments, I think I'm basically just reiterating @cwilso's concerns. Might as well just use minified javascript as the "universal format."

Edit 2: Why isn't my Markdown working? Hrm...

yroJJory commented 12 years ago

I wonder if there's any way the in-development iXMF spec might be usable or extended as part of this effort.

iXMF, which is an ongoing project of the Interactive Audio Special Interest Group (IASIG), is being designed as a means of setting up a standardized method of interconnecting interactive audio assets for use in media such as video games.

You can see the current version of the spec here:

http://www.iasig.org/wg/ixwg/

Perhaps the spec could (or already does) include the "patches" @mattdiamond is referring to?

borismus commented 12 years ago

Recently started a perhaps related project called music of touch. https://github.com/borismus/music-of-touch

The idea is that there's a messaging format between musical instrument and synthesizer. The two are strictly decoupled to support the mobile web use-case (while Web Audio isn't widely implemented). I'm using a custom transfer format between instrument and synth, but would love to consolidate around a standard.

Theodeus commented 12 years ago

I'd definitely vote for the duck-thingie, emulating the behaviour of native Web Audio nodes. I built a test app about half a year ago where I used that approach, and it just made sense. Consider this my +1.

Here's some code, effects only though, no instruments. https://github.com/Theodeus/ria/blob/master/js/AudioNodes.js

sym3tri commented 12 years ago

I'm thinking something like node.js's npm for audio modules would be amazing. Especially if the "package.json" could store all dependency and routing configuration.

paulirish commented 11 years ago

fairly relevant: http://dashersw.github.com/pedalboard.js/

mattdiamond commented 11 years ago

Very cool! Maybe we can rope @dashersw into this...

dashersw commented 11 years ago

Sure :) pedalboard.js features some abstractions around this topic, mimicking real-world guitar effects. Every "box" (pb.stomp.box subclass) includes an input buffer, an output buffer and a chain of effects. Buffers provide consistent interfaces, and enable boxes to "connect" to each other just as AudioNodes can. Parameters are adjusted by Pots, and switches are implemented as 3PDT footswitches.

I do plan to implement a package.json for managing pedal connectivity and settings, but it would be limited to a registered set of pedals, i.e., you should know how to "read" and apply the settings. Of course, for a more generalized solution, it would be better if it pointed to a compiled js along with human-readable settings - as per previous comments, there is an infinite number of ways you can combine AudioNodes. An effect package could look like:

{
    "url": "some-repo-in-github/effect.js",
    "settings": {
        "level" : 3,
        "gain" : 4
    }
}
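
A host might consume such a package along these lines - this is only a sketch, and the global effectRegistry (where each effect script would register a factory under its own url) is an assumption, not something pedalboard.js has today:

function loadScript(url, callback) {
    var script = document.createElement('script');
    script.src = url;
    script.onload = callback;
    document.head.appendChild(script);
}

function loadEffect(pkg, context, done) {
    loadScript(pkg.url, function () {
        var effect = effectRegistry[pkg.url](context);
        Object.keys(pkg.settings).forEach(function (name) {
            effect[name].value = pkg.settings[name]; // e.g. level and gain exposed as AudioParams
        });
        done(effect);
    });
}
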
forresto commented 11 years ago

Meemoo apps have a graph json format: https://gist.github.com/3707631 with src and state instead of url and settings. I think that I could make a pedalboard.js module loader pretty easily. Do you want to try patching pedalboard.js modules with Meemoo?

dashersw commented 11 years ago

@forresto haha that's very cool :) Hadn't heard about Meemoo, I'd love to see pedalboard effects in Meemoo.

ruidlopes commented 11 years ago

This is a great idea, I've been thinking about it a lot lately. But please please do follow the well-established patterns devised by VST (http://en.wikipedia.org/wiki/Virtual_Studio_Technology), as it's got 16 years of maturation in this domain.

Why not call this "Web Studio Technology", WST?

paulirish commented 11 years ago

This idea has legs.

So... who wants to take ownership of this one and sketch out a proposal?

ruidlopes commented 11 years ago

Relevant: https://dvcs.w3.org/hg/audio/raw-file/tip/midi/specification.html

mattdiamond commented 11 years ago

It looks like Paul caught me parroting my views to the Hacker News crowd. While this thread is more geared toward an open source library solution, a cloud-hosted "Web Instruments" service along the lines of Google Web Fonts would be equally awesome. Perhaps the two could complement each other in some way.

nick-thompson commented 11 years ago

I've been thinking lately of trying to get something similar to this going. Basically like the Twitter Bootstrap of web audio. The idea would be to expose a really simple API, like underscore.js, that just offers a whole bunch of functions for users to quickly get cool sounds going in their browser. So a quick example would be something like...

/* "au" here is the name space I was using in my notes on this idea, stands for AudioUtils (like AudioUtils.js) */ var coolDelayEffect = au.effects.simpleDelay(insert, arguments, here); coolDelayEffect.connect(context.destination);

And what I would strive for is to have this library return AudioNodes that have just been prepared nicely for the user. This might entail some frowned-upon javascript, though. Here's an example:

au.effects.simpleDelay = function (these, are, params) {
    var input = context.createGainNode(); // this node will be returned to the user after connections are made
    var leftDelay = context.createDelay(); // this is part of the graph that will be hidden to the user
    ....
    input.connect(leftDelay);
    ....
    var output = context.createGainNode();
    leftDelay.connect(output); // connect similar nodes in this effect's graph to the output
    ...
    input.connect = output.connect.bind(output); // bind, so the call actually connects the output node
    return input;
};

So these last two statements are the messy part. You would build this little segment of an AudioNode graph in this function call, define an input and an output node, but return just the input node. But the input node's connect() function is actually the output node's connect() function, so that when you call connect on the node returned, it preserves the integrity of the AudioNode graph built by the effect function. Does that make sense? I might have explained it poorly.

This would be the general approach to the whole library, because I wouldn't want to completely abstract away the audio API itself. I think a library that can be integrated seamlessly into someone's pre-existing web audio project would be really cool. Anyway, in the example above, you'd also have to attach a method to the input node that's returned which retrieves a list of nodes in the effect's hidden graph, in case the user wanted to run automations on any of the AudioParams therein.
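
Something as simple as this might do - feedbackGain here is a made-up node standing in for anything in the hidden graph:

input.params = {
    delayTime: leftDelay.delayTime,
    feedback: feedbackGain.gain // hypothetical hidden node
};
// then the user can automate:
// coolDelayEffect.params.delayTime.linearRampToValueAtTime(0.5, context.currentTime + 2);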

What do you guys think? Is that little bit of sticky javascript a deal-breaker? I would be happy to write a more thorough spec to better explain my ideas here as well. And this library would hopefully be community driven, so anyone can add their own effect or instrument. Maybe there would even be a little CLI tool where you can install someone's instrument patch from GitHub into the AudioUtils.js file that you're using in your project. I'm just rambling / brainstorming now...

nick-thompson commented 11 years ago

Because I can explain better with code than with words: http://jsfiddle.net/mBaG3/3/

ruidlopes commented 11 years ago

Also relevant, the Web Audio part of the Google Moog Doodle: https://code.google.com/p/bob-moog-google-doodle/

The interfaces present in the source code are artifacts of the Flash fallback we implemented, but provide good hints on implementing/interfacing full-JS audio nodes.

mattdiamond commented 11 years ago

I just came across the @wapm (Web Audio Package Manager) project, looks like it's something @jsantell just started up. This could possibly provide a really interesting infrastructure for Web Audio projects.

Edit: Looks like I provided some inspiration for the SimpleReverb module... awesome.

jsantell commented 11 years ago

@mattdiamond was gonna post here for feedback after cleaning up the service a bit in regards to a "spec". Wanted it pretty unopinionated, so it's usable without any framework for both effects and instruments. Originally approached it as a way to also include HTML/CSS and widgets so each module can optionally have a "soft synth" UI (like FruityEQ2 as an extreme case), but then how would that be managed with the package manager, should it even support that, etc. Totally will post more info in a week or so when it's more functional, for feedback :)

forresto commented 11 years ago

Here is a nice library of Web Audio effects: https://github.com/Dinahmoe/tuna And a web-based patching interface for playing with FM synthesis (and those Tuna effects): https://github.com/forresto/dataflow-webaudio

ruidlopes commented 11 years ago

My contribution to this discussion: https://github.com/ruidlopes/Rack, an extensible, modular guitar effects rack.

cristiano-belloni commented 11 years ago

I'm working on an open Web Audio API plugin host: http://kievii.net/k2h.html - communication between plugins is done with OSC. A first technical guide to plugins is here: http://bitterspring.net/blog/2013/02/17/kievii-host-plugins/ Would love to receive some feedback!

cristiano-belloni commented 11 years ago

@mattdiamond new site (hya.io) launched. Basically making a specification for an interface between an host and plugins: http://hya.io/docs.html Still work in progress - let me know if you're interested.

jsantell commented 11 years ago

About time to release this..

@nick-thompson and I have been working on a spec for making interoperable audio components (effects, tools, soft synths, source nodes), riding on top of component.io. Check out the registry site, which contains live demos so you can try before you buy, so to speak:

component.fm

We still need additional help and views on finalizing the specs for the tools and source nodes, so we just launched a discussion group for exactly that, as well as for getting started and any questions!

Additional links:

Theodeus commented 11 years ago

Sorry, I went bananas and shared thoughts on just about everything in the discussion group. Please take it all as not-so-well-thought-through suggestions offered in the spirit of excitement over your proposal. :)

jsantell commented 11 years ago

@Theodeus just saw, looks awesome :) excited to open it up to the public for feedback on a much-needed, agreeable spec for sharing what we make

randylubin commented 10 years ago

Hi!

I'm just getting into WebAudio and thought about creating a plugin along similar lines to what @mattdiamond proposed - especially easy loading / switching of instruments. My search led me to this thread...

Any update over the past 7 months?

lonce commented 10 years ago

I have created a toolset, jsaSound, for building sound models with the Web Audio API where all models export a standard user-level API for web developers. Check it out here: http://animatedsoundworks.com:8001/ It's all up on github, too.

danigb commented 8 years ago

Hi,

I'm probably late to the conversation, but I've created https://github.com/danigb/web-audio-assembler which may be relevant to your discussion (if I understood correctly).

stale[bot] commented 5 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale[bot] commented 5 years ago

This issue has been automatically closed because it has not had recent activity. Thank you for your contributions.