adriankastrati opened 9 months ago
For the audio library, how about Howler.js? https://howlerjs.com
Found some other libraries we could use; the thing with them is that they have some more capabilities. Some notes on these below. I know @jespermunkeby worked with Tone.js, so we can ask him what he thought about it, but I think Tone.js or the Web Audio API is the way to go if we want to have more features:
Pizzicato.js aims to make working with the Web Audio API simpler. It wraps the API in a more convenient interface and provides useful utilities for common tasks, like applying effects and filters.
var sound = new Pizzicato.Sound({
  source: 'wave',
  options: { type: 'sine', frequency: 440 }
}, function() {
  var reverb = new Pizzicato.Effects.Reverb({
    time: 0.8,
    decay: 0.5,
    reverse: false,
    mix: 0.5
  });
  sound.addEffect(reverb);
  sound.play();
});
Tone.js is a powerful library for creating music in the browser. It's built on the Web Audio API and offers abstractions to create instruments, effects, sequencing, and more.
const synth = new Tone.Synth().toDestination();
const now = Tone.now();
synth.triggerAttackRelease("C4", "8n", now);
synth.triggerAttackRelease("E4", "8n", now + 0.5);
synth.triggerAttackRelease("G4", "8n", now + 1);
The Web Audio API is a low-level browser API for processing and synthesizing audio in web applications. It provides granular control over audio rendering, but can be complex for simple tasks.
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var oscillator = audioCtx.createOscillator();
oscillator.type = 'sine';
oscillator.frequency.setValueAtTime(440, audioCtx.currentTime);
oscillator.connect(audioCtx.destination);
oscillator.start();
With the Web Audio API we can implement some more advanced features, such as spatial sound and vibrato/tremolo, applied to, let's say, guitar and voice.
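As a rough illustration of the vibrato point above, here is a minimal sketch. All the numbers (440 Hz carrier, 6 Hz rate, 10 Hz depth) are made-up example values, and the commented Web Audio wiring is just the usual LFO-to-AudioParam pattern, not code from our repo:

```javascript
// In the Web Audio API, vibrato is typically built by connecting a slow
// LFO OscillatorNode (through a GainNode that sets the depth) to the
// carrier oscillator's frequency AudioParam, roughly:
//
//   const ctx = new AudioContext();
//   const carrier = ctx.createOscillator(); // 440 Hz sine
//   const lfo = ctx.createOscillator();     // 6 Hz
//   lfo.frequency.value = 6;
//   const depth = ctx.createGain();         // gain = 10 (Hz swing)
//   depth.gain.value = 10;
//   lfo.connect(depth).connect(carrier.frequency);
//   carrier.connect(ctx.destination);
//   lfo.start(); carrier.start();
//
// The math that graph implements, written as a plain function
// (parameter defaults are illustrative only):
function vibratoFrequency(t, f0 = 440, rate = 6, depth = 10) {
  // instantaneous frequency = carrier plus a sinusoidal offset
  return f0 + depth * Math.sin(2 * Math.PI * rate * t);
}
```

Tremolo would be the same idea with the LFO modulating a gain node's `gain` param instead of the frequency.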
Really hard for me to say
@jespermunkeby Is it not better to just send the node to one endpoint and have the backend validate it? So just a fetch with a body similar to this:
{
  node: signalNodeData,
  nodeType: "Signal"
}
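For concreteness, the proposal could look something like the sketch below. The endpoint path and the helper name are purely illustrative, not part of any real API of ours:

```javascript
// Hypothetical sketch of the single-endpoint idea: build one request shape
// that carries both the node data and its type, and let the backend validate.
function buildNodeRequest(node, nodeType) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ node, nodeType }),
  };
}

// usage (endpoint path "/nodes" is made up):
//   fetch("/nodes", buildNodeRequest(signalNodeData, "Signal"));
```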
No, I don't think so. I think that signal/node types should be frontend concepts only. If we decouple them this way, the API makes fewer assumptions and becomes much more flexible in case we want to experiment with building other products. There is more to say here too about how we handle the separation between what should be done on the client and the server. We can discuss this further in person!
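One way to picture the decoupling argument above: the frontend owns the node-type knowledge and serializes everything into an opaque payload before sending, so the backend never needs a type taxonomy. The names here ("Signal", "Merge", `toGenericPayload`) are illustrative only:

```javascript
// Hypothetical sketch: node types live entirely in the frontend.
// Each type knows how to encode itself; the API only ever sees the result.
const frontendNodeEncoders = {
  Signal: (node) => ({ kind: "Signal", data: node }),
  Merge: (node) => ({ kind: "Merge", data: node }),
};

function toGenericPayload(nodeType, node) {
  const encode = frontendNodeEncoders[nodeType];
  if (!encode) throw new Error(`unknown node type: ${nodeType}`);
  return encode(node);
}
```

Adding a new node type then only touches the frontend table, which is the flexibility being argued for.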
Great job with the audio alternatives. My experience with Web Audio was that it's a bit too low level. I used Tone.js for our original beatroot app; it was simple, but not fantastic by any means. Let's maybe create some very simple POCs with each one, of merge for example, and see which one we like best? I suspect both Tone and Pizzicato are built on top of Web Audio, so we can probably tap into Web Audio stuff if there are particular parts where we need more control.
I think this also depends on who will work with the audio stuff on the frontend mostly. Let's discuss it soon
[ ] New endpoints for split, prompt-edit and prompt-generation
[x] Job logic in frontend
[ ] merge will be done in frontend
[ ] Audio persistence
Continued on this instead: https://github.com/Skein-studio/front-end-webapp/issues/39