petersalomonsen / javascriptmusic

A.K.A. WebAssembly Music. Live coding music and synthesis in Javascript / AssemblyScript (WebAssembly)
https://petersalomonsen.com
GNU General Public License v3.0

Feedback, features propositions, more questions (time, duration, seeking, audio samples, etc) #6


Catsvilles commented 4 years ago

Hi, I know, I know, last time I was after a node.js version, but that was before I realized that WebAssembly actually does everything I would ever need, and we can render and play audio server-side in node.js+sox the same as in the browser. WASM should truly be pronounced as AWESOME! :) I've spent the last few days creating and live coding music in the browser, and I have a few ideas, feature requests, and offers to help:

  1. Tracking current time while playing the song, calculating full song duration, seeking: For the first one I know there are already a few attempts to track time with logCurrentSongTime() in pattern_tools.js, but unfortunately I could not hack it to work well. I found another way that works well: tracking currentTime of global.audioworkletnode.context (see the sketch after this list). In my experience developers use setInterval() a lot when making music players with JS, but I'm sure there are better ways to dynamically update and log the current time while playing. I would dig more into the logCurrentSongTime() function and check how Tone.js does things; they have a Transport class with a .scheduleRepeat() function, and from what I know they use the sample-accurate WebAudio clock, not the JS one. More about this: here. In any case, getting the current time while playing should be fairly easy to implement, but getting the full duration and seeking is where I'm limited in ideas. I just believe it would be a cool feature and a good user experience to get the full song duration after we evaluate the code; then we could have a simple horizontal slider which scrolls while the song is playing, and we could update the current position. This would be a cool feature, so we don't have to wait for one part to play through when we actually want to hear the next patterns. What do you think? With your blessing and guidance I'm ready to get on this one and submit a PR as soon as I hack it together. You could just give me your thoughts and the theory behind it, and I will do the coding. :)

  2. Audio samples. Okay, this one is a straight feature request. I remember you mentioned something about this in our previous issue discussion, and I'm sure you are already planning it in any case; I just want to mention that it would be cool to have the freedom of adding custom audio samples for cool kicks, drums, atmos, percussion, etc. It would also be cool if this worked the same as the whole AssemblyScript synth, both in the browser and in node.js+sox for quick audio rendering, so I guess we would have to implement a custom AudioBuffer/AudioBufferSource in TypeScript, without using the browser's AudioContext? Once we have an AudioBufferSource it should be fairly easy to implement something like a Sampler, allowing the user to play multiple samples at predefined pitches. I'm ready to help with this one too, let me know what you think!

  3. Now, a bit of feedback from a user who tried to create some music in the browser... Mixes... oh, I know JS fairly well, but I've never touched TypeScript before, and overall doing DSP stuff and building instruments so you can later code patterns with them is quite exhausting for an unprepared user :D I guess the current system has quite a high entry threshold for newcomers. Maybe it would be possible to implement some kind of presets of ready-made instruments/effects like other live coding environments do (read: Sonic Pi)? I know, I'm probably in over my head here, sorry, just sharing my thoughts. :)

  4. I decided to forget about other ways of live coding music, like on a server with node.js, and stick with the AssemblyScript implementation you proposed, but I'm still thinking it would be cool to have some freedom of using other synthesizers with the current sequencer and interface. For example, there is a WASM version of Yoshimi; I believe we could use it too, for composing in the browser and rendering the audio on the server with sox. From what I understand the current sequencer comes out of your 4klang experience, so would it work with other synths, I mean, would it be possible at all? What do you think?
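Here's the rough sketch I mentioned in point 1 (global.audioworkletnode is the node created by the live coding environment; the 250 ms interval and the console output are just placeholders):

// rough sketch: poll the WebAudio clock while the song is playing
const audioCtx = global.audioworkletnode.context;
const timeDisplayInterval = setInterval(() => {
  // currentTime is the sample-accurate clock of the AudioContext, in seconds
  console.log('current song time:', audioCtx.currentTime.toFixed(2));
}, 250);

// later, when playback stops:
// clearInterval(timeDisplayInterval);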

Thanks for reading, sorry for the wall of text, I hope I'm not being too annoying here! One more thing, you should finally name your project, so people can be like:

insertnamehere is Awesome!!! Please continue developing and improving it!

petersalomonsen commented 4 years ago

Thanks for the nice feedback :-)

  1. The best source of time is here: https://github.com/petersalomonsen/javascriptmusic/blob/master/wasmaudioworklet/synth1/assembly/index.ts#L46 The "tick" is used by the player to look up which pattern to play and the position in the pattern.

The sequencer timing happens inside the WebAssembly synth based on actual samples rendered, unlike my experiments with nodejs, where I use setTimeout for scheduling when to play notes and send midi signals to the midi synth.

If you want to have a go at this, you'd have to expose the tick through the AudioWorkletProcessor: send it through the message port to the AudioWorkletNode, and then it can be used from a frontend JS API.
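Something roughly like this (just a sketch; wasmInstance / getTick and the message shape are placeholders for however the module actually exposes the tick):

// in the AudioWorkletProcessor's process() callback: post the current tick
// to the main thread, e.g. a few times per second
this.port.postMessage({ currentTick: this.wasmInstance.exports.getTick() });

// on the main thread, listen on the AudioWorkletNode's message port
audioworkletnode.port.onmessage = (event) => {
  if (event.data.currentTick !== undefined) {
    updateTimeDisplay(event.data.currentTick); // hypothetical UI callback
  }
};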

  2. Audio samples are on my todo list. I'd like to record vocals and acoustic instruments. My plan / idea is to have uncompressed sample data in the WebAssembly memory. That way it will be easy for a "Sampler" instrument to play back sample data, and it can be used in combination with other effects in the synth such as reverb, and even combined with synthesized instruments.

So this would have to be done in AssemblyScript, and there would have to be interfaces for filling memory with sample data. Probably also some functions in JavaScript to decode/decompress audio formats when importing samples, and functions to store sample data when not in use (e.g. to IndexedDB). I'm also thinking of this in combination with my plan to implement a Git client using the wasm-git project, where there's also a virtual filesystem backed by IndexedDB.
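Just to sketch the idea in AssemblyScript (class and field names here are only for illustration, nothing of this exists in the synth yet):

class Sampler {
  sampleData: StaticArray<f32>; // uncompressed mono sample data held in wasm memory
  position: f32 = 0;
  playbackRate: f32 = 1.0;      // 1.0 = original pitch, 2.0 = one octave up
  playing: bool = false;

  constructor(sampleData: StaticArray<f32>) {
    this.sampleData = sampleData;
  }

  start(playbackRate: f32): void {
    this.position = 0;
    this.playbackRate = playbackRate;
    this.playing = true;
  }

  // return the next sample value, to be mixed into the output signal
  next(): f32 {
    if (!this.playing) return 0;
    const index: i32 = <i32>this.position;
    if (index >= this.sampleData.length - 1) {
      this.playing = false;
      return 0;
    }
    // linear interpolation between neighbouring sample frames
    const frac: f32 = this.position - <f32>index;
    const current: f32 = this.sampleData[index];
    const nextSample: f32 = this.sampleData[index + 1];
    this.position += this.playbackRate;
    return current + (nextSample - current) * frac;
  }
}

mixernext could then add the sampler output into the left/right signals together with the synthesized instruments and effects.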

  3. Yeah, I want to simplify the instrument creation part, possibly by making simple macros that translate easily to AssemblyScript sources, close to what 4klang does. Here's a script for a 4klang instrument: https://github.com/petersalomonsen/javascriptmusic/blob/master/4klang/instruments/BA_SawBass.inc

And you can see more examples of how to generate sounds here:

https://www.youtube.com/watch?v=wP__g_9FT4M https://www.youtube.com/watch?v=1nEcbAgRPtc&t=29s (Amiga klang)

Also, yes, I want more presets. I have some instruments in the sources already, but I think it would be easier with 4klang-style macros, and it would even be possible to import instruments from 4klang.

  4. Thanks for the link to the WASM port of Yoshimi. I did use Yoshimi (ZynAddSubFX) for my nodejs experiments. I actually think the 4klang sequencer is not fit for use with Yoshimi; it's better with a MIDI sequencer, so I'd like to port my nodejs MIDI sequencer stuff to the web. Will definitely have a go at this!

Naming... yes ... not sure what to call it yet.. Maybe something in the direction of w-awesome, like pronouncing WASM :)

Catsvilles commented 4 years ago

@petersalomonsen thank you for the reply! :)

Yeah, I want to simplify the instrument creation part, possibly by making simple macros that translate easily to AssemblyScript sources

My 5 cents here: since the project is targeting JS devs, I guess it makes sense to make it in a JSON format, so users can easily write and save .json files/patches. That should be good UX :)
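Just to illustrate what I mean, a patch could be something as simple as this (all field names here are completely made up):

// purely illustrative sketch of a JSON-style instrument patch (made-up fields)
const sawBassPatch = {
  name: 'sawbass',
  oscillators: [{ waveform: 'saw', detune: 0 }],
  envelope: { attack: 0.01, decay: 0.2, sustain: 0.5, release: 0.3 },
  effects: ['lowpass', 'reverb']
};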

At the moment I cannot decide which I would like to start working on first: 1. time, duration, scrolling, or 4. adding and making music with Yoshimi, asap :) I will probably work on both here and there, and we'll see where it takes me.

so I'd like to port my nodejs MIDI sequencer stuff to the web. Will definitely have a go at this!

Do you have any thoughts on the timeline for when you would be working on this? Also, I guess it makes sense to use something like https://github.com/jazz-soft/JZZ as they claim to support both node.js and the browser?

petersalomonsen commented 4 years ago

You just put Yoshimi on top of my list :) I had to test it yesterday. Managed to build it, and will start looking into integrating it. Those sounds are just amazing!

Catsvilles commented 4 years ago

@petersalomonsen Hah! I'm in the same mood, playing with it right now. These pads sound amazing, perfect for my darkish ambient things; I don't understand why it isn't much more popular! It's not a problem to play it in the browser with the JZZ midi library, but I cannot make it work and output audio in node.js, following your example. I understand now that I could just use the provided glue code and abstract away AudioWorklet somehow to pipe it to SOX, but meh, I cannot wrap my head around it; I've never worked with .wasm in node.js before and need your help with this! I really hope you will show some examples of how to pipe it to SOX! :)

Catsvilles commented 4 years ago

Interestingly, I was also able to play the AudioWorklet browser version directly from the node.js env by sending midi directly to the browser. The JZZ library does provide what it promises; it works the same both in the browser and in node.js! Just, from what I've noticed overall these days, there is still a big performance difference between playing audio from node.js piped to SOX and the AudioWorklet: SOX loads my CPU at 20%-25% while the AudioWorklet version is at 45%-50%.

petersalomonsen commented 4 years ago

I'm looking into the pure browser-based approach. Managed to control Yoshimi from javascript here (click the play song button):

https://petersalomonsen.github.io/yoshimi/wam/dist/

This is a song I wrote in the nodejs setup, now translated to the web, so I think this will work fine with the live coding environment. I think for the web it's probably better to export wav directly from the web page instead of using sox.
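For reference, exporting wav from the page boils down to something like this (a generic 16-bit PCM encoder sketch, not the actual implementation in this project):

// sketch: encode two Float32Array channels (e.g. from an OfflineAudioContext render)
// into a 16-bit stereo PCM WAV blob
function encodeWav(left, right, sampleRate) {
  const numFrames = left.length;
  const buffer = new ArrayBuffer(44 + numFrames * 4); // 44 byte header + 2 ch * 16 bit
  const view = new DataView(buffer);
  const writeString = (offset, str) => {
    for (let i = 0; i < str.length; i++) view.setUint8(offset + i, str.charCodeAt(i));
  };
  writeString(0, 'RIFF');
  view.setUint32(4, 36 + numFrames * 4, true);
  writeString(8, 'WAVE');
  writeString(12, 'fmt ');
  view.setUint32(16, 16, true);             // fmt chunk size
  view.setUint16(20, 1, true);              // PCM format
  view.setUint16(22, 2, true);              // 2 channels
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * 4, true); // byte rate
  view.setUint16(32, 4, true);              // block align
  view.setUint16(34, 16, true);             // bits per sample
  writeString(36, 'data');
  view.setUint32(40, numFrames * 4, true);
  let offset = 44;
  for (let i = 0; i < numFrames; i++) {
    view.setInt16(offset, Math.max(-1, Math.min(1, left[i])) * 0x7fff, true);
    view.setInt16(offset + 2, Math.max(-1, Math.min(1, right[i])) * 0x7fff, true);
    offset += 4;
  }
  return new Blob([view], { type: 'audio/wav' });
}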

petersalomonsen commented 4 years ago

Yoshimi support is here: https://github.com/petersalomonsen/javascriptmusic/pull/8

Catsvilles commented 4 years ago

Cool!! I was playing around with it and it sounds very good, but it's kinda hard to grasp how the sequencer works. For example, in the provided song example I could not find a way to hold the pad notes longer than they are now. How could I hold a note for 5 or 15 seconds? I tried to adjust steps per beat, but if the number is bigger the notes actually play even faster. I even set the number to negative -4 and only then did I get my long, ambient pads; too bad they seem not to release and play infinitely :D

petersalomonsen commented 4 years ago

Regarding note lengths, consider this example:

// SONGMODE=YOSHIMI
setBPM(80);

const lead = createTrack(
                  5, // midi channel
                  2 // default duration in steps per beat ( lower is longer ) 
            );

await lead.steps(
  4, // track resolution in steps per beat
  [
    d5(1/8),,,, // first parameter of a note is the note duration
    f5(1/4),,,,
    a5(1),,,,
    f5,,,, // no duration parameter given, so the track's default duration applies
]);
loopHere();

petersalomonsen commented 4 years ago

demo video: https://youtu.be/HH92wXnP4WU

Catsvilles commented 4 years ago

@petersalomonsen Hey, it's working very well, it's already possible to compose whole songs! Thanks for your previous reply and for explaining things! I was following the whole progress; it looks like what's left now is to implement .wav rendering and adding audio samples, or merging Yoshimi with the AssemblyScript version for those.

Meanwhile, I was working on time, duration, and seeking, and have had a bit of success with the first two in AssemblyScript mode, but I guess I'll wait until you finalize the whole version before I proceed with those. I noticed that logCurrentSongTime() actually calculates the whole duration of the current song, so I guess something like this is needed for OfflineAudioContext in the Yoshimi version. I still have no idea how to proceed with seeking, but I'm really excited to get to it; it's my first experience participating in/building this kind of music software, so I will be happy to get it done.

petersalomonsen commented 4 years ago

The xml file is for controlling the whole Yoshimi synth. If you load multiple instruments, you'll see that they all appear in the xml file.

For OfflineAudioContext in the Yoshimi context there's no worry about the timing, as the sequencer player is embedded in the audioworklet. So you just start it and it will trigger the midi events from within the audioworklet processor.
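Roughly like this (the module path and processor name below are just placeholders, not the actual ones in the project):

// render the song offline; the processor with the embedded sequencer produces
// the midi-triggered audio as part of the rendering
const durationSeconds = 180; // placeholder song length
const sampleRate = 44100;
const offlineCtx = new OfflineAudioContext(2, sampleRate * durationSeconds, sampleRate);
await offlineCtx.audioWorklet.addModule('yoshimi-audioworkletprocessor.js'); // placeholder path
const workletNode = new AudioWorkletNode(offlineCtx, 'yoshimi-processor',    // placeholder name
    { outputChannelCount: [2] });
workletNode.connect(offlineCtx.destination);
const renderedBuffer = await offlineCtx.startRendering();
// renderedBuffer.getChannelData(0) / getChannelData(1) can then be encoded to wav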

petersalomonsen commented 4 years ago

a little demo:

https://petersalomonsen.github.io/javascriptmusic/wasmaudioworklet/?gist=29d0b6f8e8d3cf3267ae4b7b4ffa49bf

petersalomonsen commented 4 years ago

export audio to wav, work in progress:

https://github.com/petersalomonsen/javascriptmusic/pull/15

I've already made an export that was sent to Spotify:

https://twitter.com/salomonsen_p/status/1261404943484772353?s=20

petersalomonsen commented 4 years ago

Export to wav is implemented now.

vitalspace commented 4 years ago

Hello Peter, excellent project!

I was wondering if you have plans to write documentation for your project? I have spent the last 3 days trying to understand how your code works, and it is still difficult for me to understand how the sound is generated through AssemblyScript. I'm so used to creating synthesizers quickly in Tone.js, but I guess I'm pretty noob for your code >:'c.

petersalomonsen commented 4 years ago

thanks @HadrienRivere :-)

The plan is to add more documentation and tutorials, but it's great to get questions so that I get an idea about what is unclear to others.

I don't know Tone.js very well, but after checking it quickly I think the main difference is that in Tone.js you declare the properties of your synth (envelopes, waveforms, etc.), while in my project you calculate every signal value that is sent to the audio output. So it's more low-level, but it also gives much more control.

I've tried to make a minimal example below (and more explanation follows after it). Try pasting in the following sources:

sequencer pane (editor to the left) - javascript:

// tempo beats per minute
global.bpm =  120;

// pattern size exponent (2^4 = 16)
global.pattern_size_shift = 4;

// register an instrument
addInstrument('sinelead', {type: 'note'});

playPatterns({
  // create a pattern with notes to play
  sinelead: pp(1, [c5,d5,e5,f5])
});

synth pane (editor to the right) - AssemblyScript:

import {notefreq, SineOscillator} from './globalimports';

// pattern size exponent (2^4 = 16 steps)
export const PATTERN_SIZE_SHIFT = 4;

// beats per pattern exponent (2^2 = 4 steps per beat)
export const BEATS_PER_PATTERN_SHIFT = 2;

// create a simple oscillator ( sine wave )
let osc: SineOscillator = new SineOscillator();

/**
 * callback from the sequencer, whenever there's a new note to be played
 */
export function setChannelValue(channel: usize, value: f32): void {
  // set the frequency of the oscillator based on the incoming note
  osc.frequency = notefreq(value);
}

/**
 * callback for each sample frame to be rendered
 * this is where the actual sound is generated
 */
export function mixernext(leftSampleBufferPtr: usize, rightSampleBufferPtr: usize): void {  
  // get the next value from the oscillator
  let signal = osc.next();
  // store it in the left channel
  store<f32>(leftSampleBufferPtr, signal);
  // store it in the right channel
  store<f32>(rightSampleBufferPtr, signal);    
}

The key to sound generation is the mixernext function, which is called by the audio renderer for every sample to be sent to the audio output. It expects you to store the signal at the provided addresses for the left and right samples. As you can see, I take the next signal value from the SineOscillator, which calculates its value from a math sine function. Its frequency is set by the setChannelValue function, which is called by the sequencer whenever there's a new note to be played.

In my more advanced examples I mix more instruments, and also make them richer by mixing (adding) waveforms and applying echo and reverb. So what is different from Tone.js, I guess, is that in this project you set up the math and calculate every sample for the audio output, rather than just declaring the properties of your sound. Still, I think it can be done quite simply this way, giving you much more control, and I'm also working on reducing the amount of code required.

Hope this helps. Let me know if I pointed you in the right/wrong direction here :-)

Catsvilles commented 4 years ago

@HadrienRivere Just my 2 cents here, since I'm following Peter's project and the whole JS audio topic. This is just the experience of a JS guy starting with this repo, who previously only had experience with Tone.js / Web Audio and not much low-level DSP stuff.

I think when starting to make music with "javascriptmusic" (we still need a good official name for this :D) you should NOT try doing things like you did with Web Audio/Tone.js, but think in terms of classical DSP and how music instruments/plugins actually work. Yeah, it also involves a bit of math, which may scare regular JS guys like me; clearly Peter has more advanced experience as an audio and software developer overall, so for him the things he does in his code are quite easy and obvious and would probably be done the same way in any other language like C/C++, Rust, etc. I think this is important to note: even though Peter uses JS/TypeScript, overall it's not bound to the browser that much, and those algorithms could be used the same way in any other language/stack, while manipulating audio with Tone.js/Web Audio is kind of limited to the way we do audio in the browser only.

I hope I did not confuse anyone; I'm quite a beginner too in this field, and it took me a while to realize how DSP and music programming should actually be done. It really helps to look at more low-level projects written in C/C++; you may not know those languages, but it helps you understand why things are done the way they are. It's also really fun seeing how, years ago, people were writing this code in complex low-level languages, while nowadays you can do the same thing in something like JS/TypeScript. Yeah, WASM is truly AWESOME! :)

petersalomonsen commented 4 years ago

@Catsvilles thanks :) And BTW: I'm using wasm-music as a working title now (it's also the name of the npm package that I use for deployment to my website: https://www.npmjs.com/package/wasm-music ). Does wasm-music sound ok? - given that it's pronounced as it should :)

Catsvilles commented 4 years ago

Does wasm-music sound ok?

Yeah, I think it's a suitable name for this project! :)

Catsvilles commented 3 years ago

@petersalomonsen Hey Peter, as promised I started putting together a list of PadSynth implementations, for inspiration I guess. :) I was pretty sure there were a few made with TypeScript, but unfortunately for some reason I could only find JavaScript ones for now. Instead of creating a new issue here I decided to go with a new repo, as I've wanted to put together a list of cool things related to Web Audio for a long time :) https://github.com/Catsvilles/awesome-web-audio-api-projects

Also, I found something in Rust, if that is of any help: https://github.com/nyanpasu64/padsynth

Actually, for the last few months I've been actively getting into SuperCollider (they even have VSTs now), but I miss doing things with JavaScript, so maybe someday I will go back to exploring this project and trying to synthesize and sequence sounds with JS and AssemblyScript :)

petersalomonsen commented 3 years ago

Thank you for this @Catsvilles . The padsynth was a really good tip, and I really don't know how I haven't heard about it before. I've obviously been using it in Yoshimi/ZynAddSubFX, but there I just used the presets.

After studying the algorithm I found that I've done some of the same things when synthesizing pads myself. The concept of spreading out the harmonics is something I've done a lot, but randomizing the phase as done in padsynth improves it a lot, and that's the part I've been missing.

One downside of the padsynth algorithm as it is described is that it requires precalculating the wavetable, which results in quite a delay when changing parameters. So I started exploring an alternative approach that doesn't require any precalculation other than the periodic wave. Instead of having the harmonic spread in the IFFT, I play the periodic wave at different speeds. As far as I can see this gives the same result; it costs a little more in real-time calculation, but the startup time is significantly reduced, and this way I can also use a much smaller FFT for the same result.
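For reference, the classic padsynth wavetable generation is roughly this (a compressed sketch of the published algorithm, using a naive inverse DFT for clarity; not my alternative approach, and a real implementation would use an IFFT and a much larger table):

// sketch of the classic padsynth idea: gaussian-spread harmonics with random phases
function padsynthWavetable(tableSize, baseFreq, sampleRate, harmonicAmps, bandwidthCents) {
  const amp = new Float64Array(tableSize / 2); // frequency-domain amplitude profile
  for (let h = 1; h <= harmonicAmps.length; h++) {
    const centerBin = (baseFreq * h) / sampleRate * tableSize;
    // bandwidth in Hz grows with the harmonic number
    const bwHz = (Math.pow(2, bandwidthCents / 1200) - 1) * baseFreq * h;
    const bwBins = bwHz / sampleRate * tableSize;
    for (let bin = 0; bin < amp.length; bin++) {
      // gaussian bump around each harmonic ("spreading out the harmonics")
      const x = (bin - centerBin) / bwBins;
      amp[bin] += harmonicAmps[h - 1] * Math.exp(-x * x);
    }
  }
  // random phase per frequency bin is the key ingredient
  const phases = Array.from(amp, () => Math.random() * 2 * Math.PI);
  // transform back to the time domain (naive inverse DFT, O(n^2), for illustration only)
  const table = new Float32Array(tableSize);
  for (let n = 0; n < tableSize; n++) {
    let sample = 0;
    for (let bin = 1; bin < amp.length; bin++) {
      sample += amp[bin] * Math.cos((2 * Math.PI * bin * n) / tableSize + phases[bin]);
    }
    table[n] = sample;
  }
  // normalize to the -1..1 range
  const max = table.reduce((m, v) => Math.max(m, Math.abs(v)), 0);
  return table.map(v => v / max);
}

// example (small table so the naive DFT stays fast):
// const wavetable = padsynthWavetable(4096, 440, 44100, [1, 0.7, 0.4, 0.2], 60);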

My first attempt is a simple organ, check out the video below. I have to play a bit more with this to see if I'm on the right track, but so far it seems ok :)

https://youtu.be/YgLuv1IKMQs