wavesjs / waves

[deprecated]
http://wavesjs.github.io
BSD 3-Clause "New" or "Revised" License
52 stars 4 forks

I would like to volunteer to assist in this project in any way that may be helpful #17

Open mcanthony opened 9 years ago

mcanthony commented 9 years ago

First of all, great job guys. I have been monitoring this project very closely since the initial WAVE project proposal was announced, and I have been very inspired by this and other IRCAM projects. I have learned quite a bit from these works, and I feel there are many gems of audio processing and visualization hidden in the WAVE project and the other WebAudio sub-projects and experiments that have come out of IRCAM. Even though it is still early in development, this work does not seem to get the attention it deserves and is underutilized by the public. Over time I aim to remedy that by spreading the word and by producing demonstrations and tutorials showcasing the many excellent resources you have made available to us. Hopefully I will be able to convey the unparalleled quality of research, planning, and implementation that you have achieved.

I would like to begin getting fully acclimated with every facet of the project's various components as they are rolled out. I thought perhaps the best way for me to get fully acquainted would be to assist in generating documentation, as well as to review and possibly improve upon any existing documentation.

What components are currently in a usable state but still in need of documentation? I noticed the links for ui and some other docs have been broken for a while; perhaps I could help with that, as I am pretty good at writing documentation. Where would you suggest I start? Does the documentation published here reflect everything that has been written to date?

I am currently working on (just exiting the planning/research/sourcing phase) a proper visualizer for audio spectra; is this something you have already begun work on, or are planning? The approach I have chosen is to leverage WebGL/GLSL for rendering, and perhaps for some aspects of processing the audio spectra, in order to achieve a fluid pan-zoom spectrograph interface with axis markings and segment-annotation support comparable to what is offered in Sonic Visualiser. My aim is to upstream the resulting work to hoch/spiral-spectra, as previously discussed with @hoch (of the WAAX/Spiral projects and the WAA-WG). If it would be a useful addition to this project, I will implement it in accordance with the WAVE project standards and ensure it fulfills any WAVE-specific requirements.

The eventual milestone I have in mind is to leverage Halide to produce highly optimized image-processing code in NaCl/GLSL (with an asm.js fallback) for various DSP techniques that have proven useful in audio processing and segmentation, with the intent of building spectral-editing features into the visualizer that would work in conjunction with local and/or remote MIR/audio feature extraction systems. I hope to produce a workflow and results similar to those of the iZotope RX series software.

For convenience allow me to summarize with an enumeration of my inquiries:

  1. What is the status of the four primary modules of the WAVE project (UI, LFO, Loaders, Audio)?
     1-a. What is the status of the modules' respective documentation, and where might I be of assistance in this area?
  2. Is there currently a spectral audio visualizer component in the works or slated for development in the near future?
     2-a. Would this project benefit from such a component as I have described?
  3. Are there any additional goals slated for this project which have not been outlined by the initial WAVE proposal?
     3-a. How else might I be of assistance to this project's current and/or future goals?

Regards, Michael A. Casey

ouhouhsami commented 9 years ago

Hi Michael,

Thanks for your message. As you can see, there is a lot of room for improvement in our lib. You raise important points, and you could definitely work on them. We just merged the develop branch into master, and we need to work on the documentation - we will use an ES6-compliant documentation module, https://github.com/esdoc/esdoc. So, as soon as @b-ma is OK with our documentation process, we would very much appreciate PRs on the documentation side, as a way for you to understand how the lib works (point 1 in your message). Point 2 could be one of the next steps (with a lot of performance issues to deal with: SVG vs. Canvas, WebGL...). And point 3 we could discuss later, I think.
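For readers unfamiliar with ESDoc: the "documentation inside the code" workflow mentioned here relies on structured comment blocks that ESDoc parses into HTML pages. The sketch below shows the comment style; `TimeAxis` and its method are hypothetical illustrations, not actual waves components.

```javascript
/**
 * A minimal time axis that maps seconds to pixels.
 * (Hypothetical example class, not part of the waves library.)
 *
 * @example
 * const axis = new TimeAxis(100); // 100 px per second
 * axis.timeToPixel(2);            // 200
 */
class TimeAxis {
  /**
   * @param {Number} pixelsPerSecond - Horizontal scale factor.
   */
  constructor(pixelsPerSecond) {
    this.pixelsPerSecond = pixelsPerSecond;
  }

  /**
   * Convert a time in seconds to a pixel position.
   * @param {Number} time - Time in seconds.
   * @return {Number} Position in pixels.
   */
  timeToPixel(time) {
    return time * this.pixelsPerSecond;
  }
}
```

Because the tags live next to the code they describe, a documentation PR and the code it documents can be reviewed together, which is what makes this approach maintainable.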

Another important point for us is the lack of unit tests in the audio part. The unit tests and coverage are quite good for the UI part, and we need to port them to the audio part. So don't hesitate to try to run them and give us feedback where they fail.

Cheers,

Samuel

b-ma commented 9 years ago

Hey Michael,

Many thanks for your attention to the project; as @ouhouhsami said, any help would be really appreciated (especially with the docs).

Just to add some precision to the first two points:

1. Generally, we decided to use ESDoc as our documentation tool; I did a few tests and it works quite well. Writing all the doc directly inside the code looks like the most maintainable approach we have found so far (the only problem we still haven't solved is how to include the examples from the examples folder right inside the final document, but that is not a priority). Also, we will use the gh-pages branch of each module, instead of the organisation repo (which will end up being used just as a common hub).

2. We created a proof of concept for a sonogram, which can be seen here, and of course having such a component would be very much appreciated (a working solution in WebGL would be very nice imo). However, I think one of the main problems we have to solve first is dealing with all the browser inconsistencies when embedding a <canvas> into the <svg> structure with a <foreignObject> (and there are a lot of them...). Also, the place and articulation of such a component with the UI module is something that must be discussed further (plugin? external?), because much of the work that must be done in order to display the sonogram is not strictly part of the visualisation process (the example linked above uses an LFO chain inside a Shape).
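For context, the structure being discussed is roughly the following markup sketch (illustrative only, not taken from the waves codebase): the axes and annotations stay as native SVG shapes, while the pixel-heavy sonogram is drawn on a `<canvas>` nested inside the SVG tree via `<foreignObject>`. Note that the embedded canvas must carry the XHTML namespace.

```html
<svg xmlns="http://www.w3.org/2000/svg" width="800" height="200">
  <!-- axes, markers, and annotations remain native SVG shapes -->
  <line x1="0" y1="100" x2="800" y2="100" stroke="#ccc" />

  <!-- the sonogram pixels are rendered on a canvas inside a foreignObject -->
  <foreignObject x="0" y="0" width="800" height="200">
    <canvas xmlns="http://www.w3.org/1999/xhtml" width="800" height="200"></canvas>
  </foreignObject>
</svg>
```

The inconsistencies referred to above concern how browsers size, clip, and transform `<foreignObject>` content, which historically varied considerably between engines.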

Thanks !