swesterfeld / spectmorph

SpectMorph: spectral audio morphing
http://www.spectmorph.org
GNU Lesser General Public License v2.1

Audio file support / DAW Integration #3

Closed 5 years ago

dilom commented 6 years ago

Is there a way to use .wav files as the instrument source? On the same note, it would be great to be able to select channels from a DAW like Ardour as instrument sources, even if only via JACK. Any plans to incorporate those features in the future? This would make this plugin a must for all Linux musicians!

swesterfeld commented 6 years ago

Yes, it would be cool to have one or both, but neither is a "small" addition.

(1) using wav files

The only way to use .wav files right now is to create an instrument from them first, and use that. It's not very easy to do. Basically you need to use the command line tools (smenc, smwavset, smtool, ... or the meta script sminstbuilder) to create an smset file, which contains multiple .sm files, which you in turn build from .wav files. Since you typically want loops, and since setting loop points is best done in a UI, you also need sminspector to set the loop type and loop points. Additional steps, like correcting the tuning of the material or normalizing the volume, also need to be done.

All in all, that is not very easy and requires more knowledge than we can expect the average user to have. So it would indeed be nice to have a WavSource (or similar) that can be used in morph plans. It would have to provide a UI at least for loop points, but maybe also for tuning correction, volume normalization, using different samples for different midi notes and so forth.

As an alternative, we could make instrument building easier, so that normal users could create smset files without too much knowledge of the internals.

(2) DAW integration

SpectMorph currently requires you to convert each input file to an .sm file before using it. This takes quite a lot of CPU time, but only needs to be done once. If we allowed realtime input into SpectMorph, for instance from somebody singing into a microphone, or from some channel in a DAW, we would need to analyze the audio on the fly. It would be cool to be able to do that, but it would probably require rewriting part of the analysis code for realtime use, and even then it would be CPU intensive. Also note that SpectMorph requires single notes, so if your audio material has chords or even slightly overlapping notes, it will probably cause problems.

Also, stuff like auto-tune (if required) would then have to be done in realtime.

swesterfeld commented 6 years ago

Starting with SpectMorph 0.4.0, I've tried to make sminstbuilder - the tool I use to build instrument files - somewhat easier to use, partly by simplifying the tools themselves, and especially by providing an example instrument here:

https://github.com/swesterfeld/spectmorph-trumpet

Custom instrument building could still be better supported, but this at least should get you started.

swesterfeld commented 5 years ago

SpectMorph 0.5.0 ships with a new graphical instrument editor (YouTube: https://youtu.be/JlugWYPDp84). This means you can now load & morph your own samples, which implements (1), so I'm closing this bug.

As already said, direct audio input streams (2) are probably not possible.

codingisnuanced commented 2 years ago

Also note that SpectMorph requires single notes, so if your audio material has chords or even slightly overlapping notes, it would probably cause problems.

@swesterfeld What do you mean by this exactly? I'm assuming a wav file whose fundamental frequency changes will be "hard" (whatever that means) to analyze. How difficult would it be to implement analysis over small blocks of a wave file to support evolving sounds? Maybe that's what is done already, but do tell. Also, how is "audio material that has chords" really different from material that is characteristic, i.e. has quite the timbre? After all, SM claims to be able to blend timbres together.

swesterfeld commented 2 years ago

How difficult would it be to implement analysis over small blocks of a wave file to support changing evolving sounds?

SpectMorph already uses small blocks to support evolving sounds. Small changes to the fundamental frequency (like vibrato) are perfectly ok, and SpectMorph will track and morph them. So morphing a sound with vibrato and a sound without vibrato will result in a sound with less vibrato.
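To make the vibrato point concrete, here is a small standalone sketch (plain NumPy, not SpectMorph's actual code) showing that interpolating the partial-frequency trajectories of a vibrato sound and a flat sound halves the vibrato depth; the 50-cent depth and 5 Hz rate are made-up illustration values:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)              # one second of time points

# Sound A: a 440 Hz tone with +/-50 cent vibrato at 5 Hz
cents_a = 50.0 * np.sin(2 * np.pi * 5 * t)
f_a = 440.0 * 2.0 ** (cents_a / 1200.0)

# Sound B: a steady 440 Hz tone (no vibrato)
f_b = np.full_like(t, 440.0)

# 50% morph: interpolate the frequencies in log space (geometric mean),
# since pitch is perceived logarithmically
morph = np.sqrt(f_a * f_b)

# The morph still has vibrato, but only half the depth (+/-25 cents)
depth_cents = 1200.0 * np.log2(morph.max() / morph.min()) / 2.0
print(round(depth_cents, 1))                 # → 25.0
```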

If we wanted to support bigger changes in fundamental frequency, it would probably be necessary to use fundamental frequency detection either before SM analysis or after it. But the question is how useful it would be to morph a sound that jumps up two semitones with one that remains at the same level. Should a 50% morph result in a sound that jumps up one semitone? Then most morph positions would produce some non-integer interval, which doesn't sound well tuned to our ears.
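In numbers (an illustrative sketch, interpolating in log-frequency since pitch is perceived logarithmically; the 30% morph position is an arbitrary example):

```python
import math

f0 = 440.0                      # both sounds start at A4
f_a = f0 * 2 ** (2 / 12)        # sound A after its two-semitone jump
f_b = f0                        # sound B remains at the same level

# 50% morph: geometric mean of the two frequencies,
# i.e. the midpoint in pitch
morph_50 = math.sqrt(f_a * f_b)
print(round(12 * math.log2(morph_50 / f0), 3))   # → 1.0 semitone up

# A 30% morph toward sound A lands at a non-integer interval
morph_30 = f_a ** 0.3 * f_b ** 0.7
print(round(12 * math.log2(morph_30 / f0), 3))   # → 0.6 semitones, out of tune
```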

"audio material that has chords"

The analysis algorithm looks for peaks in the spectrum. If there is enough space between them, this works well. So if you have peaks in the spectrum at 440 Hz, 2*440, 3*440, ... there is some space between the peaks, and the algorithm can detect and track them. But if you have an A minor chord, you will have peaks at 440, 2*440, 3*440... 523, 2*523, 3*523... 659, 2*659, 3*659...

As you can see, the spectrum is much more cluttered in this case. You could still get good results by making the analysis window longer, but then the time resolution would be diminished. Also, morphing maps frequencies of one input sound to frequencies of another input sound, which would be problematic for chords because there is no intuitive mapping (unlike for simple sounds like classical instruments or the human voice, which work well in SpectMorph).
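The crowding is easy to reproduce with a few lines of arithmetic (an illustrative sketch, not SpectMorph's actual analysis; equal-tempered pitches for A4, C5 and E5 are assumed):

```python
def partials(f0, n=5):
    """First n harmonic partial frequencies of a tone at f0 Hz."""
    return [f0 * k for k in range(1, n + 1)]

def min_peak_spacing(freqs):
    """Smallest gap between neighbouring spectral peaks."""
    s = sorted(freqs)
    return min(b - a for a, b in zip(s, s[1:]))

# Single A4 note: peaks at 440, 880, 1320, ... evenly spaced
single = partials(440.0)
print(min_peak_spacing(single))           # → 440.0 Hz between peaks

# A minor chord (A4, C5, E5): the three partial sets interleave;
# 2 * 659.26 = 1318.52 lands right next to 3 * 440 = 1320
chord = partials(440.0) + partials(523.25) + partials(659.26)
print(round(min_peak_spacing(chord), 2))  # → 1.48 Hz between peaks
```

With the chord, some neighbouring peaks sit only about 1.5 Hz apart, so resolving them would need a much longer analysis window at the cost of time resolution, as described above.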