bmcfee / muda

A library for augmenting annotated audio data
ISC License

Use MUDA without audio file #38

Closed. mcartwright closed this issue 4 years ago.

mcartwright commented 7 years ago

In my current project, my jams files completely specify my audio, and at training time the audio can be synthesized from the jams file. Ideally, I'd augment these jams files with muda without having to synthesize the audio first, and then simply save the muda-augmented jams files. At training time, I'd synthesize the audio and apply the muda deformations.

Is it possible to use muda without passing the initial audio file? It seems that right now, if I don't use muda.load_jam_audio() to process my jams file (which just adds the empty history and library versions to the jam?), it errors when I call the transform method of my pipeline.

Is there a reason muda needs the audio file before actually processing the audio?
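
For reference, the coupled workflow I'm trying to avoid looks roughly like the sketch below (file names and the choice of deformer are placeholders, and the exact calls are just my reading of the current API):

import muda

# Current usage: the jam has to carry audio loaded from disk before transform()
jam = muda.load_jam_audio('example.jams', 'example.ogg')

# Any deformer or pipeline would do; PitchShift is only an illustration
pitch = muda.deformers.PitchShift(n_semitones=[-1, 1])

for i, jam_out in enumerate(pitch.transform(jam)):
    muda.save('example_{:02d}.ogg'.format(i), 'example_{:02d}.jams'.format(i), jam_out)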

bmcfee commented 7 years ago

I think I don't 100% understand your use case here. Is the problem that the audio does not live on disk?

If so, it wouldn't be hard to modify muda to support in-memory audio data that's dynamically generated. Everything after that should be pretty easy.
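
For instance, something along these lines might already get most of the way there (a rough sketch, assuming in-memory audio can be packed into the sandbox the same way load_jam_audio does it; jam_pack is public, but the _audio layout is an implementation detail):

import jams
import muda
import numpy as np

# A dynamically generated, in-memory signal (silence as a stand-in)
sr = 22050
y = np.zeros(5 * sr)

jam = jams.JAMS()
jam.file_metadata.duration = 5.0

# Assumption: pack the signal under '_audio' as a dict of y and sr,
# mirroring what load_jam_audio does internally
jam = muda.jam_pack(jam, _audio=dict(y=y, sr=sr))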

mcartwright commented 7 years ago

No, it's not that the audio does not live on disk but rather that it does not exist yet. Regardless of this point, the main thing is that I think it could be useful to decouple the transformation of the jams files from the transformation of the audio, and then be able to load the transformed jams file at a later point in time and process the audio then.

This could be useful if you are "statically" augmenting a dataset (rather than dynamically at training time), where you want to save the augmented dataset for training with another learner but don't want to save all of the processed audio files because of disk constraints.

Once these two are decoupled, it also doesn't seem necessary to provide the original audio file or signal in order to simply transform the jams.

bmcfee commented 7 years ago

Okay, thanks for clarifying. I think I understand your point now. (I think loading from in-memory audio is a good idea too though, and should be a separate issue.)

I get a little nervous about decoupling things in this way, but I do like the idea of delayed audio deformation. The current design makes this almost possible, since all transformation state is logged in the jams sandbox. We'd need something like a delayed flag on transform that skips the audio deformation step, plus a muda.rebuild function that re-applies a logged deformation history to audio.

Then you should be able to do something like the following:

>>> # Build a muda pipeline
>>> for jam_out in pipeline.transform(jam_in, delayed=True):
...     jam_out.save(SOMEWHERE)
>>> # Some time later
>>> jam_reload = jams.load(SOMEWHERE)
>>> audio_out = muda.rebuild(jam_reload, audio_in)

muda.rebuild would then need to pull the logged deformation history out of the jam's sandbox, reconstruct each deformer from its logged parameters, and re-apply just the audio stages to the supplied signal.
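
A hypothetical sketch of that helper, just to pin down the idea (this assumes history entries carry a serialized transformer plus its sampled state, and that each deformer exposes an audio step over the sandboxed buffer; none of it is final API):

import jams
import muda

def rebuild(jam, y, sr):
    # Hypothetical, not muda's API. Assumed layout: jam.sandbox.muda.history
    # holds entries with the serialized transformer and the state it was
    # sampled with; deformer.audio(mudabox, state) rewrites the '_audio'
    # buffer in place.
    mudabox = jams.Sandbox(_audio=dict(y=y, sr=sr))
    for step in jam.sandbox.muda.history:
        deformer = muda.deserialize(step['transformer'])
        deformer.audio(mudabox, step['state'])
    return mudabox._audio['y'], mudabox._audio['sr']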

Sound reasonable?

mcartwright commented 7 years ago

Yeah, that sounds perfect.

bmcfee commented 7 years ago

While reviewing #40, I realized a slight snag in this plan: the transformer in question needs an audio buffer in place to calculate noise clip durations and generate the state object.
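
As a toy illustration (not muda's code) of why the buffer has to come first: the sampled state for a background-noise deformation includes where to slice the noise clip, and that draw depends on the signal's exact duration.

import numpy as np

def sample_noise_start(noise_duration, signal_duration):
    # The excerpt has to cover the whole signal, so the latest valid start
    # point is noise_duration - signal_duration; without the audio buffer
    # we don't know signal_duration yet.
    if noise_duration < signal_duration:
        raise ValueError('noise clip is shorter than the signal')
    return np.random.uniform(0.0, noise_duration - signal_duration)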

This might require some careful thought to get right.

bmcfee commented 6 years ago

Coming back to this having implemented a prototype of the rebuild op in #62 and reviewed the current suite of deformer ops. TL;DR: I think this isn't going to work in general, but you might be able to hack around it to get something going.

As mentioned above, many of the deformers require access to the audio signal to determine their deformation states: notably, pitch-shifting requires a tuning estimate, and background-noise requires a sample-accurate duration count. These things really do need to be pre-computed so that the deformations stay a deterministic, self-contained set of functions of the state variables. I don't see a good way around that (e.g. by delayed computation).
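
For example (illustrative only, not muda's internals), a tuning estimate is a measurement of the signal itself, so it cannot be drawn as part of the deformation state without the audio in hand:

import librosa

# Illustrative: estimate the tuning offset of some audio; a pitch-shift
# deformer needs a value like this before it can fix its state.
y, sr = librosa.load(librosa.example('trumpet'))
tuning = librosa.estimate_tuning(y=y, sr=sr)  # fractional offset, in bins
print('tuning offset: {:+.3f} bins'.format(tuning))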

@mcartwright if you're still interested in this, how hacky would it be to pre-compute a dummy signal for generating the deformation jamses, and then use the rebuild functionality afterward on real data?

bmcfee commented 4 years ago

I think this is as fixed as it's going to get, having merged the replay() function.