Jonathhhan opened this issue 2 years ago
I guess it works now...
Without trying it, I would say we could connect it to a synth with ofxMidi or generate audio directly with ofxPd.
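Untested, but sending a note from the model output with ofxMidi could look roughly like this (port index, channel and note values are just placeholders):

```cpp
// sketch, untested: in ofApp.h
#include "ofxMidi.h"
ofxMidiOut midiOut;

// in ofApp::setup()
midiOut.openPort(0); // pick whichever port your synth listens on

// whenever the model produces a note
int channel = 1;     // MIDI channel 1-16
int pitch = 60;      // model output mapped to 0-127
int velocity = 100;  // 0-127
midiOut.sendNoteOn(channel, pitch, velocity);
// ...and later, when the note should end:
midiOut.sendNoteOff(channel, pitch, 0);
```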
@danomatika yeah, I already connected it to a MIDI out with ofxMidi. ofxPd would be nice, too. Often it sounds quite random, but I can also hear some musical structure. I have to play a bit with the training settings...
Looks like you need to track the duration for each pitch and send the noteoff individually. That might give you better rhythm and structure in the output.
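Something along these lines might work, as a sketch (the `ActiveNote` / `playNote` names are just illustrative, and `midiOut` is an `ofxMidiOut` as above):

```cpp
// inside ofApp; assumes ofxMidiOut midiOut is already set up
#include <vector>

struct ActiveNote {
    int pitch;
    float offTime; // absolute time in seconds when the noteoff is due
};
std::vector<ActiveNote> activeNotes;

// call this when the model emits (pitch, velocity, duration in seconds)
void playNote(int pitch, int velocity, float duration) {
    midiOut.sendNoteOn(1, pitch, velocity);
    activeNotes.push_back({pitch, ofGetElapsedTimef() + duration});
}

// call this from ofApp::update(): send noteoffs whose time has come
void updateNotes() {
    float now = ofGetElapsedTimef();
    for (auto it = activeNotes.begin(); it != activeNotes.end();) {
        if (now >= it->offTime) {
            midiOut.sendNoteOff(1, it->pitch, 0);
            it = activeNotes.erase(it);
        } else {
            ++it;
        }
    }
}
```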
It is interesting that the output seems to gain more structure after running for some time. But maybe that is because I choose a more or less random sequence at the beginning...
I added velocity to the model (and added the edited notebook to the example): https://github.com/Jonathhhan/ofxTensorFlow2/tree/music_generation_example/example_music_generation_2 I also wonder how to implement the Music Transformer: https://colab.research.google.com/notebooks/magenta/piano_transformer/piano_transformer.ipynb
I tried to use this model for music generation: https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/audio/music_generation.ipynb#scrollTo=1mil8ZyJNe1w
It is still very rough, and I am not sure if everything works as expected. Maybe you can find something wrong... especially my replacement for
`tf.random.categorical(pitch_logits, num_samples=1)`
could be wrong. The model is very small, so it is included in the example: https://github.com/Jonathhhan/ofxTensorFlow2/tree/music_generation_example/example_music_generation
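For reference, this is roughly what `tf.random.categorical(pitch_logits, num_samples=1)` does for a single row, sketched in plain C++ (assuming the pitch logits come back from the model as a `std::vector<float>`):

```cpp
#include <vector>
#include <random>
#include <algorithm>
#include <cmath>

// Sample one index from unnormalized logits, similar in spirit to
// tf.random.categorical(logits, num_samples=1) for a single row.
int sampleCategorical(const std::vector<float>& logits, std::mt19937& rng) {
    // subtract the max for numerical stability before exponentiating
    float maxLogit = *std::max_element(logits.begin(), logits.end());
    std::vector<float> weights(logits.size());
    for (size_t i = 0; i < logits.size(); ++i) {
        weights[i] = std::exp(logits[i] - maxLogit);
    }
    // std::discrete_distribution normalizes the weights internally
    std::discrete_distribution<int> dist(weights.begin(), weights.end());
    return dist(rng);
}
```

If the notebook's temperature parameter is used, the logits would be divided by the temperature before this sampling step.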