Quick and dirty tool to convert 4-part MIDI arrangements to Blob Opera format JSON.
Blob Opera is a "machine learning experiment by David Li in collaboration with Google Arts & Culture", allowing users to "[create their] own opera inspired song with Blob Opera - no music skills required".
It's definitely worth playing with the Blob Opera before using this tool. The musical toy can be controlled by dragging the blobs with your mouse or, if you have one connected, a MIDI input device like a keyboard or sequencer. Both of these methods have limitations. Mouse control is imprecise and only affects one voice at a time with no manual harmony. MIDI control can be used to play the blobs like an organ but cannot control individual voices as a range of absolute MIDI pitches is assigned to each blob, making complex arrangements difficult. To enable the blobs to sing arbitrary choral (SATB) arrangements I wrote a tool that converts multitrack MIDI files into the file format used by the blobs to play included example songs, and found a method to cause the blobs to load my file instead of the expected example file.
Here are some of my results (Twitter video links):
To install:

$ npm install -g blob-opera-midi
If your MIDI file is already exactly 4 tracks in SATB order:
$ blob-opera-midi song.mid
The converted song is written to `<filename>.mid.json`.
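As a rough mental model of the conversion (not the tool's actual code — the input event shape and the output field names other than `timeSeconds` are assumptions here), each of the four MIDI tracks becomes one blob part, in SATB order:

```javascript
// Assumed input shape: each MIDI track is a list of note events
// { startSeconds, pitch } (real MIDI parsing is more involved).
// Assumed output shape: { parts: [{ notes: [{ timeSeconds, midiPitch }] }] };
// the real Blob Opera schema has more fields.
function tracksToRecording(tracks) {
  return {
    parts: tracks.map((track) => ({
      notes: track.map((ev) => ({
        timeSeconds: ev.startSeconds,
        midiPitch: ev.pitch,
      })),
    })),
  };
}

// Four tracks in soprano, alto, tenor, bass order:
const recording = tracksToRecording([
  [{ startSeconds: 0, pitch: 72 }], // soprano
  [{ startSeconds: 0, pitch: 64 }], // alto
  [{ startSeconds: 0, pitch: 60 }], // tenor
  [{ startSeconds: 0, pitch: 48 }], // bass
]);
console.log(recording.parts.length); // 4
```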
If your MIDI is not in track order or you want to preview the track assignments:
$ blob-opera-midi song.mid -i

or

$ blob-opera-midi song.mid --interactive

Press esc or q to exit. Alternatively, use the ctrl-e hotkey to export and immediately exit; the converted song is written to `<filename>.mid.json`.
Other command-line flags:

- `-r` or `--random`: add slight timing drift (may provide a more naturalistic sound). By default, no drift is added.
- `-f` or `--free-pitch`: allow notes outside of the comfortable range of the blobs. By default, pitches are clamped between 48 and 70, although the actual note produced by a blob may be in a different octave depending on its range.
- `-c` or `--christmas`: make the blobs wear Santa hats (no effect when using Method 2 to load a song).
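The default pitch handling can be sketched as a simple clamp into the blobs' comfortable range (a sketch of the documented behavior, not the tool's actual code; MIDI notes 48 and 70 are C3 and A#4):

```javascript
// Default behavior: clamp MIDI pitches into [48, 70];
// --free-pitch disables this clamping.
function clampPitch(midiPitch, min = 48, max = 70) {
  return Math.min(max, Math.max(min, midiPitch));
}

console.log(clampPitch(36)); // 48 (raised to the bottom of the range)
console.log(clampPitch(60)); // 60 (already in range, unchanged)
console.log(clampPitch(72)); // 70 (lowered to the top of the range)
```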
Method 1: open the Blob Opera page, open your browser's developer tools, find the main app file (app.js), and add a breakpoint on the first line of this function:
```javascript
t.prototype.finishRecording = function() {
    if (!this.isRecording) // ADD THE BREAKPOINT HERE
        return null;
    var t = this.currentRecording;
    return function(t, e) {
        for (var n = 1 / 0, i = 0, r = t.parts; i < r.length; i++) {
            (l = r[i]).notes.length > 0 && (n = Math.min(n, l.notes[0].timeSeconds))
        }
        for (var o = e - n, a = 0, s = t.parts; a < s.length; a++)
            for (var l, u = 0, c = (l = s[a]).notes; u < c.length; u++) {
                c[u].timeSeconds += o
            }
    }(t, .2),
    this.isRecording = !1,
    this.currentRecording = null,
    t
}
```
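The anonymous function inside finishRecording shifts every note so that the earliest note across all parts starts 0.2 seconds in. A readable sketch of that same normalization (other note fields are omitted):

```javascript
// Shift all notes so that the earliest note across all parts
// starts at startSeconds (Blob Opera passes 0.2 s).
function normalizeRecording(recording, startSeconds) {
  let earliest = Infinity;
  for (const part of recording.parts) {
    if (part.notes.length > 0) {
      earliest = Math.min(earliest, part.notes[0].timeSeconds);
    }
  }
  const offset = startSeconds - earliest;
  for (const part of recording.parts) {
    for (const note of part.notes) {
      note.timeSeconds += offset;
    }
  }
  return recording;
}

// Example: two parts whose first notes start at 1.0 s and 1.5 s.
const recording = {
  parts: [
    { notes: [{ timeSeconds: 1.0 }, { timeSeconds: 2.0 }] },
    { notes: [{ timeSeconds: 1.5 }] },
  ],
};
normalizeRecording(recording, 0.2);
// The first part's first note now starts at (approximately) 0.2 s.
```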
When the breakpoint triggers, run `this.currentRecording = <contents>` in the console, where `<contents>` should be replaced by the contents of the JSON file generated previously, then resume execution.

Method 2: alternatively, add a breakpoint on the line containing `e.opera.enterPlaying(n)` and, when it triggers, run `n = <contents>`, where `<contents>` is the contents of your JSON file.

You can also save songs sideloaded with this method to a shareable URL by encoding them with `te.RecordingMessage.encode(<JSON>).finish()` at line 9428 of the main app file.
Yes.
Yes, but that's not interesting to me, and this has advantages over that approach, such as temporally consistent vocalizations without any manual programming.
Probably. Submit a GitHub issue!