Closed: agreyfield91 closed this issue 5 years ago
I think the most complete/accurate list right now is in the protobuf itself: https://github.com/tensorflow/magenta-js/blob/master/music/src/protobuf/proto.d.ts
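To illustrate the shape that proto.d.ts describes, here is a minimal Python stand-in (a sketch only, not the real library class): the fields `notes`, `totalTime`, `tempos`, and `timeSignatures` come from the protobuf definition, while the dataclasses themselves are invented for this example.

```python
from dataclasses import dataclass, field
from typing import List

# Minimal sketch of the NoteSequence shape from proto.d.ts.
# Field names follow the protobuf; these classes are illustration only.

@dataclass
class Note:
    pitch: int        # MIDI pitch number
    startTime: float  # seconds
    endTime: float    # seconds
    velocity: int

@dataclass
class NoteSequence:
    notes: List[Note] = field(default_factory=list)
    totalTime: float = 0.0
    tempos: list = field(default_factory=list)          # entries carry a qpm
    timeSignatures: list = field(default_factory=list)  # numerator/denominator

# A transcription model fills in notes and totalTime; tempos and
# timeSignatures may stay empty if the model does not infer them.
seq = NoteSequence(
    notes=[Note(60, 0.0, 0.5, 80), Note(64, 0.5, 1.0, 80)],
    totalTime=1.0,
)
print(len(seq.notes), seq.totalTime)  # -> 2 1.0
```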
Thank you very much! I tried experimenting with the properties of a NoteSequence created from OnsetsAndFrames() audio transcription, but only .totalTime returned a value other than nothing or an empty array. Are there currently any methods/models to estimate some of these properties from a transcribed NoteSequence?
What kind of properties are you looking for? I think Onsets and Frames returns a non-quantized sequence, so it should have a startTime/endTime for every note. If you need to quantize it, there's a helper for that too.
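For a sense of what quantizing does, here is a hedged sketch of the underlying arithmetic: map each note's start/end time onto a grid of `steps_per_quarter` steps per quarter note at a given qpm. The helper function below is my own illustration, not the actual Magenta implementation.

```python
# Sketch of the time-to-step mapping used when quantizing a NoteSequence.
# This standalone function only illustrates the arithmetic; use the
# library's own quantization helper in real code.

def quantize_time(time_sec: float, qpm: float, steps_per_quarter: int) -> int:
    # steps per second = steps per quarter * quarters per second
    steps_per_second = steps_per_quarter * qpm / 60.0
    # Snap to the nearest step on the grid.
    return round(time_sec * steps_per_second)

# At 120 qpm with 4 steps per quarter, one step lasts 0.125 s,
# so a note starting at 0.5 s lands on step 4.
print(quantize_time(0.5, qpm=120, steps_per_quarter=4))  # -> 4
```

Quantizing gives you step indices relative to an assumed qpm, which is what most of the downstream models expect, but it does not by itself discover the tempo or time signature.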
I would ideally be able to get rhythm information such as qpm or time signature from the transcribed sequence. Would quantizing the sequence allow me to access more information?
We don't actually infer qpm or time signature with Onsets and Frames, since we don't have them in our ground-truth data.
Note that there are some off-the-shelf methods for estimating tempo, however. For example: https://librosa.github.io/librosa/generated/librosa.beat.tempo.html
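librosa.beat.tempo works on the raw audio signal. If you only have the transcribed note list, a crude stand-in (my own sketch, not a Magenta or librosa API) is to take the median inter-onset interval of the note start times and read it as one beat:

```python
import statistics

# Rough qpm guess from note start times: assume the median gap between
# consecutive onsets is one beat. Very naive; real tempo estimators
# (e.g. librosa.beat.tempo on the audio) are far more robust.

def rough_qpm_from_onsets(start_times):
    onsets = sorted(set(start_times))
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]
    if not intervals:
        return None  # need at least two distinct onsets
    return 60.0 / statistics.median(intervals)

# Notes starting every 0.5 s suggest 120 qpm.
print(rough_qpm_from_onsets([0.0, 0.5, 1.0, 1.5]))  # -> 120.0
```

This will be wrong whenever the notes are not mostly on-beat (syncopation, sustained chords, rubato), so treat it as a starting guess at best.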
Looking at the docs, I can't find a way to access the properties of a NoteSequence, like time signature or bpm. Is there a way to access these, or any other properties? Sorry if this is the wrong place to ask.
Thank you!