Music-and-Culture-Technology-Lab / omnizart

Omniscient Mozart, able to transcribe everything in the music, including vocals, drums, chords, beats, instruments, and more.
https://music-and-culture-technology-lab.github.io/omnizart-doc/
MIT License

Documentation - specify model for each function exactly #41

Open keunwoochoi opened 3 years ago

keunwoochoi commented 3 years ago

(Raised as part of the JOSS review: https://github.com/openjournals/joss-reviews/issues/3391)

For developers who are not MIR researchers, it would be much clearer if the model being used were specified in each API (e.g., https://music-and-culture-technology-lab.github.io/omnizart-doc/music/cli.html) instead of only in the main doc (https://music-and-culture-technology-lab.github.io/omnizart-doc/). That's because

> musical notes of instruments [WCS20], chord progression [CS19], drum events [WWS20], frame-level vocal melody [LS18], note-level vocal melody [HS20], and beat [CS20].

this description might not be 100% clear to non-researchers.
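
For illustration, making the model explicit per function could look something like the sketch below. The function name, signature, and docstring wording are hypothetical (loosely modeled on the linked API page), and only the citation key [WCS20] is taken from the description quoted above:

```python
# Hypothetical sketch of a per-function model note. The function name and
# signature are assumptions, not omnizart's actual documentation; [WCS20]
# is the citation key from the main doc quoted above.

def transcribe(input_audio, model_path=None, output="./"):
    """Transcribe instrument notes in the given audio file.

    Model: the multi-instrument note transcription network of [WCS20];
    see the main doc's reference list for the full citation.
    """
    ...
```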

keunwoochoi commented 3 years ago

To clarify: I just realized the models are specified in the API docs (https://music-and-culture-technology-lab.github.io/omnizart-doc/music/api.html). Maybe specifying which API is used for each command-line interface would do the job.
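
As a concrete illustration of that mapping, the Python entry point behind the `omnizart music transcribe` command appears to be the module-level `app` object shown on the linked API page. The sketch below assumes that import path and call signature, so both should be verified against the API docs:

```python
# Minimal sketch, assuming the omnizart.music `app` object documented on
# the linked API page is what backs the `omnizart music transcribe` CLI
# command.
from omnizart.music import app as music_app

# Transcribe instrument notes ([WCS20] per the main doc); the `output`
# keyword argument is an assumption to check against the API page.
music_app.transcribe("path/to/audio.wav", output="./")
```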

BreezeWhite commented 3 years ago

Maybe I can add a reference link to the API page, in case someone wants more information about the model.