Open kasravi opened 2 years ago
Hey @kasravi ! We do support human voice - (check out our about page) - but not every recording will work perfectly out of the box. On the website, you can try clicking on the MIDI adjustments menu below the transcription, which you can use to get a better transcription. For a quick demo of how the sliders work, you can see this demo here.
Hey @rabitt, thanks for the reply. I intentionally used a sound whose pitches are relatively easy to detect just by analyzing its FFT. From your reply I understand that each transcription needs to be fine-tuned manually. So this model is not meant to be used in an unsupervised setting?
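For context, here is a minimal sketch of what "easy to detect from the FFT" means in this case: pick the strongest bin of the magnitude spectrum within the vocal range. This is pure-stdlib and illustrative only; real pitch trackers additionally handle harmonics, vibrato, and unvoiced frames.

```python
import math

def dominant_frequency(samples, sample_rate, lo=80.0, hi=1000.0):
    """Naive spectral pitch estimate: evaluate the DFT magnitude for
    bins in the vocal range and return the strongest bin's frequency.
    Resolution is sample_rate / len(samples) Hz; no harmonic handling."""
    n = len(samples)
    k_lo, k_hi = int(lo * n / sample_rate), int(hi * n / sample_rate)
    best_k, best_mag = k_lo, -1.0
    for k in range(k_lo, k_hi + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * sample_rate / n

# A pure 440 Hz test tone lands within one bin (~7.8 Hz here) of the true pitch.
sr = 16000
tone = [math.sin(2 * math.pi * 440 * i / sr) for i in range(2048)]
print(dominant_frequency(tone, sr))
```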
I am missing an option to restrict the output to a single note at a time. I could imagine the results would be better when the model knows there is no polyphony, e.g. for the vocals of a single person.
I have also found that Basic Pitch is great at detecting piano notes – played alone or simultaneously – but not as accurate for human voice (humming or whistling into a microphone). It seems to throw in unexpected staccato notes and high notes that weren't sung. I would love for this to be improved! I like the suggestion above of a non-polyphonic setting that maybe could improve accuracy for the use case of humming a tune. Maybe a setting to reduce pitch bend detection (round to the nearest note more aggressively) would also help.
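Until such a setting exists, part of this can be approximated in post-processing. A sketch, assuming the transcription has already been flattened to `(start, end, pitch)` tuples in seconds (a hypothetical intermediate format, not Basic Pitch's actual output API): truncate overlaps so only one note sounds at a time, then drop very short blips, which tends to remove the spurious staccato notes.

```python
def to_monophonic(notes, min_duration=0.08):
    """notes: list of (start, end, pitch) tuples (hypothetical format).
    Truncate overlapping notes so only one sounds at a time, then drop
    notes shorter than min_duration seconds (likely spurious blips)."""
    notes = sorted(notes)
    out = []
    for start, end, pitch in notes:
        if out and out[-1][1] > start:         # overlaps the previous note
            prev_start, _, prev_pitch = out[-1]
            out[-1] = (prev_start, start, prev_pitch)  # cut it short
        out.append((start, end, pitch))
    return [(s, e, p) for s, e, p in out if e - s >= min_duration]
```

For example, a 20 ms high note thrown in at the end of a phrase would be removed, and two overlapping notes would be resolved in favour of the later onset. Pitch-bend rounding would be a separate step on the pitch-bend events themselves.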
In case you are interested in pitch detection of human voice only:
The paper on Basic Pitch by Spotify is interesting and provides good pointers towards other automatic music transcription (AMT) systems.
From the conclusion:
NMP (i.e. the model behind Basic Pitch) achieves state-of-the-art results on GuitarSet. However, it did not outperform the instrument-specific models for piano and vocals.
The vocals comparison was done with Vocano.
Vocano [9] is a monophonic vocal transcription method which first performs vocal source separation, then applies a pre-trained pitch extractor followed by a note segmentation neural network, trained on solo vocal data.
[9] J.-Y. Hsu and L. Su, "VOCANO: A note transcription framework for singing voice in polyphonic music," in Proc. ISMIR, 2021.
The VOCANO paper used Patch-CNN for pitch detection.
CREPE is another "monophonic pitch tracker".
A comparison of Basic Pitch, CREPE and others with respect to human voice pitch detection would be nice ( see also https://github.com/spotify/basic-pitch/issues/45 )
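For such a comparison, a standard frame-level metric is raw pitch accuracy: the fraction of voiced frames where the estimate is within 50 cents of the reference (mir_eval has the canonical implementation). A minimal sketch, assuming both trackers' outputs have already been resampled onto the same time grid in Hz, with 0 meaning unvoiced:

```python
import math

def hz_to_midi(f):
    """Convert a frequency in Hz to a (fractional) MIDI note number."""
    return 69 + 12 * math.log2(f / 440.0)

def raw_pitch_accuracy(ref_hz, est_hz, tolerance_cents=50):
    """Fraction of voiced reference frames where the estimate lies
    within tolerance_cents of the reference. Inputs are frame-aligned
    frequency sequences in Hz; 0 marks an unvoiced frame."""
    hits = total = 0
    for r, e in zip(ref_hz, est_hz):
        if r <= 0:           # unvoiced reference frame: not scored
            continue
        total += 1
        if e > 0 and abs(hz_to_midi(r) - hz_to_midi(e)) * 100 <= tolerance_cents:
            hits += 1
    return hits / total if total else 0.0
```

Running this per tracker on the same annotated vocal recordings would make the Basic Pitch vs. CREPE comparison concrete.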
I haven't seen any statement about whether human voice pitch detection is supported, but since the claim is to be instrument-agnostic, I thought it would be better to give you a repro. If this is out of scope, please close, and sorry for the inconvenience.
I used the online version at https://basicpitch.spotify.com/ and fed it a clean, noise-free voice sample from https://samplefocus.com/samples/solo-voice-aah-solo. The result is far from usable:
Is there a plan to support fundamental frequency detection?