One question though: do we really want to show spectrograms to annotators? I've heard concerns in the past that this additional stimulus can bias annotators toward certain behaviors, so maybe we should be careful about that. We could pick that up in a side issue.
My $0.02:
Our main reason for including spectrograms in Gracenote's annotation tool was to have an additional modality that helps expedite the annotation process. However, since we are dealing with a different task here, showing a spectrogram might be less helpful and, as you said, may introduce bias. I think it'd still be good to be open to adding other visualizations if they make the annotation process faster or better.
@bmcfee do you have any references that discuss the issue?
For the task we are dealing with, I don't see much use for a spectrogram or any other visualization. I think the most important thing is that users know which sounds to look for, so I'd focus on how we present the taxonomy and how we train users.
For presenting the taxonomy, I'd try both: the hierarchy and some search mechanism (maybe highlighting the tree nodes that match the search terms). Users should also be able to play an example of each instrument.
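To make the search-plus-highlight idea concrete, here's a minimal sketch of how matching nodes could be found. It assumes the taxonomy is a nested dict of name → children (the structure, function name, and example instruments are all placeholders, not anything we've built):

```python
# Hypothetical sketch: search a nested taxonomy (name -> children dict)
# and return the paths of nodes whose names match the search term,
# so the UI layer can highlight them in the tree.

def find_matches(taxonomy, term, path=()):
    """Return paths to nodes whose name contains `term` (case-insensitive)."""
    matches = []
    for name, children in taxonomy.items():
        node_path = path + (name,)
        if term.lower() in name.lower():
            matches.append(node_path)
        # Recurse into children so matches anywhere in the tree are found.
        matches.extend(find_matches(children, term, node_path))
    return matches

taxonomy = {
    "strings": {"violin": {}, "guitar": {}},
    "winds": {"flute": {}, "clarinet": {}},
}

print(find_matches(taxonomy, "vio"))  # [('strings', 'violin')]
```

Returning full paths (rather than just leaf names) would let the UI expand and highlight every ancestor of a match, which seems like the behavior we'd want here.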
Spinning off an issue from #8.