- or a function to resolve media (so we'd need to know the start and end offsets); see the sketch after this list
- we require annotation bounds: we render the window for the annotation and assume the URL serves context of an appropriate length
- We leave this up to the implementor
  - not a solved problem for a standalone component
  - the implementor would provide a button/link with whatever behaviour they want
  - e.g. Ecosounds provides a context link in the template that pops up a dialog with a larger spectrogram when clicked; it can do that because it has a lot more knowledge of the source audio
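For illustration, a minimal sketch of the resolver-function variant. All names here (`Annotation`, `MediaResolver`, `resolveMedia`, the query parameters) are hypothetical and not part of the current API; it only shows why the start and end offsets would need to be known:

```ts
// Hypothetical shapes only; none of these names exist in the current API.
interface Annotation {
  audioUrl: string;    // source recording
  startOffset: number; // seconds into the recording
  endOffset: number;   // seconds into the recording
}

// The grid would call this with an annotation and the amount of surrounding
// context wanted, then render whatever URL comes back.
type MediaResolver = (
  annotation: Annotation,
  contextSeconds: number,
) => string | Promise<string>;

// Example resolver for a backend that accepts start/end query parameters.
const resolveMedia: MediaResolver = (annotation, contextSeconds) => {
  const start = Math.max(0, annotation.startOffset - contextSeconds);
  const end = annotation.endOffset + contextSeconds;
  return `${annotation.audioUrl}?start=${start}&end=${end}`;
};
```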
We're probably leaning towards option 2. It keeps the verification-grid simple and lets implementors create any solution.
Users have regularly expressed the desire to see the context from which an audio segment was generated (essentially zooming out a spectrogram). We might be able to support this through a `contextUrl` column (in the input dataset) or similar; a sketch follows.
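As a sketch of what that column could look like, assuming a row-per-tile input dataset; only `contextUrl` is the column under discussion, and the other field names are illustrative:

```ts
// Hypothetical dataset rows; field names other than `contextUrl` are made up.
interface GridItem {
  subjectUrl: string;  // the audio segment rendered in the grid tile
  contextUrl?: string; // a longer clip around the same event, used to zoom out
}

const items: GridItem[] = [
  {
    subjectUrl: "https://example.org/audio/rec1.flac?start=30&end=35",
    contextUrl: "https://example.org/audio/rec1.flac?start=15&end=50",
  },
];
```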