We need to efficiently display waveforms on the timeline.
I want this to be independent of Vizia (by that I mean that we shouldn't create a "waveform" Vizia view). Instead, I want some kind of method that takes a range of samples, a rectangular area of pixels, and a zoom factor as input, and outputs the waveform image.
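Something like the following, as a rough sketch of the shape of that API. All names here are hypothetical, and the zoom factor is encoded as samples-per-pixel, which is just one convenient option:

```rust
/// Hypothetical output type: a plain RGBA8 pixel buffer with no dependency
/// on any particular UI crate.
pub struct WaveformImage {
    pub width: u32,
    pub height: u32,
    /// Row-major, 4 bytes per pixel.
    pub pixels: Vec<u8>,
}

/// Sketch of the entry point: a pure function from a sample range plus a
/// pixel rectangle to an image.
pub fn render_waveform(
    samples: &[f32],
    width: u32,
    height: u32,
    samples_per_pixel: f64,
) -> WaveformImage {
    let image = WaveformImage {
        width,
        height,
        pixels: vec![0; (width as usize) * (height as usize) * 4],
    };
    // The actual rendering pass (CPU or GPU) would fill `image.pixels` here.
    image
}
```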
Preferably the drawing should be GPU-accelerated, but we could also have CPU-based rendering as a fallback and for testing purposes. We could use femtovg for rendering, but I imagine we can get much better performance with a custom shader. I also don't think that anti-aliasing is necessary, but we could look into it if it doesn't hurt performance much.
That being said, I'm not sure whether the actual calculation of the peaks is better done on the CPU or the GPU, especially when zoomed very far out on a long audio clip. We need to look into that.
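For reference, the CPU side of that calculation is just a min/max reduction per pixel column. A minimal sketch (function and parameter names are made up):

```rust
/// Compute one (min, max) pair per horizontal pixel column.
pub fn compute_peaks(samples: &[f32], width: usize, samples_per_pixel: f64) -> Vec<(f32, f32)> {
    (0..width)
        .map(|x| {
            // Range of samples covered by this pixel column.
            let start = ((x as f64) * samples_per_pixel) as usize;
            let end = (((x as f64 + 1.0) * samples_per_pixel) as usize).min(samples.len());
            if start >= end {
                return (0.0, 0.0); // zoomed past the end of the clip
            }
            samples[start..end]
                .iter()
                .fold((f32::INFINITY, f32::NEG_INFINITY), |(lo, hi), &s| {
                    (lo.min(s), hi.max(s))
                })
        })
        .collect()
}
```

The same reduction could run in a compute or fragment shader, but whether uploading the raw samples to the GPU pays off is exactly the open question.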
We could look into mipmapping to improve performance at various zoom levels, which could be especially important when zoomed very far out on a long audio clip. There has already been some progress here in the [audio-waveform-mipmap](https://github.com/MeadowlarkDAW/audio-waveform-mipmap) repo. I have no idea how it works, so ask ollpu.
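I haven't dug into that repo, but the usual approach is a pyramid of precomputed (min, max) pairs, where each level halves the resolution of the one below. A minimal sketch, assuming that approach (names are hypothetical):

```rust
/// Build a min/max pyramid: level 0 holds one (min, max) pair per 2 raw
/// samples, level 1 one pair per 4 samples, and so on.
pub fn build_minmax_pyramid(samples: &[f32], levels: usize) -> Vec<Vec<(f32, f32)>> {
    fn reduce_pairs(input: &[(f32, f32)]) -> Vec<(f32, f32)> {
        input
            .chunks(2)
            .map(|c| {
                let lo = c.iter().map(|p| p.0).fold(f32::INFINITY, f32::min);
                let hi = c.iter().map(|p| p.1).fold(f32::NEG_INFINITY, f32::max);
                (lo, hi)
            })
            .collect()
    }

    // Level 0 is built directly from the raw samples.
    let base: Vec<(f32, f32)> = samples
        .chunks(2)
        .map(|c| {
            let lo = c.iter().copied().fold(f32::INFINITY, f32::min);
            let hi = c.iter().copied().fold(f32::NEG_INFINITY, f32::max);
            (lo, hi)
        })
        .collect();

    let mut pyramid = vec![base];
    while pyramid.len() < levels && pyramid.last().unwrap().len() > 1 {
        let next = reduce_pairs(pyramid.last().unwrap());
        pyramid.push(next);
    }
    pyramid
}
```

At render time you'd pick the coarsest level that still gives at least one entry per pixel column, so each column reduces over a handful of entries instead of thousands of raw samples.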
There is also some added complexity in that very long audio clips will be streamed from disk, so keep that in mind.
If possible, I would also like the feature where, when zoomed in far enough, the rendering switches to drawing lines between individual samples, like Audacity does.
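Presumably that switch is just a threshold on samples-per-pixel. A sketch, with a guessed threshold that would need tuning:

```rust
enum DrawMode {
    /// Few samples per pixel: connect individual samples with line segments.
    Polyline,
    /// Many samples per pixel: draw one vertical min/max bar per column.
    PeakBars,
}

fn choose_draw_mode(samples_per_pixel: f64) -> DrawMode {
    // The exact cutoff is an assumption; somewhere around one sample per
    // pixel seems like a reasonable starting point.
    if samples_per_pixel <= 1.0 {
        DrawMode::Polyline
    } else {
        DrawMode::PeakBars
    }
}
```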
Also on the topic of Audacity, we could look into adding an "RMS" portion to the waveform view in addition to the peak values. But if it hurts performance too much, then I don't think we need it.
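If we do try it, RMS can fall out of the same per-column pass as the peaks for the cost of one extra accumulator. A minimal sketch (hypothetical function, not existing code):

```rust
/// Compute per-column (min, max, rms) triples in a single pass.
pub fn compute_peaks_and_rms(
    samples: &[f32],
    width: usize,
    samples_per_pixel: f64,
) -> Vec<(f32, f32, f32)> {
    (0..width)
        .map(|x| {
            let start = ((x as f64) * samples_per_pixel) as usize;
            let end = (((x as f64 + 1.0) * samples_per_pixel) as usize).min(samples.len());
            if start >= end {
                return (0.0, 0.0, 0.0);
            }
            let (mut lo, mut hi, mut sum_sq) = (f32::INFINITY, f32::NEG_INFINITY, 0.0f64);
            for &s in &samples[start..end] {
                lo = lo.min(s);
                hi = hi.max(s);
                let sd = s as f64;
                sum_sq += sd * sd;
            }
            (lo, hi, (sum_sq / (end - start) as f64).sqrt() as f32)
        })
        .collect()
}
```

One note for the mipmapping idea above: RMS values don't combine across levels directly, but sums of squares and sample counts do, so the pyramid would store those instead.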