An iOS and macOS audio visualization framework built on Core Audio, useful for anyone doing real-time, low-latency audio processing and visualization.
I've subclassed EZAudioPlotGL in my application to enable GL_BLEND with an appropriate blend function, so that when you configure the plot with colors containing alpha values, they render as expected. Here's what my subclass implementation looks like:
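The original subclass listing isn't included in this copy of the post, so here is a minimal sketch of what such a subclass could look like. The class name `BlendedAudioPlotGL` is an assumption, as is using `drawRect:` as the override point; `EZAudioPlotGL`'s actual draw entry point may differ between the iOS (GLKView-based) and macOS implementations.

```objc
// Hypothetical subclass — class name and override point are assumptions,
// not taken from the original post.
#import <EZAudio/EZAudio.h>

@interface BlendedAudioPlotGL : EZAudioPlotGL
@end

@implementation BlendedAudioPlotGL

// Enable alpha blending before the superclass renders the waveform,
// so colors configured with alpha < 1.0 composite as expected.
- (void)drawRect:(CGRect)rect
{
    glEnable(GL_BLEND);
    // Standard source-over blend function for straight (non-premultiplied) alpha.
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    [super drawRect:rect];
}

@end
```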
Resulting waveform (notice that I'm also re-rendering part of the waveform with a higher alpha value, to indicate current playback progress):

[screenshot of the blended waveform not included]
Is there any reason why this couldn't be the default behavior?