An iOS and macOS audio visualization framework built on Core Audio, useful for anyone doing real-time, low-latency audio processing and visualization.
On interfaces that contain more than one stream, this method would fail: the buffer size needed to be queried first, but it defaulted to 0, so the call returned an error.
Also renamed a couple of variables to make the code more readable.
Let me know what you think.