RT-WDF / rt-wdf_renderer

RT-WDF - Circuit Renderer Application

Add optional realtime rendering support #1

Open · m-rest opened this issue 7 years ago

m-rest commented 7 years ago

Add an option to switch to realtime WDF rendering. Maybe only enable this option after running an offline benchmark test first?

maxprod2016 commented 7 years ago

PART 1 - The AudioCallback static class

First of all, you need a generic AudioCallback class, a singleton or an external pointer that is initialized and released in the main class of the project:

void initialise (const String& commandLine) override
{
    audioCallback = new AudioCallback();
    mainWindow = new MainWindow (getApplicationName());
}

void shutdown() override
{
    mainWindow = nullptr;
    if (audioCallback != nullptr)
        delete audioCallback;
}
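For the external-pointer variant, the declaration can be as simple as this (a sketch; the names are illustrative, not from the project):

// AudioCallback.h : forward-declare and expose the shared instance
class AudioCallback;
extern AudioCallback* audioCallback;

// Main.cpp : define it at file scope, created/deleted as shown above
AudioCallback* audioCallback = nullptr;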

The AudioCallback class derives from JUCE's base class AudioIODeviceCallback.

There are three abstract methods to override:

audioDeviceAboutToStart (AudioIODevice* device)
audioDeviceIOCallback (...)
audioDeviceStopped ()

The class needs to embed JUCE's AudioDeviceManager class, which is the root of the audio system initialization. Just call the .initialise method of the AudioDeviceManager in the constructor of your AudioCallback class.
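Putting that together, a minimal declaration could look like this (a sketch; the 2-in/2-out channel counts are just example values):

class AudioCallback : public AudioIODeviceCallback
{
    public:
        AudioCallback ()
        {
            // 2 inputs, 2 outputs, no saved state, fall back to the default device
            const String error = audioDeviceManager.initialise (2, 2, nullptr, true);
            jassert (error.isEmpty());
        }
        //----------------------------------------------------------------------
        void audioDeviceAboutToStart (AudioIODevice* device) override;
        void audioDeviceIOCallback (const float** inputChannelData, int numInputChannels,
                                    float** outputChannelData, int numOutputChannels,
                                    int numSamples) override;
        void audioDeviceStopped () override;
        //----------------------------------------------------------------------
    private:
        AudioDeviceManager audioDeviceManager;
};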

There are 6 parameters:

numInputChannelsNeeded
numOutputChannelsNeeded
savedState (an XmlElement*, can be nullptr)
selectDefaultDeviceOnFailure
preferredDefaultDeviceName (optional)
preferredSetupOptions (an optional AudioDeviceSetup*)

For example, if you want to select the buffer size or the sample rate of the audio callback, you can fill an AudioDeviceSetup and pass it to the .initialise method:

AudioDeviceManager::AudioDeviceSetup setup;
setup.sampleRate = 44100;
setup.bufferSize = 8192;

// initialise() returns an empty string on success, or an error message
const String error = audioDeviceManager.initialise ( [...] , &setup);
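For reference, a full call with that setup might look like this (the channel counts and flags are example values):

// 2 ins, 2 outs, no saved XML state, select the default device on failure,
// no preferred device name, and our preferred sample rate / buffer size
const String error = audioDeviceManager.initialise (2, 2, nullptr, true,
                                                    String(), &setup);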

Now you need to add your callback to the AudioDeviceManager. If initialise returns no error, just add one line in your constructor, right after the initialization: audioDeviceManager.addAudioCallback (this);

If the selected input source is an audio source like a microphone, the callback will basically route the microphone to the output. You'll need a source player in the main callback:

void AudioCallback::audioDeviceAboutToStart (AudioIODevice *device)
{
    audioSourcePlayer.audioDeviceAboutToStart (device);
}

void AudioCallback::audioDeviceIOCallback (const float **inputChannelData,
                       int numInputChannels, 
                       float **outputChannelData, 
                       int numOutputChannels, 
                       int numSamples)
{
    audioSourcePlayer.audioDeviceIOCallback (inputChannelData,
                         numInputChannels,
                         outputChannelData,
                         numOutputChannels,
                         numSamples);
}

void AudioCallback::audioDeviceStopped ()
{
    audioSourcePlayer.audioDeviceStopped ();
}

So you need to add an AudioSourcePlayer audioSourcePlayer; to your AudioCallback class members.

PART 2 - Load and play audio file stream

In order to play an audio file and modify the signal in real time, your AudioCallback class needs two other JUCE audio classes:

AudioTransportSource transportSource; 
MixerAudioSource mixerSource;

The first one, the transport source, provides 'transport' control (start/stop...) over the audio file you want to stream. The second, the mixer source, lets you connect the transport to the source player.

In the AudioCallback constructor, you'll need to wire the different classes like this:

mixerSource.addInputSource (&transportSource, false);
audioSourcePlayer.setSource (&mixerSource);
audioDeviceManager.addAudioCallback (this);

By symmetry, the destructor of the AudioCallback will be:

audioDeviceManager.removeAudioCallback (this);
audioSourcePlayer.setSource (nullptr);
transportSource.setSource (nullptr);

Now your class is able to load an audio file and play it through the transport class. So add a new public method to your AudioCallback, let's say void openAudioFile (File audioFile), and implement it:

void AudioCallback::openAudioFile (File audioFile)
{
    transportSource.setSource (nullptr);

    AudioFormatManager formatManager;
    formatManager.registerBasicFormats();

    AudioFormatReader* reader = formatManager.createReaderFor (audioFile);
    if (reader != nullptr)
    {
        currentAudioFileSource = new AudioFormatReaderSource (reader, true);
        transportSource.setSource (currentAudioFileSource, // PositionableAudioSource
                   0, // readAheadBufferSize
                   nullptr, // readAheadThread
                   reader->sampleRate, // sourceSampleRateToCorrectFor
                   reader->numChannels); // maxNumChannels
        transportSource.start ();
    }
}

The transportSource.start(); call is not mandatory here; if you don't want the audio file to start playing automatically after loading, remove this line. In order to keep a pointer to the current audio file, just add a member to your class: ScopedPointer<AudioFormatReaderSource> currentAudioFileSource;
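For example, a menu or button handler could feed it from JUCE's synchronous file chooser; a sketch (the audioCallback pointer is the one from Part 1):

FileChooser chooser ("Select an audio file to play...",
                     File::nonexistent,
                     "*.wav;*.aif;*.aiff");

if (chooser.browseForFileToOpen())   // blocks until the user picks a file
    audioCallback->openAudioFile (chooser.getResult());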

Your application GUI needs control over the audio transport. Because your AudioCallback is a static pointer (singleton or extern pointer), add a public accessor for the transport:

AudioTransportSource* getActiveTransport () { return &transportSource; } 
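A play/stop button handler can then drive the transport through that accessor; a sketch (MainContentComponent and playStopButton are hypothetical names):

void MainContentComponent::buttonClicked (Button* button)
{
    AudioTransportSource* transport = audioCallback->getActiveTransport();

    if (button == &playStopButton)
    {
        if (transport->isPlaying())
            transport->stop();
        else
            transport->start();
    }
}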

PART 3 - Process/modify the audio stream in real-time

Now your AudioCallback class is able to play different sources, but it does nothing more. If you want to alter the signal with a process (in our case, a WDF circuit), you need to create a new class that inherits from JUCE's AudioProcessor.

The AudioProcessor is basically the JUCE base class for all the plugins generated by its wrapper (VST...). The AudioProcessor has a lot of methods and accessors (I see right now that a concept of Bus was recently added to the JUCE documentation for the AudioProcessor class), but basically a derived class only needs to override a few methods:

class WDFProcessor : public AudioProcessor
{
    public:
        WDFProcessor ();
        //----------------------------------------------------------------------
        const String getName () const override;
        //----------------------------------------------------------------------
        void prepareToPlay (double sampleRate, int maximumExpectedSamplesPerBlock) override;
        void releaseResources () override;
        void processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiMessages) override;
        //----------------------------------------------------------------------
        double getTailLengthSeconds () const override;
        //----------------------------------------------------------------------
        bool acceptsMidi () const override;
        bool producesMidi () const override;
        //----------------------------------------------------------------------
        AudioProcessorEditor* createEditor () override;
        bool hasEditor () const override;
        //----------------------------------------------------------------------
        int getNumPrograms () override;
        int getCurrentProgram () override;
        void setCurrentProgram (int index) override;
        const String getProgramName (int index) override;
        void changeProgramName (int index, const String& newname) override;
        //----------------------------------------------------------------------
        void getStateInformation (MemoryBlock& destData) override;
        void setStateInformation (const void* data, int sizeInBytes) override;
        //----------------------------------------------------------------------
};
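Most of the non-audio overrides can simply return defaults for a standalone renderer; a minimal sketch (createEditor / hasEditor are covered below):

const String WDFProcessor::getName () const                 { return "WDFProcessor"; }
double WDFProcessor::getTailLengthSeconds () const          { return 0.0; }
bool WDFProcessor::acceptsMidi () const                     { return false; }
bool WDFProcessor::producesMidi () const                    { return false; }
int WDFProcessor::getNumPrograms ()                         { return 1; } // JUCE expects at least 1
int WDFProcessor::getCurrentProgram ()                      { return 0; }
void WDFProcessor::setCurrentProgram (int)                  {}
const String WDFProcessor::getProgramName (int)             { return String(); }
void WDFProcessor::changeProgramName (int, const String&)   {}
void WDFProcessor::getStateInformation (MemoryBlock&)       {}
void WDFProcessor::setStateInformation (const void*, int)   {}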

Of course, the three main methods to initialize and produce the audio stream are:

void prepareToPlay (double sampleRate, int maximumExpectedSamplesPerBlock) override;
void releaseResources () override;
void processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiMessages) override;

processBlock provides a multichannel buffer whose size is defined by the callback settings. The MidiBuffer is used in the case of a VSTi (VST instrument) or similar. For example, by allowing MIDI input in your processor (bool acceptsMidi () const override;), you are able to drive your process with MIDI information.
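As an illustration, a processBlock that pushes every sample through a WDF circuit could look like this; wdfCircuit and its processSample method are hypothetical placeholders for the actual rt-wdf tree, not real library calls:

void WDFProcessor::processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiMessages)
{
    for (int channel = 0; channel < buffer.getNumChannels(); ++channel)
    {
        float* samples = buffer.getWritePointer (channel);

        // Process in place, one sample at a time, through the (hypothetical) circuit
        for (int i = 0; i < buffer.getNumSamples(); ++i)
            samples[i] = wdfCircuit.processSample (samples[i]);
    }
}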

You can also add a GUI to your processor by creating a new 'Editor' class that inherits from JUCE's AudioProcessorEditor class and returning a pointer to it from the AudioProcessor's createEditor method.

class WDFEditor : public AudioProcessorEditor
{
    public:
        WDFEditor (AudioProcessor* processor);
        //----------------------------------------------------------------------
        void resized () override;
        void paint (Graphics& g) override;
        //----------------------------------------------------------------------
};
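The processor then hands back an instance of that editor; a sketch:

AudioProcessorEditor* WDFProcessor::createEditor ()  { return new WDFEditor (this); }
bool WDFProcessor::hasEditor () const                { return true; }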

Now, how do you connect an AudioProcessor to your AudioCallback stream?

It's very simple: you just need to add a new AudioProcessorPlayer member to your callback class: AudioProcessorPlayer processorPlayer;

Your callback methods become:

void AudioCallback::audioDeviceAboutToStart (AudioIODevice *device)
{
    audioSourcePlayer.audioDeviceAboutToStart (device);
    processorPlayer.audioDeviceAboutToStart (device);
}

void AudioCallback::audioDeviceIOCallback (const float **inputChannelData,
                       int numInputChannels, 
                       float **outputChannelData, 
                       int numOutputChannels, 
                       int numSamples)
{
    // Render the file/mixer sources into the output buffer first...
    audioSourcePlayer.audioDeviceIOCallback (inputChannelData,
                         numInputChannels,
                         outputChannelData,
                         numOutputChannels,
                         numSamples);

    // ...then run the processor over that output in place, so the streamed
    // audio goes through the WDF circuit. Running the source player last
    // would overwrite the processed signal with the raw file playback.
    if (processorPlayer.getCurrentProcessor() != nullptr)
        processorPlayer.audioDeviceIOCallback (const_cast<const float**> (outputChannelData),
                           numOutputChannels,
                           outputChannelData,
                           numOutputChannels,
                           numSamples);
}

void AudioCallback::audioDeviceStopped ()
{
    audioSourcePlayer.audioDeviceStopped ();
    processorPlayer.audioDeviceStopped ();
}

In the AudioCallback constructor (or wherever you want to set the current processor, depending on your application strategy), just add two lines to connect your AudioProcessor class (e.g. WDFProcessor) to the AudioProcessorPlayer:

currentProcessor = new WDFProcessor();
processorPlayer.setProcessor (currentProcessor);

Of course, currentProcessor is an AudioCallback class member:

ScopedPointer<AudioProcessor> currentProcessor;

Now your audio stream (depending on the source: microphone / audio file) will be processed by the code of your AudioProcessor.

That's all, folks. Hope that helps; questions or suggestions are welcome.