
Theory of sound field synthesis
https://sfs.readthedocs.io
Creative Commons Attribution 4.0 International

Usage of SFS Toolbox with Loudspeaker Arrays #38

Open shirley543 opened 5 years ago

shirley543 commented 5 years ago

Hi, I'm incredibly new to sound field synthesis. I was just wondering whether the SFS Toolbox could be used with real-life loudspeakers in order to reproduce impulse responses? What we want is to set up an array of 16 loudspeakers in an anechoic chamber and be able to simulate various reverberation levels/room sizes before playing back the response over the loudspeaker array. Can the SFS Toolbox be used in this way, i.e. to create the simulated sound and then split it into 16 channels? Thank you.

hagenw commented 5 years ago

Hi, cool that you are starting to work on sound field synthesis.

Regarding your question there are different answers, depending on what exactly you want to achieve. If you want to set up a loudspeaker array and directly drive those loudspeakers with a sound field synthesis method like WFS, then the SoundScape Renderer is probably what you are looking for: http://spatialaudio.net/ssr/ It has a GUI and you can move sources in real time.

If you are more interested in acoustic research questions, you might want to create binaural simulations by first measuring the impulse responses of your loudspeakers (e.g. in different room settings). For a start, have a look at http://matlab.sfstoolbox.org/en/2.5.0/binaural-simulations/ (at the moment this is only available in the Matlab/Octave version of the Toolbox). In addition, you might use the Python or Matlab/Octave version of the Toolbox to pre-calculate driving signals for a static source position and feed them into the loudspeakers.
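To get a feeling for what "pre-calculating driving signals for a static source position" means, here is a minimal pure-Python sketch (this is not the Toolbox's actual API, just a simplified delay-and-gain rendering): for each loudspeaker of a 16-channel circular array we compute the propagation delay and 1/r gain of a virtual point source, and mute loudspeakers that the virtual wave front would hit from behind. The array radius, source position, and gain model are assumptions for illustration.

```python
import math

C = 343.0          # speed of sound in m/s
N_SPEAKERS = 16    # loudspeakers on a circle
RADIUS = 1.5       # array radius in m (assumed)
XS = (2.5, 0.0)    # static virtual point source, outside the array (assumed)

def driving_parameters():
    """Per-loudspeaker delay (s) and amplitude for a simple
    delay-and-gain rendering of a static point source."""
    params = []
    for i in range(N_SPEAKERS):
        phi = 2 * math.pi * i / N_SPEAKERS
        x0 = (RADIUS * math.cos(phi), RADIUS * math.sin(phi))
        r = math.dist(XS, x0)         # virtual source to this loudspeaker
        delay = r / C                 # propagation delay of the virtual source
        gain = 1.0 / r                # 1/r spherical spreading
        # inward-pointing loudspeaker normal; mute loudspeakers that the
        # virtual wave front hits from behind (secondary source selection)
        n0 = (-math.cos(phi), -math.sin(phi))
        incidence = ((x0[0] - XS[0]) * n0[0] + (x0[1] - XS[1]) * n0[1]) / r
        params.append((delay, gain if incidence > 0 else 0.0))
    return params

for ch, (tau, g) in enumerate(driving_parameters()):
    print(f"ch {ch:2d}: delay = {tau * 1000:5.2f} ms, gain = {g:.3f}")
```

The 16 (delay, gain) pairs could then be applied to the source signal to produce the 16 playback channels; the actual Toolbox driving functions additionally include the correct WFS pre-filtering and amplitude terms.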

Feel free to ask for more details if you are struggling to get started.

shirley543 commented 5 years ago

Hi again, thanks heaps for the quick response! I want to be able to drive a loudspeaker array while moving the source, and simulate different room types (sizes, reverberation times, etc.) to affect how the source sounds (e.g. make it sound as if it were reverberating). Would the binaural simulations in the SFS Toolbox be able to achieve this? Or would the SoundScape Renderer be more suited? Sorry for all the questions, still trying to understand everything. Thank you

fs446 commented 5 years ago

I'd probably start with the IEM plugin suite. An Ambisonics signal chain of https://plugins.iem.at/docs/plugindescriptions/#directivityshaper https://plugins.iem.at/docs/plugindescriptions/#roomencoder https://plugins.iem.at/docs/plugindescriptions/#binauraldecoder seems exactly what you need in your project.

shirley543 commented 5 years ago

Thanks, will definitely check it out! Just to check my understanding: binaural simulations are usually only played back over headphones, so the SFS Toolbox is more for simulating different loudspeaker arrays and playing back the sound through headphones. In the case where a real-life loudspeaker array is to be used to play back the sound, the IEM plugins may be more suitable? Would I still need the binaural decoder in that case? Thank you for all the help

fs446 commented 5 years ago

Ok, I'll try to explain the IEM stuff for the FX chain in a Reaper project for 7th-order Ambisonics.

1.) You need a track with 64 track channels (the maximum possible in Reaper at the moment) to work with 7th-order Ambisonics.

2.) Put a mono signal on it which you want to auralize with room information.

3.) 1st FX: the IEM DirectivityShaper encodes the mono signal into the Ambisonics domain by applying a desired directivity to the virtual source. If you want an omni/point source, leave it at order 0.

4.) 2nd FX: put this virtual source into a room with the IEM RoomEncoder, which calculates early reflections with an image source model. Make sure the RoomEncoder receives 7th-order Ambisonics (top left) and sends 7th-order Ambisonics (top right).

5.) 3rd FX: for the late reverberation a good reverb plugin is required, ideally one that handles all Ambisonics channels together; you might use the IEM FdnReverb for this. The room sizes of the early and diffuse parts must be matched manually at the moment.

6.) Once you arrive here with a convincing result, you have two fundamental options. Recap: you are still dealing with a 7th-order Ambisonics bus.

6a.) You can decode the Ambisonics signals to a loudspeaker setup using the IEM AllRADecoder plugin; the setup then dictates the most suitable/meaningful Ambisonics order. This renders the virtual source in the virtual room (early reflections, diffuse reverb) onto the loudspeakers.

6b.) That's what you were first asking for: you can decode to binaural playback. The IEM BinauralDecoder does this in a fancy new way, grabbing the binaural information directly out of the Ambisonics signals by mapping to a Neumann KU 100 artificial head. Just insert it as the last FX in the chain and route to an L/R track or a hardware output.
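As a rough numerical illustration of the encode/decode steps above (nothing like the IEM plugins' actual implementation, which works at much higher orders with AllRAD), here is a first-order 2D Ambisonics sketch in Python: a source direction is encoded into the components W, X, Y, and a basic sampling (projection) decoder evaluates that pattern at each loudspeaker direction of a uniform ring. The ring of 8 loudspeakers and the 45° source azimuth are made-up example values.

```python
import math

L = 8  # loudspeakers on a uniform ring (assumption for illustration)

def encode_fo2d(azimuth):
    """Encode a unit-amplitude source direction into first-order
    2D Ambisonics components (W, X, Y)."""
    return (1.0, math.cos(azimuth), math.sin(azimuth))

def decode_sampling(b_signals, speaker_azimuths):
    """Basic sampling (projection) decoder: evaluate the encoded
    directivity pattern at each loudspeaker direction."""
    w, x, y = b_signals
    return [(w + 2 * x * math.cos(phi) + 2 * y * math.sin(phi))
            / len(speaker_azimuths)
            for phi in speaker_azimuths]

speakers = [2 * math.pi * l / L for l in range(L)]
gains = decode_sampling(encode_fo2d(math.radians(45)), speakers)
for l, g in enumerate(gains):
    print(f"speaker {l} ({l * 360 // L:3d} deg): gain = {g:+.3f}")
```

The loudspeaker closest to the source direction gets the largest gain, the total gain sums to one, and the small negative sidelobe gains are characteristic of this simple decoder; higher orders sharpen the pattern.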

If you give this a try, there are other very useful plugins worth a look too, such as SPARTA/COMPASS from Aalto: http://research.spa.aalto.fi/projects/sparta_vsts/

> Thanks, will definitely check it out! Just to check my understanding, binaural simulations are usually only played back on headphones, hence usage of SFSToolbox is more for simulating different loudspeaker arrays and playing back the sound through headphones.

Simulating loudspeaker arrays can be tricky when the directivity of the array you want to simulate cannot be approximated by an Ambisonics directivity within the DirectivityShaper plugin, or if you even want to control the loudspeakers of the array individually. Then every single loudspeaker within the array must be assigned its own RoomEncoder; for very large arrays this might consume a lot of CPU.

> In the case where a real life loudspeaker array is to be used to playback the sound, the IEM plugins may be more suitable?

IEM plugins are perfectly suited for Ambisonics-based rendering on almost arbitrary loudspeaker setups. The SSR can handle only circular Ambisonics arrays at the moment. The Matlab SFS Toolbox has almost everything you need for simulations similar to the IEM plugin workflow mentioned above, though with some offline pre-rendering involved. I guess you would additionally need a good reverb FX then, since only an image source model is available at the moment.
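To illustrate what an image source model computes (a sketch of the principle, not the Toolbox's or RoomEncoder's implementation), here is the direct path plus the six first-order image sources of a shoebox room in Python: each wall mirrors the source, and each mirrored source contributes an echo with its own delay and 1/r attenuation. The room dimensions, positions, and the frequency-flat reflection coefficient are made-up example values.

```python
import math

C = 343.0                     # speed of sound in m/s
ROOM = (5.0, 4.0, 3.0)        # shoebox dimensions Lx, Ly, Lz in m (assumed)
SRC = (1.0, 2.0, 1.5)         # source position (assumed)
RCV = (4.0, 2.5, 1.7)         # receiver position (assumed)
BETA = 0.8                    # wall reflection coefficient, frequency-flat (assumed)

def first_order_images():
    """Direct path plus the six first-order image sources of a
    shoebox room: one (delay in s, amplitude) pair per path."""
    images = [SRC]  # index 0: the direct sound
    for axis, size in enumerate(ROOM):
        for wall in (0.0, size):          # mirror across both walls of this axis
            img = list(SRC)
            img[axis] = 2 * wall - SRC[axis]
            images.append(tuple(img))
    paths = []
    for k, img in enumerate(images):
        r = math.dist(img, RCV)
        amp = (1.0 if k == 0 else BETA) / r   # 1/r spreading, one wall bounce
        paths.append((r / C, amp))
    return paths

for tau, amp in first_order_images():
    print(f"delay = {tau * 1000:5.2f} ms, amplitude = {amp:.3f}")
```

Real image source implementations iterate this mirroring to higher orders, which is exactly where the early-reflection pattern of a room comes from; the late diffuse tail is what the extra reverb FX has to supply.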

> Would I still need the binaural decoder in that case?

Only if you want to check the results in an auralization.

> Thank you for all the help

You're very welcome!