facebookresearch / sound-spaces

A first-of-its-kind acoustic simulation platform for audio-visual embodied AI research. It supports training and evaluating multiple tasks and applications.
https://soundspaces.org
Creative Commons Attribution 4.0 International

How can I get RIRs #84

Open JackeyPu opened 2 years ago

JackeyPu commented 2 years ago

Hi, thanks for sharing this with us. May I ask how the RIRs are generated? Apart from downloading them directly, is there another way to obtain them?

ChanganVR commented 2 years ago

Hi @JackeyPu, to use SoundSpaces 1.0 you need to download the pre-rendered impulse responses; to use SoundSpaces 2.0, you just need to install the repo correctly, and then you can use the simulator to render them on the fly.

JackeyPu commented 2 years ago

Thanks for your reply. Another question: if I use SoundSpaces 2.0, which .py files should I modify to select the sound source location? If it is convenient for you, could you point me to the exact location in the code?

ChanganVR commented 2 years ago

Hi @JackeyPu, I'm not sure what downstream task you have in mind, but a minimal example of manipulating the sound source can be found here: https://github.com/facebookresearch/sound-spaces/blob/fb68e410a4a1388e2d63279e6b92b6f082371fec/PanoIR/render_panoIR.py#L124-L137

If you let me know your target application, I can advise further.

JackeyPu commented 2 years ago

Thanks very much. I am confused about how to add another sound source, that is, using two mono sound sources to do a task that was previously done with only one. Concretely, my task is to convolve two mono sound sources with RIRs and obtain a single binaural sound at the end.

ChanganVR commented 2 years ago

Acoustic perception is additive. To compute the sound for two sources, you can simply compute the sound for each source separately and then add the results together.
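As a concrete sketch of this superposition (not part of the SoundSpaces API — the RIR arrays here are hypothetical placeholders for whatever the simulator renders), each mono source is convolved with its own binaural RIR and the two binaural signals are summed:

```python
import numpy as np
from scipy.signal import fftconvolve


def render_two_sources(src_a, rir_a, src_b, rir_b):
    """Convolve each mono source with its binaural RIR and superpose.

    src_a, src_b: mono waveforms, shape (n_samples,)
    rir_a, rir_b: binaural RIRs, shape (2, rir_len)
    Returns a binaural waveform of shape (2, out_len).
    """
    def spatialize(src, rir):
        # Convolve the mono source with each channel of the RIR.
        return np.stack([fftconvolve(src, ch) for ch in rir])

    out_a = spatialize(src_a, rir_a)
    out_b = spatialize(src_b, rir_b)

    # Zero-pad to a common length, then add (acoustics is linear).
    n = max(out_a.shape[1], out_b.shape[1])
    out_a = np.pad(out_a, ((0, 0), (0, n - out_a.shape[1])))
    out_b = np.pad(out_b, ((0, 0), (0, n - out_b.shape[1])))
    return out_a + out_b
```

Because convolution and addition are both linear, rendering the sources jointly this way is equivalent to rendering each one alone and mixing afterwards.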

sreeharshaparuchur1 commented 1 year ago

@ChanganVR , so would it be correct to simulate the properties of multiple sound sources as follows:

Is the above process correct? That is, will constructive and destructive interference between sound sources be modeled accurately? I have tried this, and the resulting IR map looks the same as if the two sound sources had been placed in the environment independently of each other. Do you have any insight into why that might be the case? Below is an example, where the direction of the black arrow indicates the orientation of the binaural microphone at each grid point in the scene: [image]

I have tried setting multiple audioSourceTransforms on a single audio sensor in several ways, none of which worked, so I may be doing something wrong. I would appreciate your guidance on the right way to add multiple audio sources via this method, if it is possible.

Thanks in advance!

ChanganVR commented 1 year ago

@sreeharshaparuchur1 assume you have two sound sources A and B as well as a receiver C; the sound received at C is simply RIR(A, C) * sound_A + RIR(B, C) * sound_B, where * denotes convolution.

In order to simulate these two sounds, you can set the sound source to each location separately and render the two RIRs sequentially, which is equivalent to moving the sound source around in the environment.

> Is it not possible to create two agents in the environment, bind them to two different audio sources, and then add the IRs, since acoustic perception is additive?

To achieve this, you basically need to set the source twice and the receiver twice, render the IRs for these four location combinations, and then add the sounds for each of the two receiver locations separately.