Open Andrew123098 opened 2 years ago
Hi.
To be able to save the output, you'll need to use a loopback device, using the ALC_SOFT_loopback extension. Essentially you need to use alcLoopbackOpenDeviceSOFT instead of the normal alcOpenDevice, make sure to specify the format and HRTF attributes to alcCreateContext, then you can use alcRenderSamplesSOFT to render samples from any playing sources and get the result, which you can then give to some other API like libsndfile to encode and write to disk.
Be aware that there may be a bit of latency in the rendered output, both as a result of various render options that may be set and because the HRTF filtering itself delays output by design. So you may get some silent samples after the source starts, and some audible samples after it stops.
PS: I soon have a deadline for this program to be finished, and while I hope that fixing this problem allows me to complete the program, I know there are probably other things that may go wrong or require additional help. Would you or anyone else you know with expertise in this API be willing to do a consultation? I would happily compensate you for your time.
Sure, feel free to ask here. If it gets too offtopic for the library, it can be continued over email.
Thank you so much for your quick response. I will try to implement this approach and let you know if I have any questions.
I have been looking around for a bit, and I cannot figure out which file(s) to include to get the alcLoopbackOpenDeviceSOFT() function to be accessible. It seems that alext.h or alc.h should work but they do not. (Also I thought I might try using the alcIsExtensionSupported() function but it is not recognized either, though this is not completely necessary).
It's an extension, so it needs to be loaded dynamically.
static LPALCLOOPBACKOPENDEVICESOFT alcLoopbackOpenDeviceSOFT;
static LPALCRENDERSAMPLESSOFT alcRenderSamplesSOFT;

int main()
{
    ...
    if(!alcIsExtensionSupported(NULL, "ALC_SOFT_loopback"))
        return 1;
    alcLoopbackOpenDeviceSOFT = (LPALCLOOPBACKOPENDEVICESOFT)alcGetProcAddress(NULL, "alcLoopbackOpenDeviceSOFT");
    alcRenderSamplesSOFT = (LPALCRENDERSAMPLESSOFT)alcGetProcAddress(NULL, "alcRenderSamplesSOFT");
    ...
    ALCdevice* loopbackDevice = alcLoopbackOpenDeviceSOFT(NULL);
    ...
}
This may be a bit more involved of a question. I am trying to set up the loopback device and renderer properly, but when I use alcRenderSamplesSOFT, the program exits. I cannot tell if this is due to an improperly set up loopback device, input buffer, or output buffer (or something else). One thing that confuses me is the use of ALCvoid* as the datatype of the second argument to alcRenderSamplesSOFT (I am not sure I am using it properly). Let me know what you think may be causing issues. All my code is at the following GitHub link, and the issue occurs at line 44 of main.cpp.
https://github.com/Andrew123098/ESI2022/tree/master/openal-impl
std::vector<ALuint> data = SoundBuffer::get()->p_SoundEffectBuffers;
ALCvoid* buffer = &data;
alcRenderSamplesSOFT(mysounddevice->p_ALCDevice, buffer, 1024);
You're making it write over the data variable itself, which is on the stack, and telling it to write 1024 sample frames over a 12-byte struct.
After opening the loopback device, you need to create a context and make sure to specify the format you want in the attributes:
/* For stereo 32-bit float samples, 48khz, with HRTF enabled. */
ALCint attrs[] = {
    ALC_FORMAT_CHANNELS_SOFT, ALC_STEREO_SOFT,
    ALC_FORMAT_TYPE_SOFT, ALC_FLOAT_SOFT,
    ALC_FREQUENCY, 48000,
    ALC_HRTF_SOFT, ALC_TRUE,
    0
};
ALCcontext *context = alcCreateContext(mysounddevice->p_ALCDevice, attrs);
alcMakeContextCurrent(context);
Then when you want to render some samples, you need to allocate memory to store them:
std::vector<float> samples; // we're rendering floats
samples.resize(1024 * 2); // 1024 stereo sample frames
Then render:
alcRenderSamplesSOFT(mysounddevice->p_ALCDevice, samples.data(), 1024);
And there are now 1024 sample frames in the samples vector, mixed from whatever sources were set to playing at the time of the call. You would then pass the samples to the encoder and write them to disk. Then you keep rendering more samples and encoding/writing them to disk, as you make changes to the sources, if any.
Great! These changes make sense and I implemented them. It seems I have now run into an additional problem where my sound never stops playing. Or, at least, the while loop in the play function in soundSource.cpp never ends. I tried commenting out lines 31-38, because I do not think the while loop is even necessary, but regardless, the data vector where I am trying to save the recorded data is empty. Is there something wrong with the way I am recording the sound?
If you don't call alcRenderSamplesSOFT, the source state won't progress and you won't get new samples. The while loop needs to be something like this:
while(state == AL_PLAYING && alGetError() == AL_NO_ERROR)
{
    std::cout << "currently playing sound\n";
    alcRenderSamplesSOFT(mysounddevice->p_ALCDevice, samples.data(), 1024);
    ... encode/write the 1024 more sample frames to disk here ...
    alGetSourcei(p_Source, AL_SOURCE_STATE, &state);
}
Note that this will never stop if the source is looping, and will eventually fill up the disk or whatever storage it's writing to. Otherwise, it will render, encode, and write the audio samples in 1024 sample frame chunks, then stop when the source stops mixing.
Hello @kcat, I was hoping you might be able to shed some light on whether I am recording the directional sound properly. I have managed to set up the program mostly correctly (I think), but I am running into an issue where the resulting .WAV file is empty (it has some symbols in the raw form, but it cannot be played because the recording is 0 seconds long) (see the picture below for the raw file). So, I am unsure if the problem is that I am recording the sound incorrectly using alcRenderSamplesSOFT or that I am making and exporting the .WAV file incorrectly (or both :). Let me know what you think. I really appreciate your helping me with this. Again, the code can be found at https://github.com/Andrew123098/ESI2022/tree/master/openal-impl and the particular problem is likely occurring at or near main.cpp lines 48-60. The create_file function is defined in makeFile.cpp. Best, Andrew
It looks like you're calling makeFile::create_file in the loop when rendering samples, so it keeps recreating the file and overwriting what it had. Also, with memset(buffer, 0, sizeof(buffer)); you're clearing only the first 4 or 8 bytes to 0 (the size of the pointer, not the buffer it points to), setting the first few samples to 0 each time, which probably isn't what you intended to do.
That does make sense, but regardless of where I place the makeFile::create_file function, the resulting .WAV file is 0 seconds long. To me, this leaves the issue as maybe being the recording device setup or an issue with the number of audio samples. Would it make more sense to record the entire audio sample with alcRenderSamplesSOFT at once instead of in chunks?
Edit: I think part of the problem may be that I am not creating enough space. Sample is a 1024x2 array but the audio is at 44100 samples per second. So by chunking it into 0.023-second chunks, a 3 second .WAV file will fill up that space immediately and show up as 0 seconds long. I think I may need to create an array of 1024x2 arrays and iterate through them in my while loop. After, I can concatenate them all and write to file. Let me know if you think this makes sense.
That does make sense, but regardless of where I place the makeFile::create_file function, the resulting .WAV file is 0 seconds long.
You need to split it up. You would have create_file just create/open the file for writing, but not write anything, and hold on to the file to keep it open. Then in the loop, you keep rendering samples and writing to that open file. Then when you're done rendering, you close the file to finish it. Everything you write in between opening and closing the file will be concatenated on disk, so as long as you don't close the file until you're done rendering, there's no need to store it all in memory first.
This makes sense. I will try to implement this approach.
@kcat I think I got it to work!! Thank you so much for all your help! Enjoy your Christmas, and if you could, let me know how I can send you compensation for all your help. I can do Zelle, Venmo, Paypal, anything really. Just let me know :)
Merry Christmas, Andrew Brown
Hey Muthu, It has been a while since I worked on this project, but I am happy to share my code with you. The repo contains a tool that converts a directory of audio files from monaural to binaural directional by giving the program directional inputs. All of the code can be found https://github.com/Andrew123098/ESI2022. Let me know if you have any questions about the code and I am happy to help.
On Sun, Oct 16, 2022 at 12:05 AM neo_numerics @.***> wrote:
Hi guys, I developed a framework to save multiple audio files of different listener-source positions for streaming in a voice chat application. I am in a similar deadline situation as @Andrew123098 https://github.com/Andrew123098 in that I have very little time to develop code to save WAV files. I tried the above suggestions but am not able to see the bigger picture. I would be grateful if you could post code clippings for the file_write part. Thanks a lot. --Muthu
Hi Andrew,
Thanks a lot for your kind offer. The link to your project has been greatly helpful in developing what I wanted. Keep the good work going.
Best Regards, Muthu
Hello, I am trying to create a program that converts a monaural sound file into a binaural directional sound file using a source and listener placed at various different relative locations. I am trying to use this library to accomplish this task. I believe I have set up the audio scene properly, but I am running into trouble actually capturing the resulting binaural audio. Ideally, I would like to save it to a stereo .wav, .ogg, or .mp3 file, but I cannot figure out how to even save the data to a buffer or vector. So far I have been able to set up my buffers, source, and listener, and have enabled the HRTF to ensure directional audio. The sound plays through the speakers but, as far as I can tell, is not saved anywhere. Please let me know how you think I could go about solving this. I am happy to provide any of my code to help. Thank you so much for your time.