Closed: leafac closed this issue 1 year ago
Hi Leandro!
> Am I right in thinking that `OnWriteMixedOutput()` is called once per device, and that `OnReadClientInput()` is called once per client?

Yes. `OnWriteMixedOutput` is used if `DeviceParameters::EnableMixing` is true. If you set it to false, libASPL will instead use `OnWriteClientOutput` and call it once per client. (Note: the mixing is done by CoreAudio, not by libASPL.)
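To make this concrete, here is a minimal sketch of the two write paths. The handler signatures below are paraphrased, so check `aspl/IORequestHandler.hpp` for the exact ones:

```cpp
#include <aspl/Client.hpp>
#include <aspl/IORequestHandler.hpp>
#include <aspl/Stream.hpp>

#include <memory>

class LoopbackHandler : public aspl::IORequestHandler
{
public:
    // EnableMixing = true: CoreAudio pre-mixes all clients and calls this
    // once per device with the mixed samples.
    void OnWriteMixedOutput(const std::shared_ptr<aspl::Stream>& stream,
        Float64 zeroTimestamp, Float64 timestamp,
        const void* bytes, UInt32 bytesCount) override
    {
        // push bytesCount bytes into the shared ring buffer
    }

    // EnableMixing = false: called once per writing client instead.
    void OnWriteClientOutput(const std::shared_ptr<aspl::Client>& client,
        const std::shared_ptr<aspl::Stream>& stream,
        Float64 zeroTimestamp, Float64 timestamp,
        const void* bytes, UInt32 bytesCount) override
    {
        // push bytesCount bytes into a per-client buffer
    }
};
```
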
> If so, should I have one `circularBufferReadIndex` per client?

Depends on your goals. If you disable mixing, you can store samples from each client in its own buffer.
Then you can, for example, do some routing: when client X reads samples, you may decide that it reads from client Y's buffer.
Or you can enable mixing and let all clients write to one shared buffer and read from that same buffer.
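A rough illustration of the per-client variant; the `RingBuffer` type and the routing rule are placeholders, not part of libASPL:

```cpp
#include <aspl/Client.hpp>

#include <map>
#include <memory>

// Placeholder for whatever ring buffer implementation you pick.
class RingBuffer;

// One buffer per client, keyed by the client handle that libASPL passes to
// the I/O callbacks. Inserting into the map allocates, so in a real driver
// you'd populate it from the non-real-time client add/remove hooks, not
// from the I/O callbacks themselves.
std::map<std::shared_ptr<aspl::Client>, std::shared_ptr<RingBuffer>> clientBuffers;

// Routing becomes a lookup: when a client reads, decide whose buffer it
// consumes from. Identity routing (each client reads its own) shown here.
std::shared_ptr<RingBuffer> BufferFor(const std::shared_ptr<aspl::Client>& reader)
{
    return clientBuffers[reader];
}
```
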
> Right now there’s a huge time gap between audio going in and coming back out. I suppose that’s because `circularBufferWriteIndex` and `circularBufferReadIndex` are out of alignment. So perhaps I shouldn’t have `circularBufferReadIndex`s at all? But then how would I keep track of where each client should be in the `circularBuffer`?

Like 100 seconds (48000 × 100 samples)? The size of your circular buffer is your latency. I don't think you can avoid having one entirely, but you can make the buffer smaller.
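To make the arithmetic concrete (the numbers are only examples):

```cpp
// A buffer of N frames at S frames/second holds up to N / S seconds of
// audio, which is the worst-case gap between writing and reading:
//   48000 * 100 frames at 48 kHz -> ~100 seconds of latency
//   960 frames at 48 kHz         -> ~20 ms of latency
constexpr unsigned kSampleRate = 48000;
constexpr unsigned kBufferFrames = 960; // 48000 * 0.020
```
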
> Doing all this circular buffer management by hand seems silly. For one thing, I suppose it’s far from being thread-safe. What data structure implementation do you recommend? Is this what libASPL’s `DoubleBuffer` is for?

TBH, I don't remember whether CoreAudio calls the read and write operations on one thread or on different ones. I would assume they're all called on the same thread, and thus they don't need to be thread-safe. But don't take my word for it; better check the docs or test it (you can print the thread id from the read and write callbacks and check whether it's always the same). If you figure this out, it would be nice if you could share the results.
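One way to run that experiment on macOS (logging from a real-time callback is acceptable for a one-off test, though not something to ship):

```cpp
#include <pthread.h>

#include <cstdint>
#include <cstdio>

// Call this from both the write and read callbacks and compare the
// printed ids to see whether they run on the same thread.
static void LogCallbackThread(const char* name)
{
    uint64_t tid = 0;
    pthread_threadid_np(nullptr, &tid); // id of the calling thread on macOS
    std::fprintf(stderr, "%s called on thread %llu\n", name,
        static_cast<unsigned long long>(tid));
}
```
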
`DoubleBuffer` is a different thing. It's not a ring buffer; it stores a single value and provides a lock-free getter, to avoid blocking the real-time thread that calls the getter.
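A hypothetical illustration of that use case, assuming a `Get()`/`Set()`-style interface; check `aspl/DoubleBuffer.hpp` for the actual API:

```cpp
#include <aspl/DoubleBuffer.hpp>

// A single value shared between a control thread and the real-time thread.
aspl::DoubleBuffer<double> currentGain;

// Control thread (allowed to block):
//     currentGain.Set(0.5);
//
// Real-time I/O callback (must never block):
//     double gain = currentGain.Get();
```
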
Yes, there are numerous ready-to-use ring buffer implementations; a quick search turns up plenty of examples.
They provide different guarantees: some are not thread-safe, some are thread-safe but not lock-free, some are lock-free but only SPSC (single-producer/single-consumer), etc.
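To show the kind of structure these libraries provide, here is a minimal lock-free SPSC ring buffer sketch; prefer a vetted implementation in production:

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Single-producer/single-consumer: exactly one thread calls Push() and
// exactly one thread calls Pop(). Capacity must be at least 2; one slot is
// always left empty to distinguish "full" from "empty".
class SpscRing
{
public:
    explicit SpscRing(size_t capacity)
        : buf_(capacity)
    {
    }

    bool Push(float sample)
    {
        const size_t w = write_.load(std::memory_order_relaxed);
        const size_t next = (w + 1) % buf_.size();
        if (next == read_.load(std::memory_order_acquire)) {
            return false; // full
        }
        buf_[w] = sample;
        write_.store(next, std::memory_order_release);
        return true;
    }

    bool Pop(float& sample)
    {
        const size_t r = read_.load(std::memory_order_relaxed);
        if (r == write_.load(std::memory_order_acquire)) {
            return false; // empty
        }
        sample = buf_[r];
        read_.store((r + 1) % buf_.size(), std::memory_order_release);
        return true;
    }

private:
    std::vector<float> buf_;
    std::atomic<size_t> write_{0};
    std::atomic<size_t> read_{0};
};
```
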
> My plan is to have a way for the user to create devices dynamically. I suppose it’s okay to create devices at any time and just `AddDevice()` and `RemoveDevice()` them as needed, right?

Yep.
> From what I read in libASPL’s README & `CoreAudio/AudioServerPlugIn.h`, the plugin runs in a sandbox. What’s the best way to communicate with the plugin to add/remove devices? I was thinking of spinning up an HTTP server listening on a socket in the temporary directory right from the plugin process. Is this even viable? Is there a better idea?

This makes sense. You can also use gRPC or XPC (an Apple-specific mechanism). Personally, I'd prefer gRPC in this specific case.
In general, all mechanisms based on sockets or shared memory should work.
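As an illustration of the socket approach, a bare-bones Unix domain socket listener; the path and the command handling are hypothetical, the coreaudiod sandbox may constrain where the plugin can create files, and this should run on its own thread:

```cpp
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

#include <cstring>

void RunControlSocket()
{
    const char* path = "/tmp/mydriver.sock"; // hypothetical location

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) {
        return;
    }

    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    std::strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    unlink(path); // remove a stale socket from a previous run

    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0 ||
        listen(fd, 1) < 0) {
        close(fd);
        return;
    }

    for (;;) {
        int conn = accept(fd, nullptr, nullptr);
        if (conn < 0) {
            continue;
        }
        char cmd[256] = {};
        if (read(conn, cmd, sizeof(cmd) - 1) > 0) {
            // parse cmd here and call plugin->AddDevice() / RemoveDevice()
        }
        close(conn);
    }
}
```
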
> Still related to the sandbox: How do I store the devices that should be created so that the configuration is persisted across runs of Core Audio? Should I use the so-called “Storage Operations” in `CoreAudio/AudioServerPlugIn.h`? Is there an abstraction for it in libASPL that I couldn’t find?

I don't know about this API, but from the comment it seems that's what you need: “Note that the host provides a means for the plug-in to store and retrieve data from persistent storage.”
libASPL does not provide wrappers for this.
It would be a suitable feature for libASPL though, so I'd be happy to accept a pull request :)
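For reference, the storage operations live on the host interface defined in `CoreAudio/AudioServerPlugIn.h`. This sketch assumes you've kept the `AudioServerPlugInHostRef` that CoreAudio hands the driver at initialization:

```cpp
#include <CoreAudio/AudioServerPlugIn.h>
#include <CoreFoundation/CoreFoundation.h>

// Persist a property list under a key; coreaudiod stores it across restarts.
void SaveDeviceList(AudioServerPlugInHostRef host, CFPropertyListRef devices)
{
    host->WriteToStorage(host, CFSTR("devices"), devices);
}

// Read it back on startup; returns nullptr if nothing was stored yet.
// Per the CoreFoundation "Copy" rule, the caller owns the returned plist.
CFPropertyListRef LoadDeviceList(AudioServerPlugInHostRef host)
{
    CFPropertyListRef devices = nullptr;
    if (host->CopyFromStorage(host, CFSTR("devices"), &devices) !=
        kAudioHardwareNoError) {
        return nullptr;
    }
    return devices;
}
```
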
> How do I control bit depth? `DeviceParameters` has a way of controlling the sample rate, but not the bit depth…

It's a property of `Format`, which is a property of `Stream`: https://github.com/gavv/libASPL/blob/main/include/aspl/Stream.hpp#L50
You may pass `StreamParameters` to the `Stream` constructor, or use `SetPhysicalFormatAsync()` or `SetAvailablePhysicalFormatsAsync()`.
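For illustration, a sketch of requesting a 24-bit format, assuming `StreamParameters` exposes a `Format` field of type `AudioStreamBasicDescription` (see `aspl/Stream.hpp` for the actual definition):

```cpp
#include <aspl/Stream.hpp>

aspl::StreamParameters streamParams;
streamParams.Format.mSampleRate = 48000;
streamParams.Format.mFormatID = kAudioFormatLinearPCM;
streamParams.Format.mFormatFlags =
    kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
streamParams.Format.mBitsPerChannel = 24; // bit depth lives here
streamParams.Format.mChannelsPerFrame = 2;
streamParams.Format.mBytesPerFrame = 2 * 3; // channels * bytes per sample
streamParams.Format.mFramesPerPacket = 1;
streamParams.Format.mBytesPerPacket = 2 * 3;
// Then pass streamParams to the Stream constructor.
```
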
> Can libASPL synchronize the clock with other devices or will I run into issues similar to https://github.com/ExistentialAudio/BlackHole/discussions/27?

Nope, currently libASPL does not add any logic on top of CoreAudio and does not touch your samples.
Hi @gavv (and other authors),
First, congratulations on the fantastic job! This library is awesome! I’m getting my feet wet in C++ and lower-level programming (I’m mostly used to web development, programming-language theory, and so forth), and I was getting a bit frustrated with the amount of complexity in https://developer.apple.com/documentation/coreaudio/creating_an_audio_server_driver_plug-in, but then I found libASPL and managed to get close to a working prototype in a couple hours 👏
I’d love it if you could give me a couple of pointers to continue.
I’m building a loopback device. Similar to https://github.com/ExistentialAudio/BlackHole, https://github.com/mattingalls/Soundflower, https://rogueamoeba.com/loopback/, and so forth. Here’s as far as I’ve managed to go:
Code

```cpp
#include …
```

To my surprise, audio is getting in and coming back out! 😁
But there are plenty of things I don’t understand yet:

- Am I right in thinking that `OnWriteMixedOutput()` is called once per device, and that `OnReadClientInput()` is called once per client?
- If so, should I have one `circularBufferReadIndex` per client?
- Right now there’s a huge time gap between audio going in and coming back out. I suppose that’s because `circularBufferWriteIndex` and `circularBufferReadIndex` are out of alignment. So perhaps I shouldn’t have `circularBufferReadIndex`s at all? But then how would I keep track of where each client should be in the `circularBuffer`?
- Doing all this circular buffer management by hand seems silly. For one thing, I suppose it’s far from being thread-safe. What data structure implementation do you recommend? Is this what libASPL’s `DoubleBuffer` is for?
- My plan is to have a way for the user to create devices dynamically. I suppose it’s okay to create devices at any time and just `AddDevice()` and `RemoveDevice()` them as needed, right?
- From what I read in libASPL’s README & `CoreAudio/AudioServerPlugIn.h`, the plugin runs in a sandbox. What’s the best way to communicate with the plugin to add/remove devices? I was thinking of spinning up an HTTP server listening on a socket in the temporary directory right from the plugin process. Is this even viable? Is there a better idea?
- Still related to the sandbox: How do I store the devices that should be created so that the configuration is persisted across runs of Core Audio? Should I use the so-called “Storage Operations” in `CoreAudio/AudioServerPlugIn.h`? Is there an abstraction for it in libASPL that I couldn’t find?
- How do I control bit depth? `DeviceParameters` has a way of controlling the sample rate, but not the bit depth…
- Can libASPL synchronize the clock with other devices, or will I run into issues similar to BlackHole?
Thank you very much in advance.