Closed geovannimp closed 2 years ago
Multiple endpoints are not supported, at least not directly. To do what you're wanting you'll need to think outside the box a bit.
An idea off the top of my head:
1) Initialize a ma_device object for "Headset" and another for "Speaker".
2) Implement a custom node that represents the "Headset" node in your diagram. In this node, take the input data and write it to a buffer. Output silence from the node.
3) In the data callback for your "Headset" device, read from the buffer you wrote to in your custom node in point 2.
4) In the data callback for your "Speaker" device, read straight from the node graph (ma_node_graph_read_pcm_frames()).
With this idea, you'll need to wire up the output of the custom "Headset" node (point 2) to the node graph's endpoint so that it actually gets processed. Since you'll be outputting silence from this node, nothing will get mixed (but you will still incur the CPU cost - maybe something for me to look at later). I'm also not sure how synchronization would work with this since you'll have two different devices running independently of each other with possibly slightly different timings.
Happy to elaborate further on this if anything wasn't clear. This section in the documentation covers custom nodes: https://miniaud.io/docs/manual/index.html#NodeGraph. I can give you some advice on that if need be.
Thanks, this is really similar to what I had in mind.
I think synchronization is not a problem, at least now.
Is ma_node_graph_read_pcm_frames() really necessary?
I was thinking about a custom node like ma_node_device that has (one or many) inputs and zero outputs.
With this, I would have a more generic implementation that enables me to have as many outputs as I want.
Do you see a problem doing this way?
Yes, you'll need to call ma_node_graph_read_pcm_frames() at some point because that is what triggers the processing of all other nodes (it reads from the inputs to the endpoint node, which read from their inputs, etc., etc.) and gives you the final output. I just figure you might as well call it from your primary device's data callback.
It doesn't make sense to have a node with zero outputs. All nodes must have an output or else they can't be connected to the graph and therefore can't be processed. All nodes must ultimately be connected to the endpoint. In case it wasn't clear, the node graph uses a pulling-style system where you pull from the endpoint, which pulls from its input nodes, which pull from their input nodes, etc. You need something to actually trigger that process, and that's what ma_node_graph_read_pcm_frames() is for.
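For reference, a minimal sketch of what driving the graph from the primary device's data callback might look like. This assumes the graph was initialized with the same channel count and f32 format as the device; g_nodeGraph and speaker_data_callback are illustrative names, not part of miniaudio.

```c
/* Hypothetical data callback for the primary "Speaker" device. Reading from
   the node graph here is what pulls data through every node in the graph. */
static ma_node_graph g_nodeGraph; /* assumed to be initialized elsewhere */

static void speaker_data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    /* Pull frameCount frames through the graph straight into the device's
       output buffer. The last parameter can receive the number of frames
       actually read; NULL if you don't need it. */
    ma_node_graph_read_pcm_frames(&g_nodeGraph, pOutput, frameCount, NULL);

    (void)pDevice;
    (void)pInput;
}
```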
I'm suspecting what you've got in your head is the idea of multiple endpoints where you read from one in one of your device's data callbacks, and then read from another in another device's data callback. That's just not natively supported in miniaudio's node graph system unless you do some kind of workaround like I outlined earlier. I've added an item to my TODO list to investigate the viability of multiple endpoints, but there's no guarantees whatsoever I'll commit to it or any kind of timeframe.
Thank you @mackron, I'll probably go with the 'buffer cloning' idea.
I didn't expect this to be implemented any time soon; I just opened a feature request because it was the closest issue type to my question.
I can keep this issue updated if I get something working, since it might help someone in the future. :D
I actually experimented with this the other day after I wrote that earlier response and I don't think I'll be supporting multiple endpoints unfortunately. It's actually quite complicated to manage this in a generic way and it enforces complications onto the user which will just result in questions and invalid bug reports that I'll need to deal with. In particular, you would need to call ma_node_graph_read_pcm_frames() with a consistent frame count for each endpoint and I just know people will stuff this up to no end.
Another complication is with triggering a reprocessing of the graph. You would need to make sure all of your reads are structured and ordered properly or else you'd glitch. Consider a scenario with 2 endpoints:
1) Read from endpoint 0 - This triggers a processing of the graph for the specified frame count.
2) Read from endpoint 1 - This does not trigger a full reprocessing because it reads from cached data in the splitter which was filled in step 1.
3) Read from endpoint 0 - This triggers a fresh processing.
This is a correct order of processing. Now consider if the user messes up the ordering:
1) Read from endpoint 0 - Process the graph.
2) Read from endpoint 0 - Process the graph again.
3) Read from endpoint 1 - Glitches because it missed the processing from step 1.
There are just too many avenues for error in supporting this and I don't want to deal with the bug reports and questions. I'm open to suggestions from the community for ideas on how I could do a less error-prone API for this though.
This node graph system is hot off the press so I'm certainly interested to hear any kind of suggestions and feedback, good or bad. Looking forward to seeing what you and the community come up with!
Something that might be useful with this proposed graph setup is a new flag I've just added to the dev branch called MA_NODE_FLAG_SILENT_OUTPUT. It's set in the node's vtable and it gives miniaudio a hint that it should treat the output as silence which lets it do a small optimization by not spending any CPU time mixing it. If you were to use this, it's best to leave the output buffer and output frame count unmodified in the custom node's processing callback.
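As a hedged sketch, a custom "Headset" node's vtable using this flag might look like the following. The callback body and the write_to_headset_buffer helper are assumptions for illustration; check the dev branch headers for the exact vtable layout and flag name.

```c
/* Hypothetical processing callback for the custom "Headset" node. It copies
   the input into a shared buffer (write_to_headset_buffer is an assumed
   helper) and, per the advice above, leaves the output buffer and frame
   count untouched so MA_NODE_FLAG_SILENT_OUTPUT can skip the mix. */
static void headset_node_process(ma_node* pNode, const float** ppFramesIn, ma_uint32* pFrameCountIn, float** ppFramesOut, ma_uint32* pFrameCountOut)
{
    write_to_headset_buffer(ppFramesIn[0], pFrameCountIn[0]); /* assumed helper */

    (void)pNode;
    (void)ppFramesOut;
    (void)pFrameCountOut;
}

static ma_node_vtable g_headsetNodeVTable = {
    headset_node_process,
    NULL, /* onGetRequiredInputFrameCount */
    1,    /* one input bus */
    1,    /* one output bus (required so the node can attach toward the endpoint) */
    MA_NODE_FLAG_SILENT_OUTPUT
};
```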
I think the original question has been answered so I might go ahead and move this one over to the discussion section so it can act as a reference for those who have a similar question.
Hey, first thanks for the amazing project.
I'm trying to use the node_graph, but I'm not sure about the correct way to implement an output to two different devices.
I'm working on DJ mixing software and I need to have a "pre-listen". The idea is to hear the music even if the primary output is muted.
I have the following diagram that shows what I'm trying to do:
The diagram shows the whole idea, but my problem is just with the final part: send the audio signal to two devices after the splitter.