RustAudio / coreaudio-rs

A friendly rust interface to Apple's Core Audio API.
Apache License 2.0

No input in `feedback` and `feedback_interleaved` #116

Open rosingrind opened 1 month ago

rosingrind commented 1 month ago

Summary

The feedback examples from 0.12.0 are not working: no audio data is captured, even though the mic device is opened (the orange dot indicator appears in the macOS status bar). Steps to reproduce:

  1. Create new project with cargo new --bin
  2. Add coreaudio-rs = "0.12.0" to dependencies in Cargo.toml
  3. Copy https://github.com/RustAudio/coreaudio-rs/blob/b671130aca0e3eef25e194e2bfa66a8ad58a8829/examples/feedback.rs or https://github.com/RustAudio/coreaudio-rs/blob/b671130aca0e3eef25e194e2bfa66a8ad58a8829/examples/feedback_interleaved.rs into src/main.rs
  4. Run it: you'll see only the output cb {} frames logs, and none of the input cb {} frames logs

Info

simlay commented 1 month ago

Hi! Thanks for the report! While I am kinda the active maintainer of this repo, I'll admit that I don't know a ton about the feedback examples. Looking at the git history, @HEnquist (sorry for the tag, feel free to tell me it's unwanted) authored those examples 3 years ago. 3 years is a bit of time but maybe Henrik has got some ideas.

My intuition is that macOS has changed since #83. Similarly, iOS 17 has AVSession issues in certain cases. If you were to debug this more and submit a PR, it'd be very welcome.

HEnquist commented 1 month ago

I'll take a look. I'm not aware of any changes in macOS that cause trouble and I have not needed to make any changes to my own projects to run on the latest macOS versions. I have no idea about iOS.

It's quite common to struggle with permissions. Have you allowed the terminal app to access the microphone?

HEnquist commented 1 month ago

I just tried the feedback example and it works fine here on Sonoma.

This example is perhaps a bit too simple: it runs at 44.1 kHz, but it does not try to switch the capture device to that sample rate. If the default capture device (likely the built-in microphone) is set to another rate, the input callback won't get called.

So, open Audio MIDI Setup and make sure that both the default capture and playback devices are set to 44.1 kHz. Or use another value, and adjust the SAMPLE_RATE constant in the example accordingly.

rosingrind commented 1 month ago

It seems the SAMPLE_RATE const could be replaced by querying the input device's sample rate at runtime:

    let sample_rate: f64 = input_audio_unit.get_property::<f64>(
        kAudioUnitProperty_SampleRate,
        Scope::Input,
        Element::Input,
    )?;

and then using it in the in_stream_format and out_stream_format structs, or setting kAudioUnitProperty_SampleRate manually before starting the output audio unit:

    output_audio_unit.set_property(
        kAudioUnitProperty_SampleRate,
        Scope::Input,
        Element::Output,
        Some(&sample_rate),
    )?;

The biggest problem with these examples is that they're not "just works" examples. Whether they run depends on luck (your default input device happens to be set to 44100.0), or on knowing just enough to make the fix yourself. This should probably be changed, or at least pointed out directly in the examples' comments?

rosingrind commented 1 month ago

Anyway thanks, it works now! Also fyi @simlay, a bit of UX feedback:

  1. AudioUnit::set_sample_rate() is a bit misleading: it uses Scope::Input and Element::Output by default, which prevents setting the sample rate on Element::Input (the default mic, for example). I know that you can't change Scope::Input on Element::Input (the hardware mic) or Scope::Output on Element::Output (the hardware speakers), and there are probably reasons for that. But calling set_sample_rate() on a mic device, seeing via logging that the change went through, and then observing no actual difference in audio playback and behavior is misleading
  2. Are there any plans for, or work done on, supporting the Audio Graph API in coreaudio-rs? I can't figure out how to convert the sample rate on my hardware; is it really possible without using the coreaudio-sys Graph API directly? I mean I need AudioUnit::new(FormatConverterType::AUConverter) and:
    • either pass data from input_audio_unit.set_input_callback to convert_audio_unit.set_render_callback, and from convert_audio_unit.set_input_callback to output_audio_unit.set_render_callback (this is not possible; there's no way to create convert_audio_unit.set_input_callback)
    • or make an AUGraph, chain the devices input_audio_unit -> convert_audio_unit -> output_audio_unit, and get it done entirely on hardware (could be wrong? idk, may I ask your opinion, @HEnquist?)
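For reference, the second option might look roughly like the following with raw coreaudio-sys calls. This is a minimal, untested sketch: it only shows the node wiring, and omits enabling input on the HAL unit, the format setup, and all error checking. The function name build_graph is hypothetical. Note also that Apple has deprecated AUGraph in favor of AVAudioEngine.

```rust
// Sketch only: build an AUGraph chaining HAL input -> AUConverter -> default
// output, using raw coreaudio-sys calls. Untested; OSStatus results ignored.
use coreaudio_sys::*;
use std::mem::MaybeUninit;

unsafe fn build_graph() -> AUGraph {
    let mut graph = MaybeUninit::<AUGraph>::uninit();
    NewAUGraph(graph.as_mut_ptr());
    let graph = graph.assume_init();

    // Helper to fill in an AudioComponentDescription for an Apple unit.
    let desc = |ty: u32, sub: u32| AudioComponentDescription {
        componentType: ty,
        componentSubType: sub,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0,
    };

    let (mut input, mut conv, mut output): (AUNode, AUNode, AUNode) = (0, 0, 0);
    AUGraphAddNode(graph, &desc(kAudioUnitType_Output, kAudioUnitSubType_HALOutput), &mut input);
    AUGraphAddNode(graph, &desc(kAudioUnitType_FormatConverter, kAudioUnitSubType_AUConverter), &mut conv);
    AUGraphAddNode(graph, &desc(kAudioUnitType_Output, kAudioUnitSubType_DefaultOutput), &mut output);

    // Connect the HAL unit's capture side (output element 1) into the
    // converter, then the converter into the output unit's input element 0.
    AUGraphConnectNodeInput(graph, input, 1, conv, 0);
    AUGraphConnectNodeInput(graph, conv, 0, output, 0);

    AUGraphOpen(graph);
    AUGraphInitialize(graph);
    graph // caller would AUGraphStart(graph) and later dispose of it
}
```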
HEnquist commented 1 month ago

> The biggest problem with these examples is that they're not "just works" examples. It depends on luck (if your default input device sample rate is set to 44100.0), or you know just enough to make a fix yourself - this probably should be changed or directly pointed out in comments of these examples?

Yes, there is certainly room for improvement. There is a challenge in that we can't know what devices have been set as the defaults and how they are configured. I think it would be good to mention this somewhere, and recommend running the examples with the built-in devices chosen as defaults. But it would be good to at least try switching the sample rate, to show how it's done.
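A rough sketch of what that could look like in the example, reusing the get_property/set_property pattern shown earlier in this thread (the helper name align_sample_rates is hypothetical, and this is untested):

```rust
// Sketch: query the capture side's actual rate, and if it differs from the
// hard-coded SAMPLE_RATE, run the output unit at the device's rate instead.
// Untested; assumes coreaudio-rs's AudioUnit::{get_property, set_property}.
use coreaudio::audio_unit::{AudioUnit, Element, Scope};
use coreaudio::sys::kAudioUnitProperty_SampleRate;

const SAMPLE_RATE: f64 = 44_100.0;

fn align_sample_rates(
    input_audio_unit: &AudioUnit,
    output_audio_unit: &mut AudioUnit,
) -> Result<(), coreaudio::Error> {
    // Ask the input unit what rate the capture device is actually running at.
    let input_rate: f64 = input_audio_unit.get_property(
        kAudioUnitProperty_SampleRate,
        Scope::Input,
        Element::Input,
    )?;

    if (input_rate - SAMPLE_RATE).abs() > f64::EPSILON {
        println!("capture runs at {} Hz, not {} Hz; following the device", input_rate, SAMPLE_RATE);
        output_audio_unit.set_property(
            kAudioUnitProperty_SampleRate,
            Scope::Input,
            Element::Output,
            Some(&input_rate),
        )?;
    }
    Ok(())
}
```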

> 2. Are there any plans/work done on coreaudio-rs to support any Audio Graph API?

I can only answer for myself here. I haven't needed this so I haven't looked at it. But IMO it would make sense to include support for this.

> • or make an AUGraph, chain devices input_audio_unit -> convert_audio_unit -> output_audio_unit and get it done entirely on hardware (could be wrong? idk, may I ask your opinion, @HEnquist?)

I think this second option is the correct way to do this, but I have no experience with actually doing it. I usually switch the hardware sample rate to the value I want, and avoid the need for conversions.
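For completeness, switching the hardware rate itself is a device-level (AudioObject) operation rather than an audio-unit property. A rough sketch with raw coreaudio-sys calls (untested; the function name is hypothetical, and error handling is omitted):

```rust
// Sketch: set a device's nominal sample rate via the AudioObject API.
// Untested. The device must support the requested rate, and the change is
// asynchronous, so real code would listen for the property-changed event
// before relying on the new rate.
use coreaudio_sys::*;

unsafe fn set_nominal_sample_rate(device_id: AudioDeviceID, rate: f64) -> OSStatus {
    let addr = AudioObjectPropertyAddress {
        mSelector: kAudioDevicePropertyNominalSampleRate,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMaster,
    };
    AudioObjectSetPropertyData(
        device_id,
        &addr,
        0,                                  // no qualifier data
        std::ptr::null(),
        std::mem::size_of::<f64>() as u32,  // size of the rate value
        &rate as *const f64 as *const _,
    )
}
```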