Closed: oshibka404 closed this issue 4 years ago
According to the faust2firefox-generated block diagram, my Faust code doesn't have any inputs. So as far as I can tell, it shouldn't call methods accessing the device's microphone.
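(For what it's worth, the same thing can be checked programmatically; a minimal sketch, assuming only the standard Faust `dsp` base class from faust/dsp/dsp.h, nothing specific to my project:)

```cpp
// Minimal sketch: the compiled DSP reports its own channel counts.
// Assumes only the standard Faust dsp base class (faust/dsp/dsp.h).
#include "faust/dsp/dsp.h"

static bool dspNeedsAudioInput(dsp* d)
{
    // 0 inputs means the render callback never reads from the microphone.
    return d->getNumInputs() > 0;
}
```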
According to the launch logs, audioCategory is set to kAudioSessionCategory_MediaPlayback and the hardware input channel count is 0:
```
2020-06-29 19:01:32.165687+0200 Runner[2135:119121] Metal API Validation Enabled
SetParameters fDevNumInChans = 0 fDevNumOutChans = 2 bufferSize = 256 samplerate = 44100
AudioCategory kAudioSessionCategory_MediaPlayback
Get hw input channels 0
Get hw output channels 1
Get hw sample rate 44100.000000
Get hw buffer duration 0.023220
preferredPeriodDuration 0.005805
preferredPeriodDuration 0.005805 actualPeriodDuration 0.005805
inputLatency in sec : 0.000000
outputLatency in sec : 0.012766
Get kAudioUnitProperty_MaximumFramesPerSlice 1156
- - - - - - - - - - - - - - - - - - - -
Sample Rate:0.000000
Format ID:mcpl
Format Flags:29
Bytes per Packet:4
Frames per Packet:1
Bytes per Frame:4
Channels per Frame:2
Bits per Channel:32
- - - - - - - - - - - - - - - - - - - -
- - - - - - - - - - - - - - - - - - - -
Sample Rate:0.000000
Format ID:mcpl
Format Flags:29
Bytes per Packet:4
Frames per Packet:1
Bytes per Frame:4
Channels per Frame:2
Bits per Channel:32
- - - - - - - - - - - - - - - - - - - -
```
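For completeness, here is a small diagnostic I can drop into the app to read the same values back at runtime. It uses the (deprecated) AudioSession C API that kAudioSessionCategory_MediaPlayback comes from; the helper itself is my own sketch, not part of Faust:

```cpp
// Diagnostic sketch (my own, not part of Faust): reads back the session
// category and hardware channel counts reported in the log above, using the
// deprecated AudioSession C API.
#include <AudioToolbox/AudioToolbox.h>
#include <cstdio>

static void dumpAudioSessionState()
{
    // The Faust audio layer normally initializes the session already; a second
    // call just returns kAudioSessionAlreadyInitialized and is ignored here.
    AudioSessionInitialize(NULL, NULL, NULL, NULL);

    UInt32 category = 0;
    UInt32 size = sizeof(category);
    if (AudioSessionGetProperty(kAudioSessionProperty_AudioCategory, &size, &category) == noErr) {
        printf("AudioCategory: %s\n",
               (category == kAudioSessionCategory_MediaPlayback) ? "MediaPlayback"
             : (category == kAudioSessionCategory_PlayAndRecord) ? "PlayAndRecord"
             : "other");
    }

    UInt32 inChans = 0, outChans = 0;
    Float64 sampleRate = 0;

    size = sizeof(inChans);
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareInputNumberChannels, &size, &inChans);
    size = sizeof(outChans);
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareOutputNumberChannels, &size, &outChans);
    size = sizeof(sampleRate);
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate, &size, &sampleRate);

    printf("hw input channels %u, hw output channels %u, hw sample rate %f\n",
           (unsigned)inChans, (unsigned)outChans, sampleRate);
}
```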
But the app still requests microphone access on the first launch.
The low-level code is here if you want to debug it: https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/audio/coreaudio-ios-dsp.h
kAudioSessionCategory_MediaPlayback is supposed to be used with output-only programs, so I don't really know why the system asks for permission (see https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/audio/coreaudio-ios-dsp.h#L235).
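In other words, the step referenced there is roughly of this shape (a simplified sketch of the idea, not the actual contents of coreaudio-ios-dsp.h):

```cpp
// Simplified sketch of the category selection idea, not the actual file:
// an output-only DSP keeps the playback-only category; anything with inputs
// needs PlayAndRecord, which is what normally triggers the microphone prompt.
#include <AudioToolbox/AudioToolbox.h>

static OSStatus setSessionCategoryFor(int numInputChans)
{
    UInt32 category = (numInputChans == 0)
                    ? kAudioSessionCategory_MediaPlayback
                    : kAudioSessionCategory_PlayAndRecord;
    return AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                                   sizeof(category), &category);
}
```

If the category really is MediaPlayback when the unit starts (as the log above suggests), the prompt presumably comes from something else in the app asking for record permission.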
closed by mistake, sorry for the noise
The iOS application I am creating with faust2api is a synthesizer that currently doesn't use audio inputs at all. By default, when launched for the first time, the application asks the user for microphone access.
It still works fine whether you allow or deny access, but it affects the overall first-launch experience.
I'm wondering if there is a way to build the C++ API so that it doesn't require microphone access?
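For context, I drive the generated API in the usual way on the app side, with no explicit recording calls; roughly like this (a sketch assuming the stock DspFaust wrapper produced by faust2api, with a placeholder parameter path):

```cpp
// Sketch of typical app-side use of the faust2api-generated wrapper.
// "DspFaust.h", the constructor arguments and "/synth/freq" are placeholders /
// assumptions about the generated package, not verified against this project.
#include "DspFaust.h"

int main()
{
    DspFaust dsp(44100, 256);                  // sample rate, buffer size
    dsp.start();                               // starts CoreAudio rendering
    dsp.setParamValue("/synth/freq", 440.0f);  // placeholder parameter path
    // ... run the synth; nothing here ever asks for audio input ...
    dsp.stop();
    return 0;
}
```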
Thanks in advance.