mapbox / mapbox-navigation-ios

Turn-by-turn navigation logic and UI in Swift on iOS
https://docs.mapbox.com/ios/navigation/
863 stars · 313 forks

During turn-by-turn directions, the volume buttons should always affect the spoken instruction volume #1826

Open akitchen opened 6 years ago

akitchen commented 6 years ago

The phone's volume buttons currently only adjust the spoken instruction volume if toggled while an instruction is playing. Ideally we could configure the audio session differently so that the spoken instructions' volume can be adjusted at any time, as it can be difficult to adjust the volume during a relatively short instruction.

1ec5 commented 5 years ago

According to the AVAudioPlayer.volume documentation, the preferred way to expose the system audio volume setting is to display an MPVolumeView. That said, having the physical volume buttons affect the spoken instruction volume at all times would better match user expectations. I'm not sure what options exist other than the hack of always playing silent audio. šŸ™…ā€ā™‚ļø
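For anyone exploring the MPVolumeView route mentioned above, a minimal sketch (the view controller and frame are illustrative, not part of the SDK):

```swift
import MediaPlayer
import UIKit

final class VolumeControlsViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // MPVolumeView renders the system volume slider; while it is on
        // screen, the slider (and the hardware buttons) adjust the same
        // output volume the spoken instructions play at.
        let volumeView = MPVolumeView(frame: CGRect(x: 16, y: 80, width: 280, height: 44))
        view.addSubview(volumeView)
    }
}
```

Note that MPVolumeView only renders on a physical device; on the simulator it draws nothing.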

carstenhag commented 3 years ago

If I understand the issue description correctly, we are having the same issue.
Also, tapping the mute button while there's an ongoing voice instruction does not mute it.

Neither of these things are the case on mapbox-navigation-android.

carstenhag commented 2 years ago

Hello there, I went through some old tickets and saw this one. Can you please have a look at this? Thanks! :)

johnnewman commented 1 year ago

Hello! Any updates on this issue?

akitchen commented 1 year ago

None of the mapbox colleagues on this issue work at mapbox any longer, but I can say -- this is also how it works in Apple Maps due to how iOS treats these audio streams. You are likely going to need to solve this in your own application integration.

jeannustre commented 9 months ago

Hello,

I am interested in this topic as well, and was wondering what the best way of "solving it in our own application" would be.

I found this StackOverflow answer, which describes a way to fix this behaviour using a combination of the .playback and .ambient categories for the AVAudioSession.

To be precise, here is what I've found to work:

  1. We set managesAudioSession = false on our speech synthesizer to prevent the SDK from automatically re-configuring the session. In our case, this is a MultiplexedSpeechSynthesizer, but I guess it should work on "simple" synthesizers too.

  2. We set ourselves (in our case, a UIViewController) as its SpeechSynthesizingDelegate.

  3. Before the navigation starts, we do the following:

    let session = AVAudioSession.sharedInstance()
    try? session.setCategory(.ambient, options: .mixWithOthers)
    try? session.setActive(true)
  4. In the delegate's willSpeak function, we do the following:

    let session = AVAudioSession.sharedInstance()
    try? session.setActive(false, options: .notifyOthersOnDeactivation)
    try? session.setCategory(.playback, options: .duckOthers)
    try? session.setActive(true)

    This seems to effectively "prepare" the session for turn-by-turn directions, ducking any other audio currently playing. (I have only tested it with Spotify so far.)

  5. In the delegate's didSpeak function, we do the following:

    let session = AVAudioSession.sharedInstance()
    try? session.setActive(false, options: .notifyOthersOnDeactivation)
    try? session.setCategory(.ambient, options: .mixWithOthers)
    try? session.setActive(true)

    This allows any other playing audio (e.g. Spotify) to keep playing, unducked at this point, and the volume controls to keep affecting both the "main" volume and the volume of the next upcoming turn-by-turn instruction. This effectively gives us control of the instruction volume whenever we press the volume buttons, regardless of whether an instruction is currently playing.
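The steps above can be collected into a small helper to call from the delegate callbacks. This is only a sketch based on the snippets in this comment, not SDK API; the type name is made up:

```swift
import AVFoundation

/// Audio-session juggling for turn-by-turn instructions, per the steps above.
enum InstructionAudioSession {
    /// Step 3: before navigation starts, use a mixable ambient session so the
    /// hardware volume buttons track the media volume at all times.
    static func activateAmbient() {
        let session = AVAudioSession.sharedInstance()
        try? session.setCategory(.ambient, options: .mixWithOthers)
        try? session.setActive(true)
    }

    /// Step 4: call from the delegate's willSpeak callback. Switches to
    /// playback and ducks other audio (e.g. Spotify) for the instruction.
    static func prepareForInstruction() {
        let session = AVAudioSession.sharedInstance()
        try? session.setActive(false, options: .notifyOthersOnDeactivation)
        try? session.setCategory(.playback, options: .duckOthers)
        try? session.setActive(true)
    }

    /// Step 5: call from the delegate's didSpeak callback. Returns to a
    /// mixable ambient session so other audio resumes unducked and the volume
    /// buttons keep controlling the next instruction's volume.
    static func restoreAmbient() {
        let session = AVAudioSession.sharedInstance()
        try? session.setActive(false, options: .notifyOthersOnDeactivation)
        try? session.setCategory(.ambient, options: .mixWithOthers)
        try? session.setActive(true)
    }
}
```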

Notes:

  1. I am not sure the setActive calls around the setCategory calls are required here. I just took inspiration from the SO answer and did not try to refactor yet; they may very well be unnecessary.

  2. This behaviour seems to break when we use the setCategory overload that takes a mode: AVAudioSession.Mode parameter, regardless of what we set it to, be it .default or .voicePrompt (which would be the most logical in our case). If you try this in your app, make sure to use the setCategory overload that doesn't take a mode.
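To make note 2 concrete, these are the two setCategory overloads in question (both are real AVAudioSession API; the behaviour difference is only what's reported empirically above):

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()

// Overload without a mode: the variant reported to work above.
try? session.setCategory(.playback, options: .duckOthers)

// Overload with an explicit mode: reported to break the volume-button
// behaviour, regardless of the mode passed (.default, .voicePrompt, ...).
// try? session.setCategory(.playback, mode: .voicePrompt, options: .duckOthers)
```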

I am very interested in hearing feedback from any of you willing to try this.

I will of course report back if we find any caveats to this solution. Cheers!

jkgz commented 7 months ago

@jeannustre Thank you very much for the hints here. Your code is working for me so far. I experimented with the setActive calls, and they do seem to be necessary for the ducking and resuming of other audio to work correctly.

In willSpeak, if the first setActive call is not made, then the external audio stops (rather than ducks) when the speech starts.

In didSpeak, if the first setActive call is not made, then the external audio stays ducked.

I was looking for Apple documentation on this, but from my experiments it seemed that I needed to call setActive(false, options:) before calling setCategory for the new category to take effect. Curious what your experience has been with this code.