Open donmccurdy opened 7 years ago
I am quite sure that the story is:
I am not aware of any plans that this might change with iOS11.
There are some threads on Stack Overflow with workarounds that were all somehow 'fixed' by Apple. From Apple's standpoint this is the desired behaviour, because they don't want their users to max out their bandwidth unknowingly.
I can try to come up with some explanation and example code for Audio Sprites on the sound section and a reference to that in the FAQ, if desired.
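As a rough starting point, here is a minimal sketch of how such an audio sprite could work. The sprite names and timings below are invented for illustration; the only WebAudio-specific part is slicing one decoded buffer with `AudioBufferSourceNode.start(when, offset, duration)`:

```javascript
// One audio file holds several short effects; a map gives each
// effect's [startOffset, duration] in seconds (values are invented).
const SPRITES = { pop: [0.0, 0.4], ding: [0.5, 0.3] };

// Pure helper: resolve a sprite name to its playback window.
function spriteWindow(sprites, name) {
  if (!(name in sprites)) throw new Error('unknown sprite: ' + name);
  const [offset, duration] = sprites[name];
  return { offset, duration };
}

// Browser wiring (sketch): decode the file once, then slice per sprite.
// const src = ctx.createBufferSource();
// src.buffer = decodedSpriteBuffer;
// src.connect(ctx.destination);
// const w = spriteWindow(SPRITES, 'pop');
// src.start(0, w.offset, w.duration);
```

Because the single file is started by the user gesture, later slices can be played without further gestures.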
@dirkk0 thanks! Do you know if mobile Chrome on Android is a different story? I'm seeing some information suggesting gestures are also required there, but I haven't tried it.
I just did, and everything works without gestures on Android: http://curious-electric.com/w/experiments/aframe/audio/
(slightly unrelated - .wav file support seems broken in A-Frame 0.6.1 (or rather in ThreeJS 0.84): http://curious-electric.com/w/experiments/aframe/audio/index61.html I thought this was related to #2754 but it also fails on direct load (line 21))
[EDIT: links changed]
Thanks! Opened #2974. Not sure about the WAV issue; maybe another issue for that?
Re the .wav issue: I will try to find out whether this is a ThreeJS problem or not, and then add an issue either over there or here.
How would I proceed regarding the audiosprite example? This won't be a trivial one-liner that I can add to the docs, I'm afraid. Where would it make sense to add an example? Right now my sample code is on my personal webspace: http://curious-electric.com/aframe/audiosprite/
I could create a glitch or codepen from that?
An amusing workaround is to use speechSynthesis. I found that gaze-triggered interactions in A-Frame can trigger new speech events on iOS, if the user has at least once tapped the screen on an element that triggers speech. (e.g. a silent utterance on a START button)
For example, if you had a balloon-popping game, this would work:
speechUtterance = new SpeechSynthesisUtterance();
speechUtterance.lang = 'en';
speechUtterance.rate = 0.8;
speechUtterance.pitch = 1.6;
// Note: voice selection is done via the .voice property (picked from
// speechSynthesis.getVoices()), not by assigning a name string.
speechUtterance.voice = speechSynthesis.getVoices().find(v => v.name === 'Fiona') || null;
speechUtterance.text = "Pop!";
speechSynthesis.speak(speechUtterance);
It doesn't seem like much, but audio is so important to VR that even little sounds to confirm actions can go a long way. It's probably best to do real SFX for Android and desktop, then have this alternative up your sleeve for iOS.
What is the current state of this issue? I have tried calling resume() on the audioContext when the user clicks a DOM element. I can see that its state changes to "running", but I'm getting a black screen instead of entering VR. It would be great to have clear instructions on the best way to implement sound that is compatible with iOS. I'm using https://github.com/digaverse/aframe-resonance-audio-component to instantiate audio, and perhaps the init functions firing before the audio context is unlocked are causing issues.
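For reference, a minimal sketch of the resume-on-gesture pattern described above. The helper names and the button selector are illustrative, not from any particular project:

```javascript
// iOS creates AudioContexts in the 'suspended' state; only then is
// a resume() call needed.
function needsUnlock(state) {
  return state === 'suspended';
}

// Attach a one-shot gesture handler that resumes the context.
// resume() must be called synchronously inside the gesture handler.
function unlockOnGesture(ctx, el) {
  el.addEventListener('touchend', () => {
    if (needsUnlock(ctx.state)) {
      ctx.resume();
    }
  }, { once: true });
}

// Browser wiring (sketch; selector is hypothetical):
// const ctx = new (window.AudioContext || window.webkitAudioContext)();
// unlockOnGesture(ctx, document.querySelector('#enter-vr-button'));
```

If a library creates its own AudioContext during init, the same idea applies, but the context it created is the one that has to be resumed.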
It looks like howler.js mostly works on iOS. IIRC it works by pre-cooking a big soundSprite, so that may not work with the resonance component.
For anyone looking for a solution to this, I wrote a small A-Frame component for playing audio in iOS using howler.js, as mentioned above. Sounds are played when the camera moves within a certain distance of the object.
<a-scene>
  <a-assets>
    <audio id="mysound" src="mysound.mp3" preload="auto"></audio>
  </a-assets>
  <a-box src="images/box.png" audio="src: #mysound; loop: true;"></a-box>
</a-scene>
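For anyone curious what such a component could look like, here is a rough sketch. This is not the author's actual component: the component name, schema, and range check are invented, and it assumes A-Frame and howler.js are loaded in the page:

```javascript
// Pure range check, kept separate so it can be tested without a browser.
// Compares squared distances to avoid a sqrt per frame.
function withinRange(a, b, range) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return dx * dx + dy * dy + dz * dz <= range * range;
}

// Browser wiring (hypothetical component; only registered when A-Frame exists).
if (typeof AFRAME !== 'undefined') {
  AFRAME.registerComponent('proximity-audio', {
    schema: {
      src: { type: 'string' },
      range: { default: 3 },
      loop: { default: false }
    },
    init: function () {
      // howler.js plays via its own unlocked WebAudio context on iOS.
      this.howl = new Howl({ src: [this.data.src], loop: this.data.loop });
      this.playing = false;
    },
    tick: function () {
      const cam = this.el.sceneEl.camera.el.object3D.position;
      const pos = this.el.object3D.position;
      const near = withinRange(cam, pos, this.data.range);
      if (near && !this.playing) { this.howl.play(); this.playing = true; }
      if (!near && this.playing) { this.howl.pause(); this.playing = false; }
    }
  });
}
```

The play/pause toggle in tick() keeps the sound from restarting every frame while the camera stays in range.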
I just wanted to say this is a great idea. Many of my students are troubled by sound not working as expected in A-Frame. I created a "basic" tutorial for them that sets up a user gesture, loops through all "ambient sounds", and starts them after the gesture: https://glitch.com/edit/#!/aframe-1hr-intro?path=12_UserGesture.html%3A82%3A10
Maybe a quick starting point (for having an example of a workaround included) without having to bring in other libraries ...
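A minimal sketch of that start-everything-on-gesture pattern, without extra libraries. The `data-ambient` marker and helper names are invented for illustration and are not from the tutorial above:

```javascript
// Hypothetical convention: ambient sounds are <audio> elements marked
// with data-ambient="true".
function isAmbient(el) {
  return !!(el.dataset && el.dataset.ambient === 'true');
}

// Start every ambient element; returns how many were started.
function startAll(els) {
  let started = 0;
  els.filter(isAmbient).forEach(el => {
    if (typeof el.play === 'function') {
      el.play();
      started++;
    }
  });
  return started;
}

// Browser wiring: run once, on the first user gesture.
if (typeof document !== 'undefined') {
  document.addEventListener('click', () => {
    startAll(Array.from(document.querySelectorAll('audio')));
  }, { once: true });
}
```

Because play() is called inside the click handler, the gesture requirement is satisfied for all marked sounds at once.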
Lots of questions in Slack about audio support on mobile, whether there is a workaround for playing without a user gesture, etc. We should have a note, either in the FAQ or the sound component docs, explaining the current state of affairs. Personally, I'm not actually sure of the full story myself. I think that audio without a gesture is completely impossible on iOS (although you can use an audio sprite started with a gesture when the scene loads). Is that also true on Mobile Chrome? Or can the gesture requirement somehow be worked around there?
/cc @dirkk0 + #1463