Closed: frastlin closed this issue 4 years ago
I just did some more testing on Chrome on iOS and did the following:
that.panning.node = context.createStereoPanner()
Is there a function I can call to init Wad after a user event on the page?
Wad.js tries its best to resume the audio context automatically after a user interacts with the page, so it shouldn't be necessary, but you can call Wad.audioContext.resume() to manually start the audio context.
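For reference, the resume call is normally tied to a user gesture. Here's a minimal sketch of that pattern; the helper name and the click-handler wiring are mine, not Wad's internal code, and the helper takes any object with the AudioContext shape so it can be exercised outside a browser:

```javascript
// Resume a suspended audio context in response to a user gesture.
// `ctx` is any object shaped like an AudioContext ({ state, resume() }).
function resumeIfSuspended(ctx) {
  if (ctx.state === 'suspended') {
    return ctx.resume(); // resume() returns a Promise in the Web Audio API
  }
  return Promise.resolve();
}

// In a page, you would wire it to the first user gesture, e.g.:
// document.addEventListener('click', () => resumeIfSuspended(Wad.audioContext), { once: true });
```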
For your second point, Safari on iOS does not support StereoPanner nodes. You'll have to use 3D panning instead. Wad is supposed to detect that stereo panning is unavailable and automatically use 3D panning instead, but there was a bug in that code. If you pull down version 4.7.6, panning should work better on iOS.
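The detection presumably boils down to checking whether the context can create a stereo panner at all. A sketch of that kind of check (the function name is mine, not Wad's API):

```javascript
// Returns true when the context exposes createStereoPanner,
// false on platforms like iOS Safari that lack StereoPannerNode.
function hasStereoPanning(ctx) {
  return typeof ctx.createStereoPanner === 'function';
}

// Fall back to a 3D panner when stereo panning is missing:
// const node = hasStereoPanning(ctx) ? ctx.createStereoPanner() : ctx.createPanner();
```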
Thanks for raising this issue. I hope you're enjoying using Wad.js.
Yes, it works! I wouldn't say the following code pans from hard left to hard right, though, so a warning that the panner node isn't working would be very nice:
handleClick(e) {
    this.sound.play({
        panning: -1
    })
    setTimeout(this.slide(this.sound, 0.5, 1), 1900)
}

slide(sound, time, end) {
    return () => sound.setPanning(end, time)
}
Wait, MDN says PannerNode works on Safari on iOS 14.
The PannerNode that you linked is for 3D panning. Most of the properties and methods on that page are for positioning the sound source in 3D space. What you want is the StereoPannerNode, which is not available on iOS Safari.
Since stereo panning isn't actually available, the number you pass in for panning is the horizontal position of the sound source in 3D space. You can't pan a sound hard left with 3D panning, because even if a sound is coming from the left side of your head, you still hear it with both ears; it just sounds quieter in your right ear. You can use values larger than 1 or smaller than -1, but that'll also make the sound quieter, since it's further away.
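To see why values beyond ±1 get quieter: with the Web Audio default "inverse" distance model (refDistance = 1, rolloffFactor = 1), the gain falls off with distance roughly like this. This is a simplified sketch of the spec's formula; Wad's exact panner settings may differ:

```javascript
// Gain under the Web Audio "inverse" distance model:
// gain = refDistance / (refDistance + rolloffFactor * (max(distance, refDistance) - refDistance))
function inverseDistanceGain(distance, refDistance = 1, rolloffFactor = 1) {
  const d = Math.max(distance, refDistance);
  return refDistance / (refDistance + rolloffFactor * (d - refDistance));
}

inverseDistanceGain(1); // 1   -- a panning value of +/-1: full volume
inverseDistanceGain(2); // 0.5 -- a panning value of +/-2: half volume
```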
I don't think the simulated stereo panning function takes the user's listener position into account: [Wad.audioContext.listener.positionX.value, Wad.audioContext.listener.positionY.value, Wad.audioContext.listener.positionZ.value]
I now hear the sounds on iOS though, which is great!
OK, I am going crazy, how does one get the Wad.audioContext.listener position in Safari? I've tried: listener.pos, listener.position, listener.x, listener.X, listener, but everything is empty.
It's not you; I think Safari is crazy. If you check the browser compatibility table for the audio listener in Safari, you can see that Safari doesn't support any properties of the audio listener, except for speedOfSound and dopplerFactor, oddly enough.
As a workaround, you can assume that the listener's position starts at (0, 0, 0) and only changes when you call listener.setPosition(). If you keep track of what you set the position to, I think that should work.
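That bookkeeping could look something like this small wrapper. The names are mine and this is only a sketch, not Wad code; it assumes the position starts at (0, 0, 0) and only ever changes through setPosition():

```javascript
// Track the listener position ourselves, since Safari won't report it back.
function createListenerTracker(listener) {
  let position = [0, 0, 0]; // assumed starting position
  return {
    setPosition(x, y, z) {
      position = [x, y, z];
      if (listener && typeof listener.setPosition === 'function') {
        listener.setPosition(x, y, z); // forward to the real AudioListener
      }
    },
    getPosition() {
      return position.slice(); // return a copy so callers can't mutate our record
    },
  };
}
```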
Could Wad add a Wad.listener object to keep track of the audio listener and make the interface uniform?
Thanks for contributing this feature!
Hello, I am unable to get Wad working on the latest Safari, Firefox, or Chrome on iOS. Here is my link: https://frastlin.github.io/Nonvisual-Modeling-and-Mapping/magicalbridge Press the speaker test button to try it out. I did some searching around and found: https://stackoverflow.com/questions/46363048/onaudioprocess-not-called-on-ios11/46534088#46534088
It looks like there needs to be some "resuming" of the audio context after user interaction. It seems webkitAudioContext is handled, though.