I have multiple players on a page, and things are funky in mobile Safari in the following scenario:

- Press play on one player.
- While that sound is playing, press play on a second player (without pausing the first).

After doing that both players show a pause button, but only one sound is playing. When this happens the sound that was played first ends up stuck in "playing" mode, and it's impossible to get it to play again without reloading the page. If I pause the currently-playing sound before starting another one, everything works as expected.

The problem also extends past just two players. If there are 5 players on a page and you hit all 5 play buttons a single time each, they'll all show that they're currently playing.

I've been able to reproduce the problem on the docs site for this add-on, on this page: https://ember-stereo.com/docs/playing-sounds
Two players/buttons that have the same URL are supposed to stay in sync; that's what that docs page was demonstrating. That's by design from when this was first built for WNYC radio (and it's still pretty clutch for how I'm using it for other radio/audio applications).
But the pause issue you mention and it getting stuck is not supposed to happen. 🤔
Oh oh, I misunderstood. I see what you pointed out now on the docs; those are different URLs, so they should not do that.
While trying to work around this, I ended up retrieving the current sound and calling `pause` on it before trying to play a new sound. That didn't work until I added a slight delay after calling `pause`, and with the delay everything works great. So I think it must be some sort of timing issue.
Strangely (at least to me) a delay of only 1 millisecond consistently gets it to work, and without it, it consistently doesn't.
Here's what I'm doing that gets things to work in my app.
```js
import { service } from '@ember/service';
import { task, timeout } from 'ember-concurrency';

// ...inside the player component:
@service stereo; // also injected, since this.stereo.play is used below
@service currentlyPlaying;

async stopCurrent() {
  let currentlyPlayingBounce = this.currentlyPlaying.get('bounce');
  if (currentlyPlayingBounce && currentlyPlayingBounce.id != this.args.bounce.id) {
    let currentlyPlayingSound = currentlyPlayingBounce.get('sound');
    if (currentlyPlayingSound) {
      currentlyPlayingSound.pause();
      // For some reason this tiny delay makes things work in mobile Safari;
      // without it clips get "stuck" in "playing"
      await timeout(1);
    }
  }
}

togglePlaySoundTask = task(async () => {
  await this.stopCurrent();
  let { sound } = await this.stereo.play(this.identifier);
  this.args.bounce.set('sound', sound);
  this.currentlyPlaying.set('bounce', this.args.bounce);
});
```
The `currentlyPlaying` service doesn't implement any functionality; it's just a global stash to keep track of which bounce is current.
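For the curious, a minimal sketch of what such a service can look like (assuming Ember Octane idioms; illustrative rather than my exact code):

```js
// app/services/currently-playing.js
// A minimal sketch, assuming Ember Octane idioms:
// one tracked slot for the current bounce and nothing else.
import Service from '@ember/service';
import { tracked } from '@glimmer/tracking';

export default class CurrentlyPlayingService extends Service {
  @tracked bounce = null;
}
```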
Also weird: `await timeout(0)` makes things work too, but removing the `await timeout` call entirely makes it misbehave.
I think I found the issue, and it's related to an ancient section of code in the `NativeAudio` connection that instructs it to use a single shared `<audio>` element when on a mobile device (which I'm not sure is necessary anymore in 2023, honestly). Somewhere in the transferring of control of that element, an event wasn't getting fired.
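Roughly, the shared-element pattern in question looks like this (an illustrative sketch, not ember-stereo's actual code):

```js
// Illustrative sketch only; not ember-stereo's actual code. Older mobile
// browsers only allowed playback started from a user gesture, so libraries
// "unlocked" a single <audio> element once and reused it for every sound.
const sharedAudioElement = document.createElement('audio');

function playUrl(url) {
  // Handing the element to a new sound implicitly interrupts the old one.
  // Unless the library fires a pause/stop event for the previous sound at
  // this point, its UI can stay stuck in the "playing" state, which is
  // roughly the kind of event that was getting dropped here.
  sharedAudioElement.src = url;
  return sharedAudioElement.play();
}
```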
I just released 4.2.2, which should resolve that issue.
Give it a shot and see if it removes the need for your workaround!
Thank you, @jkeen! 4.2.2 has things working perfectly without my `stopCurrent` workaround. I appreciate the quick response, and thanks again for this awesome add-on!
@jagthedrummer great to hear, and glad you're finding it useful! Thanks for the top-notch bug reports.
Also, question for you… how are you generating those audio waveforms and displaying them on the frontend? It seems like we both have Ember + Rails apps that deal with audio, and I've made a couple of attempts at building something that attaches into ember-stereo to do that, but it never landed. Is that using BBC's peaks.js?
@jkeen that's wavesurfer.js displaying those waveforms.
I went with that at first because it can analyze the audio in the browser to generate the waveform, so I could integrate it without needing to set up backend processes to analyze audio files. But given that I'm trying to avoid downloading audio files before I need them, I did end up using audiowaveform in a backend process to pre-analyze the files.
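The pre-analysis step itself is just a CLI call along these lines (an illustrative invocation with placeholder filenames; flags as documented by audiowaveform):

```sh
# Emit 8-bit JSON peaks data suitable for handing to a frontend renderer.
# (Illustrative; audiowaveform accepts WAV/MP3/FLAC/Ogg input.)
audiowaveform -i bounce.wav -o bounce.json -b 8 --pixels-per-second 20
```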
Wavesurfer is intended to be both a visualizer and a player, so there are a few rough edges when trying to use it only for visualization, but so far I've been able to work around all of the sticking points.
One of my next big projects is going to be adding the ability to comment on specific timestamps and regions within a bounce, so it'll be interesting to see how that goes. It looks like Wavesurfer.js should be able to do everything I need, but if not I may investigate Peaks.js.
Cheers!
Oooh, it looks like wavesurfer has been updated since I last visited, and it's nicer than I remember! Thanks for pointing me to that again.
I built this thing to record live radio using node + ffmpeg: it records a stream in 10-second chunks and uploads each chunk to Backblaze B2 in a particular named format. On playback, Rails builds an HLS playlist (m3u8) with a manifest of signed URLs pointing to the consecutive audio chunks, and ember-stereo plays it back happily using hls.js.
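The generated playlist is, in spirit, something like this (a hand-written illustration with placeholder URLs and segment names, not actual output):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
https://b2.example.com/show/chunk-0001.ts?Authorization=...
#EXTINF:10.0,
https://b2.example.com/show/chunk-0002.ts?Authorization=...
#EXT-X-ENDLIST
```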
There are times when the end users want a single audio file of a section of broadcast, and I have a background job that pieces together all the audio and runs it through audiowaveform for the waveform data. But I never got the waveform-displaying part working on the frontend: what I really needed was for each chunk to be analyzed individually and the joined waveform data displayed on playback. I stalled out trying to figure out how to join the JSON or binary data that audiowaveform outputs into a single file, and I have yet to revisit it.
@jkeen Wavesurfer has an option to "pre-render" a waveform with just peaks data, and the peaks data in JSON format is just an array of numbers. So I'd guess that you could combine a few peaks arrays via JS and then pass that data (along with a duration) to wavesurfer and get it to render.
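Something like this rough, untested sketch is what I have in mind (it assumes wavesurfer v7's `peaks`, `duration`, and `normalize` options plus audiowaveform's JSON output format; all the variable names are placeholders):

```js
// Untested sketch: join per-chunk audiowaveform JSON into one peaks array
// and hand it to wavesurfer so it doesn't have to decode the audio itself.
import WaveSurfer from 'wavesurfer.js';

async function renderJoinedWaveform(chunkJsonUrls, fullAudioUrl, totalDuration) {
  const chunks = await Promise.all(
    chunkJsonUrls.map(async (url) => (await fetch(url)).json())
  );

  // audiowaveform puts its peak samples in a `data` array; if every chunk
  // was analyzed with the same settings, concatenating in order should work.
  const joinedPeaks = chunks.flatMap((chunk) => chunk.data);

  return WaveSurfer.create({
    container: '#waveform',
    url: fullAudioUrl,       // only fetched if/when playback starts
    peaks: [joinedPeaks],    // pre-rendered peaks, one channel
    duration: totalDuration, // seconds; required since nothing is decoded
    normalize: true,         // audiowaveform emits integers, not -1..1 floats
  });
}
```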
@jagthedrummer Oooh good suggestion! Thanks