Tymotey closed this issue 5 years ago
I think you should apply the filter on the streamer to avoid additional bandwidth cost for the presenter and for the server that distributes the stream data.
When using a desktop browser, the streamer will use an HTML5 audio element for playing the audio, so you need to force it to use the WebAudioAPI. This is still undocumented because it may change in the future.
var audioStreamer = new ScarletsAudioBufferStreamer(3, 100, true /*default: false*/);
// audioStreamer.playStream();
// Initialize your filter
var ppDelay = ScarletsMedia.pingPongDelay();
// `outputNode` must be initialized with an AudioNode.
// If it's left as false, the BufferSourceNode will
// be connected to `audioStreamer.audioContext.destination` directly.
audioStreamer.outputNode = ScarletsMedia.audioContext.createGain();
// Connect the filter
// Stream (source) -> Ping pong delay -> destination
audioStreamer.outputNode.connect(ppDelay.input);
ppDelay.output.connect(ScarletsMedia.audioContext.destination);
Thank you! I will try it later. But what about adding an oscilloscope on the streamer (or some way to catch the audio and perform changes on it)?
The received audio buffer will be connected to `audioStreamer.outputNode` when it's available.
Because of that, you can connect it to any audio node, including your oscilloscope.
audioStreamer.outputNode.connect(/* AudioNode */);
Yey! Thank you for the info. I was able to add both my analyzer (oscilloscope) and my own effect:
For the streamer:
// FILTERS
var biquadFilter = ScarletsMedia.audioContext.createBiquadFilter();
audioStreamer.outputNode.connect(biquadFilter);
biquadFilter.type = "lowshelf";
biquadFilter.frequency.setTargetAtTime(100, ScarletsMedia.audioContext.currentTime, 0);
biquadFilter.gain.setTargetAtTime(-90, ScarletsMedia.audioContext.currentTime, 0);
biquadFilter.connect(ScarletsMedia.audioContext.destination);
// ANALYZER
var analyser = ScarletsMedia.audioContext.createAnalyser();
analyser.minDecibels = -90;
analyser.maxDecibels = -10;
analyser.fftSize = 256;
audioStreamer.outputNode.connect(analyser);
var bufferLength = analyser.frequencyBinCount;
var dataArray = new Uint8Array(bufferLength);
function draw_oscilloscope() {
var canvas = document.getElementById("oscilloscope");
canvas.width = 600;
canvas.height = 200;
var canvasCtx = canvas.getContext("2d");
/*setTimeout( function () { requestAnimationFrame(scope.draw_oscilloscope); } , 500);*/
analyser.getByteTimeDomainData(dataArray);
canvasCtx.fillStyle = 'rgb(200, 200, 200)';
canvasCtx.fillRect(0, 0, canvas.width, canvas.height);
canvasCtx.lineWidth = 2;
canvasCtx.strokeStyle = 'rgb(0, 0, 0)';
canvasCtx.beginPath();
var sliceWidth = canvas.width * 1.0 / bufferLength;
var x = 0;
for(var i = 0; i < bufferLength; i++) {
var v = dataArray[i] / 128.0;
var y = v * canvas.height / 2;
if (i === 0) {
canvasCtx.moveTo(x, y);
} else {
canvasCtx.lineTo(x, y);
}
x += sliceWidth;
}
canvasCtx.lineTo(canvas.width, canvas.height/2);
canvasCtx.stroke();
}
Now it's pretty clear how to do this. I edited your library and stripped out all of the media player, converter, and plugin parts.
Thank you for the help!
You're welcome!
And thanks for sharing your oscilloscope; maybe I can use it for fixing the sound gap between the stream buffers. It was happening on my PC when using 100ms latency while two separate browser tabs had user focus. I think it could be fixed by adding a delay or an additional buffer, but I'm currently working on another library, so I will do it later.
Actually, the code you provided can be improved by declaring static variables like `canvas` and `canvasCtx` outside of `draw_oscilloscope()` and only doing the calculations inside that function. The `requestAnimationFrame` call should be placed on the first line of that function, without using a timer, because `requestAnimationFrame` is designed to work like that.
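A minimal sketch of that suggestion, reusing the `analyser`, `dataArray`, and `bufferLength` from the code above:
// Static lookups hoisted out of the draw loop
var canvas = document.getElementById("oscilloscope");
canvas.width = 600;
canvas.height = 200;
var canvasCtx = canvas.getContext("2d");
function draw_oscilloscope() {
    // Schedule the next frame first; requestAnimationFrame already
    // syncs to the display refresh rate, so no setTimeout is needed
    requestAnimationFrame(draw_oscilloscope);
    analyser.getByteTimeDomainData(dataArray);
    canvasCtx.fillStyle = 'rgb(200, 200, 200)';
    canvasCtx.fillRect(0, 0, canvas.width, canvas.height);
    canvasCtx.lineWidth = 2;
    canvasCtx.strokeStyle = 'rgb(0, 0, 0)';
    canvasCtx.beginPath();
    var sliceWidth = canvas.width / bufferLength;
    var x = 0;
    for (var i = 0; i < bufferLength; i++) {
        var v = dataArray[i] / 128.0;
        var y = v * canvas.height / 2;
        if (i === 0) canvasCtx.moveTo(x, y);
        else canvasCtx.lineTo(x, y);
        x += sliceWidth;
    }
    canvasCtx.lineTo(canvas.width, canvas.height / 2);
    canvasCtx.stroke();
}
draw_oscilloscope(); // start the loop once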
If you still find any bugs when adding filters to this library, please reopen this issue.
@StefansArya thank you for the help. I saw you updated the library. I used the code I sent before and I get the error `Uncaught TypeError: audioStreamer.outputNode.connect is not a function` when trying to run:
// FILTERS
var biquadFilter = ScarletsMedia.audioContext.createBiquadFilter();
audioStreamer.outputNode.connect(biquadFilter);
biquadFilter.type = "lowshelf";
biquadFilter.frequency.setTargetAtTime(100, ScarletsMedia.audioContext.currentTime, 0);
biquadFilter.gain.setTargetAtTime(-90, ScarletsMedia.audioContext.currentTime, 0);
biquadFilter.connect(ScarletsMedia.audioContext.destination);
Ah yeah, I did it :)
// FILTERS
var biquadFilter = ScarletsMedia.audioContext.createBiquadFilter();
audioStreamer.outputNode = ScarletsMedia.audioContext.createGain();
audioStreamer.outputNode.connect(biquadFilter);
...
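(That matches the note in the earlier snippet: `outputNode` must be initialized with an AudioNode before anything can be connected to it.)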
Hello, I see you had time to work on the library. Were you able to remove the gap between buffers? I switched to WebAudio for both streamers and presenters and switched to a 100ms delay, and I still hear the gap. Where is it coming from?
Well, I'm not exactly sure where it's coming from.. but I tried figuring it out a long time ago.
I think the gap is coming from the buffer that contains the media header before the streaming process is started. When the presenter starts recording, the library will try to obtain the first buffer, which contains the header. But somehow there is some sound/noise in that buffer (~60ms), and it gets combined with every incoming buffer on the streamer, so the streamer plays noise + real sound.
I have figured out a solution for that part, and the noise can be reduced.
And now about the remaining gap.. it's possible that there is some external latency when:
- obtaining the recorded data (`requestData`): https://github.com/ScarletsFiction/SFMediaStream/blob/cfe7d59edff71e1fb8e048f47e3629538f6b5161/src/MediaPresenter.js#L128
- decoding the audio data (`decodeAudioData`): https://github.com/ScarletsFiction/SFMediaStream/blob/cfe7d59edff71e1fb8e048f47e3629538f6b5161/src/AudioBufferStreamer.js#L155
- playing the buffer (`realtimeBufferPlay`): https://github.com/ScarletsFiction/SFMediaStream/blob/cfe7d59edff71e1fb8e048f47e3629538f6b5161/src/AudioBufferStreamer.js#L139
When playing asynchronously, it's also possible that the last buffer has finished playing and is waiting for the next incoming buffer to be ready.
A possible solution for the gap is adding an extra delay in the library before playing the received buffer. So if you set the presenter to stream every 100ms, the streamer would start playing 100ms after the buffer is received, giving it time to prepare the next buffer and play it synchronously after the last buffer finishes (via the `onended` callback).
So, to answer your question: I have figured out the cause, but I can't find a working solution yet :sweat_smile:
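A rough sketch of that idea with plain Web Audio (the `playChunk` helper is hypothetical; the library would have to do something like this internally):
var ctx = ScarletsMedia.audioContext;
var scheduledTime = 0;
var EXTRA_DELAY = 0.1; // 100ms head start, matching the presenter interval
// Schedule each decoded chunk to start exactly where the previous one ends
function playChunk(decodedBuffer) {
    var source = ctx.createBufferSource();
    source.buffer = decodedBuffer;
    source.connect(ctx.destination);
    // Never schedule in the past; otherwise butt-join with the previous chunk
    var startAt = Math.max(ctx.currentTime + EXTRA_DELAY, scheduledTime);
    source.start(startAt);
    scheduledTime = startAt + decodedBuffer.duration;
}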
If the above assumption is not correct, then it's possible that when obtaining audio data with `requestData` every 100ms, the presenter sends buffers that are playable for slightly different durations.
For example, I recorded a constant sound with my microphone. I expected all the buffer sizes to be similar, but some buffers are smaller than the others: 787-byte buffers are received more often than 394-byte ones. So the real culprit may be the 394-byte buffers.
Hello. That's why I wanted to add the BiquadFilter; I thought limiting the frequencies under 125Hz might solve the issue. I'm thinking of another way to solve it: how about sending the empty buffer at a volume of -90 dB or something similarly quiet?
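For reference, -90 dB corresponds to a linear gain of 10^(-90/20) ≈ 0.0000316, so one streamer-side way to approximate that silence is a plain GainNode (a sketch reusing the `outputNode` setup from above, not a library feature):
// Fade the stream down to roughly -90 dB instead of sending silence
var quietGain = ScarletsMedia.audioContext.createGain();
quietGain.gain.value = Math.pow(10, -90 / 20); // ≈ 0.0000316, effectively inaudible
audioStreamer.outputNode.connect(quietGain);
quietGain.connect(ScarletsMedia.audioContext.destination);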
I was working on a stop function for my app, but it seems like the stop() function throws an error for the streamer: https://www.screencast.com/t/OQAiRVNc
I did a test and got this result:
Steps:
Yeah, I have tried to send an empty buffer from the presenter by adding a filter on the first buffer before sending it to the streamer. But if we decode the audio data from the recorder we get WAV data, and we would need to re-encode it manually into the streamed format.
If we ask the user/client to mute their microphone before recording, it may fix the problem :laughing: . The nearest practical approach is decoding the buffer on the streamer and then cleaning the noise there.
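One way to do that cleanup on the streamer, sketched with plain Web Audio (the `trimStart` helper is hypothetical, not part of SFMediaStream):
var ctx = ScarletsMedia.audioContext;
// Copy `buffer` minus its first `seconds` of audio into a new AudioBuffer
function trimStart(buffer, seconds) {
    var offset = Math.floor(seconds * buffer.sampleRate);
    var length = Math.max(1, buffer.length - offset);
    var trimmed = ctx.createBuffer(buffer.numberOfChannels, length, buffer.sampleRate);
    for (var ch = 0; ch < buffer.numberOfChannels; ch++) {
        trimmed.getChannelData(ch).set(buffer.getChannelData(ch).subarray(offset));
    }
    return trimmed;
}
// e.g. drop ~60ms of header noise from the first decoded chunk:
// firstBuffer = trimStart(firstBuffer, 0.06);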
By the way, I got the synchronous playback working with no noise and no gap. But the other problem is that the streamer must be active before the presenter, which would be a problem when a new streamer wants to join the stream..
I saw the updates you made. When I start streaming I get these errors in the console: https://www.screencast.com/t/kOrncDuKCZ (ignore the first line). And one more thing: the mute function I did before (stop and then start presenting) is no longer working, but I will look into another solution for that (maybe sending an empty stream). If you want, I can add you to my repository to see the code I am building.
Can I know which browser version you're using?
Currently I don't find any problem with Firefox 62.0, Chrome 71.0, Chrome for Android 62.0, or the native Android 7.1 browser in my testing.. except that `realtimeBufferPlay` still has some gaps..
There is another method for streaming: `audioStreamer.receiveBuffer()` instead of `audioStreamer.realtimeBufferPlay()`. It has some latency, but it does fix the gap and the noise.
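For example, the streamer-side handler could swap one call for the other (the `socket` transport and the `'stream'` event name here are assumptions about your setup, not the library's documented API):
socket.on('stream', function (packet) {
    // Queued playback: adds latency but avoids the gap and the noise
    audioStreamer.receiveBuffer(packet);
    // Lower latency, but currently has gaps:
    // audioStreamer.realtimeBufferPlay(packet);
});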
> And one more thing: the mute function I did before (stop and then start presenting) is no longer working, but I will look into another solution for that (maybe sending an empty stream).
There are some breaking changes after the last update, so the implementation may be different now.
> If you want, I can add you to my repository to see the code I am building.
Hmm, maybe not right now. I still have other stuff to finish before it gets too late..
@StefansArya, three things: 1) thank you for your time responding, changing the library, and debugging it with me; I know you are busy. 2) When I told you about the repository, it was for debugging purposes only; I didn't want to make it sound like I was forcing you. 3) I reviewed the changes to the library. I added the unminified library in lib.js and ran my app. When I want to stop streaming OR presenting, I get this error: https://www.screencast.com/t/PW0zImyBZs
Again, thank you for your time!
Never mind about 3, I resolved that. I still get some console errors, but I will look into them.
Hello, I am trying to add audio filters (BiquadFilter, https://developer.mozilla.org/en-US/docs/Web/API/BiquadFilterNode) and an oscilloscope when streaming audio in real time. I tried different ways, like adding the code in realtimeBufferPlay as a streamer: https://www.screencast.com/t/higK0Dal and https://www.screencast.com/t/NQ9v091yZH. I think the problem is that the audio context is empty.
Can this be done? Or should I apply it to the presenter?
Thank you!