Closed deokju closed 4 years ago
To counteract any edge effects that might happen during decoding, some additional samples are decoded in each batch. After decoding, these are cut out again (see:

```javascript
for (var i = 0; i < decodedData.numberOfChannels; i++)
    audioBuffer.getChannelData(i).set(decodedData.getChannelData(i).slice(pickOffset, -pickOffset));
```

). That's why I copy more data for the decoder than I erase from the data buffer.
Sorry... what do you mean by 'edge effects'?
Since decodeAudioData is used to decode the int samples into floats, there is no guarantee that we get chunks that are suitable for continuous back-to-back playback. E.g. when you feed in audio with a 44100 Hz sample rate, it might get converted to 48000 Hz depending on the OS/device. This leads to cases where the samples don't match up, giving you a buffer that is either longer or shorter than the initial raw buffer. These effects only affect the very first and very last samples of the buffer, hence the term 'edge effects'. When I leave a bit of the previous data in the buffer, the decoder has that data available and can do a better sample rate conversion.
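A minimal sketch of that overlap-and-trim idea, using a plain `Float32Array` to stand in for one channel of decoded audio (the helper name `trimDecodedBatch` is made up for illustration; only the `slice(pickOffset, -pickOffset)` trick comes from the snippet quoted earlier):

```javascript
// Decode each batch with `pickOffset` extra samples of context on each
// side, then cut those padding samples away before playback, because the
// first and last few samples may be distorted by resampling edge effects.
function trimDecodedBatch(decodedChannel, pickOffset) {
    // Typed-array slice accepts a negative end index, counting from the end.
    return decodedChannel.slice(pickOffset, -pickOffset);
}

// Example: a "decoded" batch of 10 samples with 2 padding samples per side.
const decoded = Float32Array.from([9, 9, 1, 2, 3, 4, 5, 6, 8, 8]);
const clean = trimDecodedBatch(decoded, 2);
// clean is [1, 2, 3, 4, 5, 6]
```

The padding samples exist only to give the resampler context at the batch boundaries; they never reach the output.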
Your answer really helps me. I am not good at English, so please bear with me if my sentences are strange. We desperately need your help. Thank you in advance. Thank you so much.
Hi JoJoBond. I am always grateful. I don't understand your source in 3las.formatreader.js: ` AudioFormatReader_WAV.prototype.ExtractIntSamples = function () { // Extract sample data from buffer
`
Why are the cut lengths different? I mean... `0, this.TotalBatchByteSize` and... `this.DataBuffer.buffer.slice(this.TotalBatchByteSize)`.
I'll wait for your answer.
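For what it's worth, the two `slice` calls being asked about split one buffer into a batch and a remainder. A sketch with a plain `ArrayBuffer` (this is not the project's actual `ExtractIntSamples` code, just an illustration of the slicing semantics; the names mirror the question):

```javascript
// Splitting a raw data buffer into "batch to decode" and "remainder to keep".
const totalBatchByteSize = 4;
const dataBuffer = new Uint8Array([10, 20, 30, 40, 50, 60]).buffer;

// Bytes handed to the decoder: indices [0, totalBatchByteSize)
const batch = dataBuffer.slice(0, totalBatchByteSize);

// Bytes kept in the data buffer for the next round: [totalBatchByteSize, end)
const rest = dataBuffer.slice(totalBatchByteSize);

// batch holds 4 bytes (10, 20, 30, 40); rest holds 2 bytes (50, 60)
```

If the copied batch is larger than the amount erased, consecutive batches overlap, which matches the padding-and-trim explanation given earlier in the thread.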