You should render manually when using `RenderingAudioContext`:
```js
const fs = require("fs");
const path = require("path");
const audioEngine = require("web-audio-engine");
const RenderingAudioContext = audioEngine.RenderingAudioContext;

const audioData = fs.readFileSync(path.resolve("./Kalimba.wav"));

const rAudioContext = new RenderingAudioContext();
const source = rAudioContext.createBufferSource();
const analyser = rAudioContext.createAnalyser();

rAudioContext.decodeAudioData(audioData).then((audioBuffer) => {
  source.buffer = audioBuffer;
  source.connect(analyser);

  // prepare to render audio
  source.start(0);
  analyser.connect(rAudioContext.destination);

  // render 0 to 1 second
  rAudioContext.processTo(1);

  // get data
  const array = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteFrequencyData(array);
  console.log(array);
  console.log(`${ array.length } items`);
});
```
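If you want frequency data at several points in time with this approach, one option is to advance the render in small steps and read the analyser after each step. Here is a minimal sketch, assuming `processTo` can be called repeatedly with increasing times; it would go inside the `decodeAudioData` callback above in place of the single `processTo(1)` call, and the step count and `snapshots` array are illustrative choices, not part of the API:

```js
const duration = 1;      // total time to analyse, in seconds
const numSteps = 20;     // arbitrary: one snapshot every 50 ms
const snapshots = [];

for (let i = 1; i <= numSteps; i++) {
  const t = (i / numSteps) * duration;

  // advance the offline render up to time t
  rAudioContext.processTo(t);

  // capture the analyser's current frequency data
  const bins = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteFrequencyData(bins);
  snapshots.push({ time: t, bins });
}

console.log(`${ snapshots.length } snapshots captured`);
```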
Or, for more frequent analysis, use it with a `ScriptProcessorNode`:
```js
const fs = require("fs");
const path = require("path");
const audioEngine = require("web-audio-engine");
const RenderingAudioContext = audioEngine.RenderingAudioContext;

const audioData = fs.readFileSync(path.resolve("./Kalimba.wav"));

const rAudioContext = new RenderingAudioContext();
const source = rAudioContext.createBufferSource();
const analyser = rAudioContext.createAnalyser();
const scriptProcessor = rAudioContext.createScriptProcessor(512, 1, 1);

rAudioContext.decodeAudioData(audioData).then((audioBuffer) => {
  source.buffer = audioBuffer;
  source.connect(analyser);

  // prepare to render audio
  source.start(0);
  analyser.connect(scriptProcessor);
  scriptProcessor.connect(rAudioContext.destination);

  scriptProcessor.onaudioprocess = (e) => {
    // pass the input through to the output unchanged
    e.outputBuffer.getChannelData(0).set(e.inputBuffer.getChannelData(0));

    // get data
    const array = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(array);
    console.log(array);
    console.log(`${ array.length } items`);
  };

  // render 0 to 1 second
  rAudioContext.processTo(1);
});
```
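With a 512-sample buffer, `onaudioprocess` fires once per processing block, so (assuming the default 44.1 kHz sample rate) the analyser is read roughly every 11.6 ms of rendered audio rather than once per `processTo()` call.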
Ah, okay. I used the first method - worked great! 🎉 Thank you.
Hi there,
Sorry if this is the wrong place to ask for support. If there's a better place, please let me know!
I've been trying to use WAE to get the frequency data from an audio file. I think I understand how to load the audio data into a buffer, but I'm not sure how to use the API to step through the audio and capture data with something like `getByteFrequencyData`.

For example, with the native browser Web Audio API I can just play the audio back and call `getByteFrequencyData` during each 'timeupdate' event. How could I do something similar here? (I'm trying to get this working on a Node server.) Thanks!
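For reference, the browser-side pattern I mean looks roughly like this (just a sketch; the element id and wiring are made up, not from my actual code):

```js
// hypothetical <audio> element on the page
const audioEl = document.getElementById("player");

const ctx = new AudioContext();
const sourceNode = ctx.createMediaElementSource(audioEl);
const analyserNode = ctx.createAnalyser();

sourceNode.connect(analyserNode);
analyserNode.connect(ctx.destination);

audioEl.addEventListener("timeupdate", () => {
  // capture a frequency snapshot at the current playback position
  const bins = new Uint8Array(analyserNode.frequencyBinCount);
  analyserNode.getByteFrequencyData(bins);
  console.log(audioEl.currentTime, bins);
});
```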
Here's what I've got so far.