Open joy-mollick opened 2 years ago
Hi,
So I ran into this issue as well. Basically, the audio stream does not have an audio header, so whatever you're using to play the audio has no way to know the bitrate, sample rate, etc. needed to play the sound correctly.
The data is essentially a .wav file, but without the header, so it is technically not a .wav file.
What we do is store or stream the audio as base64. When we want to play it, we convert each base64 chunk into a binary array using Buffer.js, concat the buffers, convert the combined buffer back to base64, prepend a base64 header string to the new combined base64 audio string, then save it as a binary .wav file and play it!
I used the following snippet to generate the base64 header strings. https://codepen.io/mxfh/pen/mWLMrJ
We currently have the package live streaming audio over web-sockets and playing on another device walkie talkie fashion. Just takes a bit of work.
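The pipeline described above (decode each chunk, concatenate, prepend a 44-byte PCM WAV header) can be sketched in plain Node-style TypeScript. This is an illustrative sketch, not code from the library; `chunksToWav` and its parameters are invented names:

```typescript
import { Buffer } from "buffer";

// Illustrative sketch: turn base64 PCM chunks into one playable WAV buffer
// by concatenating the decoded bytes and prepending a canonical 44-byte
// RIFF/WAVE header that describes exactly that many bytes of data.
function chunksToWav(
  base64Chunks: string[],
  sampleRate: number,
  channels: number,
  bitsPerSample: number
): Buffer {
  const pcm = Buffer.concat(base64Chunks.map((c) => Buffer.from(c, "base64")));
  const header = Buffer.alloc(44);
  header.write("RIFF", 0);
  header.writeUInt32LE(36 + pcm.length, 4); // RIFF chunk size
  header.write("WAVEfmt ", 8);
  header.writeUInt32LE(16, 16); // fmt subchunk size
  header.writeUInt16LE(1, 20); // audio format: 1 = PCM
  header.writeUInt16LE(channels, 22);
  header.writeUInt32LE(sampleRate, 24);
  header.writeUInt32LE((sampleRate * channels * bitsPerSample) / 8, 28); // byte rate
  header.writeUInt16LE((channels * bitsPerSample) / 8, 32); // block align
  header.writeUInt16LE(bitsPerSample, 34);
  header.write("data", 36);
  header.writeUInt32LE(pcm.length, 40); // data subchunk size
  return Buffer.concat([header, pcm]);
}
```

Writing the returned buffer to a file (or base64-encoding it into a `data:audio/wav;base64,` URI) gives something any WAV-capable player should open.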
Hi @dorthwein ,
Any chance you could help me achieve live streaming of microphone audio (and receiving and playing it)?
Before streaming it over socket.io, I decided to first play the audio I'm getting from LiveAudioStream.on("data", (data) => {}). I'm stuck at that step; I found your comment, but I still couldn't get it to play. I prepared the function below based on your answer:
const genWavHeader = () => {
  // little-endian integer -> string of `b` raw byte code points
  const B = (f: number, b = 4) =>
    String.fromCodePoint(
      ...Array(b)
        .fill(0)
        .map((_, i) => (f >> (i * 8)) & 255)
    );
  const D = (f: number) => B(f, 2);
  // tweak this:
  const sampleRate = options.sampleRate; // keep this above 4000 Hz
  const ch = options.channels; // channels
  const bits = options.bitsPerSample; // multiples of 8
  const samples = sampleRate; // declares one second of audio
  const s1 = 16; // fmt subchunk size
  const s2 = (samples * ch * bits) / 8; // data subchunk size in bytes
  const header =
    "RIFF" +
    B(4 + (8 + s1) + (8 + s2)) + // chunk size
    "WAVE" +
    "fmt " +
    B(s1) + // subchunk1 size
    D(1) + // audio format: 1 = PCM
    D(ch) + // channels
    B(sampleRate) + // sample rate
    B((sampleRate * ch * bits) / 8) + // byte rate
    D((ch * bits) / 8) + // block align
    D(bits) + // bits per sample
    "data" +
    B(s2); // subchunk2 size (must be 4 bytes, not 2)
  // "latin1" keeps each code point as one byte; the default "utf8" would
  // expand bytes above 0x7F into two bytes and corrupt the header.
  const headerBase64 = Buffer.from(header, "latin1").toString("base64");
  return headerBase64;
};
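To sanity-check a generated header, a quick sketch (`parseWavHeader` is an illustrative name, not from any library) that decodes the base64 and reads the fields back with Buffer:

```typescript
import { Buffer } from "buffer";

// Illustrative sanity check: decode a base64 WAV header and read its
// fields back to confirm the byte layout matches what a player expects.
function parseWavHeader(headerBase64: string) {
  const h = Buffer.from(headerBase64, "base64");
  return {
    riff: h.toString("latin1", 0, 4), // should be "RIFF"
    wave: h.toString("latin1", 8, 12), // should be "WAVE"
    format: h.readUInt16LE(20), // 1 = PCM
    channels: h.readUInt16LE(22),
    sampleRate: h.readUInt32LE(24),
    bitsPerSample: h.readUInt16LE(34),
    dataSize: h.readUInt32LE(40),
  };
}
```

If the decoded `sampleRate` or `channels` don't match the recording options, the player will misinterpret the PCM data even when it loads successfully.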
And this is how I'm using it:
import { Audio } from "expo-av";

LiveAudioStream.on("data", (data) => {
  const headerBase64 = genWavHeader();
  const audio = "data:audio/wav;base64," + headerBase64 + data;
  const soundObject = new Audio.Sound();
  try {
    soundObject
      .loadAsync({ uri: audio })
      .then(() => soundObject.playAsync());
  } catch (error) {
    console.error("Failed to load/play audio", error);
  }
});
But this error arises:
Error: com.google.android.exoplayer2.ParserException: Error while parsing Base64 encoded string: UklGRiTCgAIAV0FWRWZtdCAQAAAAAQABAETCrAAAQMOECgAQAMKAAGRhdGEAwoACAA==n1ZJXaxkunf/f/9//3//f/9/xXu6ekFpS1gJU8 ...
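One likely cause of this error: the header's base64 ends with `==` padding, so gluing `headerBase64 + data` together puts padding characters in the middle of the string, which is not valid base64. A sketch of a safe join that decodes to bytes first and re-encodes once (`joinBase64` is a hypothetical helper name):

```typescript
import { Buffer } from "buffer";

// Hypothetical helper: join a base64 header and a base64 PCM chunk by
// concatenating the decoded bytes, then re-encoding once. This avoids
// the stray "==" padding that appears mid-string when two base64
// strings are concatenated directly.
function joinBase64(headerBase64: string, dataBase64: string): string {
  return Buffer.concat([
    Buffer.from(headerBase64, "base64"),
    Buffer.from(dataBase64, "base64"),
  ]).toString("base64");
}
```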
hey @theInfiTualEr, were you able to get this library to actually play the audio? I am trying to use 1 library for both visualizing and playing audio and this lib looks like it can be used for the former but so far I'm not able to actually play the audio chunks..
Any update?
How can we play this?
Hey @TowhidKashem
My goal was to record, transfer, and play audio data between Android and Web devices. I ended up doing it with some tricks, and by doing the entire thing, even the socket.io part, natively in Java.
So if you're only targeting Android and want to record and simultaneously play the audio, my recommendation is to do it natively: use AudioRecord to record, NoiseSuppressor and AcousticEchoCanceler to clean up the signal, and AudioTrack to play it.
I have an implementation of them, but it's for my specific need, and I'm not an expert, so it has redundant parts as well.
I remember there was a React Native library that handles recording audio, stopping the recording, and then playing and visualizing it (so not my goal). You can study that as well.
Hey @theInfiTualEr, my goal is to record and send audio data using this lib in Expo too. I wonder if you got that working; I don't see any data being captured, and I wonder what's missing in my code:
async function startRecording() {
  try {
    if (permissionResponse.status !== 'granted') {
      console.log('Requesting permission..');
      await requestPermission();
    }
    await Audio.setAudioModeAsync({
      allowsRecordingIOS: true,
      playsInSilentModeIOS: true,
    });

    // Start audio recording
    const recording = new Audio.Recording();
    await recording.prepareToRecordAsync(Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY);
    await recording.startAsync();
    console.log("Connection established, starting recording..");
    // setRecording(recording); // Uncomment if you need to keep track of the recording

    // Listen for audio data and send it
    LiveAudioStream.on('data', (data) => {
      console.log('Data received:', data);
      connection.send(data);
    });
  } catch (err) {
    console.error('Failed to start recording', err);
  }
}
I think the problem is that you're recording the audio using Expo Audio but capturing it using LiveAudioStream.
Recording and getting data using this library was easy and is documented. Playing it live was the real challenge for me.
Perhaps dive into this library's native code to see how it works, check out the native modules documentation on the React Native website, and do more research.
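For the "playing it live" part, one JS-side pattern that can work (a sketch under assumptions, not the library's API; `createWavBatcher` is an invented helper) is to batch incoming base64 PCM chunks and wrap each batch in its own WAV header before handing it to a player such as expo-av's `Audio.Sound.loadAsync({ uri })`:

```typescript
import { Buffer } from "buffer";

// Hypothetical batching helper: collects base64 PCM chunks and, every
// `batchSize` chunks, emits a complete WAV as a data: URI that a
// WAV-capable player can open.
function createWavBatcher(
  sampleRate: number,
  channels: number,
  bitsPerSample: number,
  batchSize: number,
  onWav: (uri: string) => void
) {
  let chunks: Buffer[] = [];
  const flush = () => {
    if (chunks.length === 0) return;
    const pcm = Buffer.concat(chunks);
    chunks = [];
    // 44-byte canonical PCM WAV header describing exactly pcm.length bytes
    const header = Buffer.alloc(44);
    header.write("RIFF", 0);
    header.writeUInt32LE(36 + pcm.length, 4);
    header.write("WAVEfmt ", 8);
    header.writeUInt32LE(16, 16); // fmt subchunk size
    header.writeUInt16LE(1, 20); // PCM
    header.writeUInt16LE(channels, 22);
    header.writeUInt32LE(sampleRate, 24);
    header.writeUInt32LE((sampleRate * channels * bitsPerSample) / 8, 28); // byte rate
    header.writeUInt16LE((channels * bitsPerSample) / 8, 32); // block align
    header.writeUInt16LE(bitsPerSample, 34);
    header.write("data", 36);
    header.writeUInt32LE(pcm.length, 40);
    onWav("data:audio/wav;base64," + Buffer.concat([header, pcm]).toString("base64"));
  };
  return {
    push(base64Chunk: string) {
      chunks.push(Buffer.from(base64Chunk, "base64"));
      if (chunks.length >= batchSize) flush();
    },
    flush,
  };
}
```

Larger batches mean fewer player loads but more latency; the small gaps between batches are the usual trade-off of this approach versus a truly native AudioTrack loop.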
I start the streaming and get base64 chunks from on('data'), but I want to play those chunks back again. I mean: I speak over the audio stream and hear the same thing playing on my phone. How can we make this work? It's incomplete if we can't turn the data back into an audio file and play it. Please help me work this issue out.