jsierles / react-native-audio

Audio recorder library for React Native
MIT License
1.16k stars 539 forks

Expose binary recording data to "progress subscribers" #138

Closed justinmoon closed 7 years ago

justinmoon commented 7 years ago

My objective is to broadcast binary audio data over a websocket as it's being recorded.

It seems the data sent to "progress subscribers" contains only currentTime. Would it be possible to also include binary audio data, similar to the Web Audio API's ScriptProcessorNode.onaudioprocess callback?

Maybe I'm going about this in the wrong way? Any tips or feedback would be greatly appreciated!
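To make the ask concrete, here is a minimal sketch of the desired subscriber shape, modeled on ScriptProcessorNode.onaudioprocess. FakeAudioRecorder and its method names are invented stand-ins, not the real react-native-audio module (which today only reports currentTime); the fake websocket just records what would be sent.

```javascript
// Invented stand-in for a recorder whose progress events carry audio data.
class FakeAudioRecorder {
  constructor() { this.listeners = []; }
  onProgress(fn) { this.listeners.push(fn); }
  // The native side would call this with each captured buffer.
  _emit(chunk, currentTime) {
    this.listeners.forEach((fn) => fn({ currentTime, audioData: chunk }));
  }
}

// Forward each chunk over a (fake) websocket as it arrives.
const sent = [];
const fakeSocket = { send: (data) => sent.push(data) };
const recorder = new FakeAudioRecorder();
recorder.onProgress(({ audioData }) => fakeSocket.send(audioData));
recorder._emit(new Uint8Array([1, 2, 3]), 0.02);
```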

justinmoon commented 7 years ago

For now I'm working to port cordova-plugin-audioinput to React Native.

That plugin allows you to subscribe to "audioinput" events, which I can stream back to the server as the user speaks.

If I ever wrap my head around the native code I might try to send a pull request. No promises!

rakannimer commented 7 years ago

Hey superquest,

A pull request that does that would be great but keep in mind that sending large amounts of serialized data from the native to the JS thread might have a non-negligible performance cost.

If you're working on the native side, then it might be a better idea to do the streaming logic on the native thread. You would only inform the JS thread about streaming progress.

Exposing a new method on AudioRecorderManager or adding a property to startRecording would be the most straightforward way to do that on Android.
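One hypothetical shape for such a startRecording property, on the JS side. Every name here (StreamToListeners in particular) is invented for illustration and is not part of react-native-audio's real API:

```javascript
// Invented option shape: callers opt into native-side streaming by flag.
const DEFAULT_RECORDING_OPTIONS = {
  SampleRate: 44100,
  Channels: 1,
  AudioEncoding: 'aac',
  StreamToListeners: false, // invented flag: have native emit raw buffers
};

// Merge caller overrides over the defaults before handing them to native.
function buildRecordingOptions(overrides = {}) {
  return { ...DEFAULT_RECORDING_OPTIONS, ...overrides };
}

const opts = buildRecordingOptions({ StreamToListeners: true });
```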

Cheers

ohtangza commented 7 years ago

I also want to get the byte stream on the JS thread despite the non-negligible performance cost. I believe every piece of functionality should be supported purely in JS, because that is where RN is eventually headed, and audio processing is now quite doable on mobile devices (even on the JS thread). I also believe an audio byte stream is worth supporting because it has become a basic building block for many apps. As of 2016, lots of speech-related APIs are publicly accessible, like the Google Speech API and IBM Watson speech recognition, which support real-time streaming. As @RakanNimer mentioned, it would be better if developers could selectively enable this feature as needed.

By the way, progress updates are not currently available on Android, mainly because the MediaRecorder class, which react-native-audio relies on, does not support them.

To make this possible, the Android implementation would need to use the AudioRecord class instead of MediaRecorder, which is inconsistent with the existing code. It would not be easy to support this feature without changing the core of this library. How about the iOS side? Any ideas? (I can only read the RN and Android code.)

ohtangza commented 7 years ago

Also, the RN bridge may not support a ByteArray delivered from native code. How would it be passed from native code to JS? Perhaps we need to encode it as Base64?
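For reference, base64 is just a textual encoding of bytes, so a byte buffer can cross any string-only channel and be recovered on the other side. A minimal hand-rolled encoder/decoder (environment-agnostic, so it does not assume Node's Buffer or a browser's atob):

```javascript
const B64 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

// Encode raw bytes as a base64 string (3 bytes -> 4 characters).
function bytesToBase64(bytes) {
  let out = '';
  for (let i = 0; i < bytes.length; i += 3) {
    const b0 = bytes[i], b1 = bytes[i + 1], b2 = bytes[i + 2];
    out += B64[b0 >> 2];
    out += B64[((b0 & 3) << 4) | (b1 === undefined ? 0 : b1 >> 4)];
    out += b1 === undefined ? '=' : B64[((b1 & 15) << 2) | (b2 === undefined ? 0 : b2 >> 6)];
    out += b2 === undefined ? '=' : B64[b2 & 63];
  }
  return out;
}

// Decode a base64 string back to raw bytes.
function base64ToBytes(str) {
  const clean = str.replace(/=+$/, '');
  const bytes = [];
  let buffer = 0, bits = 0;
  for (const ch of clean) {
    buffer = (buffer << 6) | B64.indexOf(ch);
    bits += 6;
    if (bits >= 8) {
      bits -= 8;
      bytes.push((buffer >> bits) & 0xff);
    }
  }
  return Uint8Array.from(bytes);
}
```

In practice you would use a native/built-in codec rather than this JS loop, since per-byte JS work on large audio buffers is exactly the bridge-cost concern raised above.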

jsierles commented 7 years ago

If there's interest in this, happy to see a PR. Closing for now!

rohitgoyal commented 7 years ago

Is there any way I can get the recorded sound in chunks to broadcast it to other clients via a socket? If that is not supported, can you help me convert the .aac file to a Blob or ArrayBuffer?

MrLoh commented 7 years ago

@rohitgoyal Basically you can't with any library; we are having the same problem. The issue is that no library supports streaming chunks of data to the JS side, and there is no library to decode the .aac file, since JS in React Native doesn't have the audio context (unlike Node and modern browsers) that all such libraries depend on. We would need to decode to the wave signal, so maybe you will have more luck finding an easier solution, but audio is really crippled at the moment in React Native. We are currently looking into porting the Cordova plugin to React Native, but we are no experts on this either. If you come across any other solution, let us know.

rohitgoyal commented 7 years ago

@MrLoh In your view, what is currently the best way to record a sound and broadcast it to other users using socket.io on a Node.js server? Right now I am thinking of recording the audio in .aac format and encoding it to base64 to send over the socket. But I am still not able to decode the base64 back to .aac to play it on other clients.
I am using react-native-sound to play the sound.
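The transport side of that plan can be sketched independently of the audio format. This assumes a socket.io-like object with an emit(event, payload) method; the fake socket below just records what would have been sent:

```javascript
// Split a byte buffer into fixed-size chunks for transport.
function chunkBytes(bytes, chunkSize) {
  const chunks = [];
  for (let i = 0; i < bytes.length; i += chunkSize) {
    chunks.push(bytes.slice(i, i + chunkSize));
  }
  return chunks;
}

// Stand-in for a socket.io client: it only collects emitted payloads.
const received = [];
const fakeSocket = { emit: (event, payload) => received.push(payload) };

const recording = new Uint8Array(10).fill(7); // stand-in for encoded audio
for (const chunk of chunkBytes(recording, 4)) {
  fakeSocket.emit('audio-chunk', chunk);
}
```

The receiving side would concatenate chunks in order before writing the file back out; with lossy containers like .aac, the file is only playable once all chunks (or at least whole frames) have arrived.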

MrLoh commented 7 years ago

@rohitgoyal As long as your en/de-coding is not audio specific and doesn't need an audio context, you should be fine, I guess. You will still have to write and read files all the time, though, and I'm not sure how well chunking the files would work. It's sad that React Native's JS engine is so lacking.

Someone really needs to port the Cordova plugin, though, to do the relevant work on the native side, as the JS engine in React Native is quite crippled when it comes to audio. And the current approach of reading and writing files is terribly inefficient.

rohitgoyal commented 7 years ago

@MrLoh It is not audio specific. I used react-native-fs to encode the .aac file to a base64 string. However, I haven't found support in react-native-fs for decoding that back to .aac. Do you know of any way to play sound from base64, or to decode base64 back to .aac, in React Native?

MrLoh commented 7 years ago

@rohitgoyal no, sorry, I haven't worked with base64 in RN before.

ohtangza commented 7 years ago

@rohitgoyal I think base64 is just a plain string with a certain encoding rule; you can refer to the resources on SO, I guess.

For your information, in my app's implementation we worked on the native code to stream the binary data to JS using a WritableArray. One issue here is that, in the Android implementation, transferring via base64 encoding/decoding seems to be faster than the provided WritableArray (https://github.com/facebook/react-native/issues/10504#issuecomment-317035887).

rohitgoyal commented 7 years ago

@ohtangza That seems weird. Did you write the array in chunks as the recording happened? If so, can you show me how you are doing it? Maybe I can put my brain to it as well. There is also a library, react-native-webrtc, for real-time communication. It might be useful to see how they stream the data.

MrLoh commented 7 years ago

@ohtangza Could you share your code?

ohtangza commented 7 years ago

@rohitgoyal @MrLoh You can refer to the code here (https://github.com/hayanmind/react-native-audio/tree/feature/streaming-recording). We plan to make a pull request with this, but I am not sure whether the author will accept it. In our application, streaming audio to JS is pretty critical, so we implemented it. Currently this work is being done by https://github.com/ghsdh3409, who is my co-worker. Let us know if you have any ideas or feedback. Ah, FYI, we did not integrate it into the progress subscriber; we added a separate channel instead, because the native code used by the current implementation of this library does not give access to the PCM data (a native library issue).

rohitgoyal commented 7 years ago

@ohtangza Which one is the code? Are you referring to react-native-audio-toolkit?

ohtangza commented 7 years ago

@rohitgoyal Oh, sorry. I added the link above. It's still work-in-progress, but the binary streaming function should work.

wangping0105 commented 6 years ago

@ohtangza Hello, how is your branch (https://github.com/hayanmind/react-native-audio/tree/feature/streaming-recording) coming along? When will it be merged?

niocncn commented 6 years ago

any updates on it?

teslavitas commented 5 years ago

Is there any progress with live audio streaming in React Native? I found the react-native-audio-record library, which returns base64-encoded sound as it records, but I cannot make it work on iOS.
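For anyone consuming such chunks on the JS side: libraries of this kind typically emit base64-encoded 16-bit little-endian PCM, which you can unpack into signed samples. A minimal sketch; Buffer here is Node's (React Native would need a polyfill such as the 'buffer' npm package), and the assumed sample format should be checked against the library you actually use:

```javascript
// Turn a base64-encoded chunk of 16-bit little-endian PCM into samples.
function base64PcmToSamples(b64) {
  const bytes = Buffer.from(b64, 'base64');
  const samples = new Int16Array(bytes.length / 2);
  for (let i = 0; i < samples.length; i++) {
    samples[i] = bytes.readInt16LE(i * 2); // little-endian signed 16-bit
  }
  return samples;
}

// Round trip: encode two known samples and decode them back.
const src = Buffer.alloc(4);
src.writeInt16LE(1000, 0);
src.writeInt16LE(-1000, 2);
const samples = base64PcmToSamples(src.toString('base64'));
```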

Kram951 commented 5 years ago

Hi, same problem as @teslavitas has.