pedroSG94 / RootEncoder

RootEncoder for Android (rtmp-rtsp-stream-client-java) is a stream encoder to push video/audio to media servers using the RTMP, RTSP, SRT, and UDP protocols, with all code written in Java/Kotlin.

Getting mic in webview during stream on Samsung phones #568

Closed · about-prog closed this issue 3 years ago

about-prog commented 4 years ago

Hello! I have a case where the user streams and at the same time is able to accept WebRTC calls through a WebView overlay. It works fine on all Android devices except some Samsung Galaxies.

On a Galaxy S10 (and I think other models too), voice calls work if the user doesn't stream, but if the user starts the stream first and then tries to make a WebRTC call, audio from the microphone is not captured for the WebRTC call (audio in the stream is fine). If the user starts the voice call first and then starts the stream, the sound is good. Any ideas on how to share the mic when the stream started first?

pedroSG94 commented 4 years ago

I think you are opening the microphone 2 times, and this is not possible. In this case you have 2 ways:

- Change the microphone (audio source) used by one of the libraries, so they don't both open the same one.
- Get the microphone buffer from one library, duplicate it, and send the copy to the other.

about-prog commented 4 years ago

Thank you for your answer!

The problem is that the second library is built into Chromium, and I haven't found any way to manage audio recording in it.

Now I'm trying to shut down the stream audio for a couple of seconds to establish voice calls (if I make the voice call first with WebRTC and then start the stream, the sound is completely fine) and then "recreate audio" for the stream. I tried to stop audio with the microphoneManager and audioRecord stop methods. After that the sound for a voice call is fine, but I wasn't able to restore audio on the stream. And all this applies only to Samsung phones; others are fine.

Do you have hints on how to "reattach" audio during the stream?

pedroSG94 commented 4 years ago

If you only need to restart microphoneManager, you can configure the microphone with the createMicrophone method and start it with the start method. After that, you should receive microphone data in the inputPCMData callback, or you will get an error in logcat if that microphone is in use.
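
A minimal sketch of that restart path, assuming the MicrophoneManager constructor and Frame-based inputPCMData callback of the library versions from around this time; exact signatures may differ between releases.

```java
import com.pedro.encoder.Frame;
import com.pedro.encoder.input.audio.GetMicrophoneData;
import com.pedro.encoder.input.audio.MicrophoneManager;

// Sketch only: class/package names match the library versions of this era,
// but exact signatures may differ between releases.
class MicRestarter implements GetMicrophoneData {
  private final MicrophoneManager microphoneManager = new MicrophoneManager(this);

  @Override
  public void inputPCMData(Frame frame) {
    // Raw PCM 16-bit data arrives here while the microphone is running.
  }

  void restartMicrophone() {
    microphoneManager.stop();             // release the current AudioRecord
    microphoneManager.createMicrophone(); // recreate it with default settings
    microphoneManager.start();            // resume inputPCMData callbacks
  }
}
```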

Anyway, I recommend you change the microphone used, as I explained in the first way (you can't modify the webrtc microphone, but that library should always use the same microphone, so if you can use another microphone with my library, this should work).
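
For that first way, a hedged sketch of picking a different audio source so both libraries don't open the same input; the createMicrophone overload with an audioSource parameter is an assumption about the library version in use, and the values are illustrative.

```java
import android.media.MediaRecorder;
import com.pedro.encoder.input.audio.MicrophoneManager;

// Assumption: this library version exposes a createMicrophone overload taking
// an Android audio source; parameter order may differ between releases.
void useAlternateAudioSource(MicrophoneManager microphoneManager) {
  microphoneManager.createMicrophone(
      MediaRecorder.AudioSource.CAMCORDER, // avoid DEFAULT/MIC, which WebRTC grabs
      32000,  // sample rate in Hz
      true,   // stereo
      false,  // echo canceler
      false); // noise suppressor
  microphoneManager.start();
}
```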

about-prog commented 4 years ago

Changing the microphone didn't help. I also can't just stop the microphone, recreate it later, and start it.

pedroSG94 commented 4 years ago

Your only way now is to get the microphone buffer from the webrtc library, duplicate this buffer, and send it to my library. I hope you can modify the webrtc library.
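
A sketch of the fan-out idea; PcmSink is a hypothetical placeholder standing in for whatever ingestion points the modified WebRTC build and this library expose, not a real API of either.

```java
import java.util.Arrays;

// Hypothetical fan-out: one capture callback duplicates each PCM chunk so both
// consumers get their own copy. PcmSink is a placeholder, not a real API of
// either library.
interface PcmSink {
  void write(byte[] pcm, int offset, int size);
}

class PcmFanOut {
  private final PcmSink webrtcSink;        // the WebRTC call's audio input
  private final PcmSink streamEncoderSink; // this library's audio input

  PcmFanOut(PcmSink webrtcSink, PcmSink streamEncoderSink) {
    this.webrtcSink = webrtcSink;
    this.streamEncoderSink = streamEncoderSink;
  }

  void onCapturedPcm(byte[] pcm, int offset, int size) {
    byte[] copy = Arrays.copyOfRange(pcm, offset, offset + size);
    webrtcSink.write(pcm, offset, size);    // original buffer stays with the call
    streamEncoderSink.write(copy, 0, size); // the copy goes to the stream encoder
  }
}
```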

about-prog commented 4 years ago

Is it possible to release the audioRecord, send "empty" sound for some time, and after that recreate the audioRecord? Like mute, but without an actual audioRecord. As I understand it, sound capturing happens here:

```java
private Frame read() {
  pcmBuffer.rewind();
  int size = audioRecord.read(pcmBuffer, pcmBuffer.remaining());
  if (size <= 0) {
    return null;
  }
  return new Frame(muted ? pcmBufferMuted : customAudioEffect.process(pcmBuffer.array()),
      muted ? 0 : pcmBuffer.arrayOffset(), size);
}
```

If I try to stop microphoneManager and then call microphoneManager.createMicrophone and start, it works fine on a Samsung Galaxy S5 (Android 7, if I remember correctly), but on the S8 and S10 (both on Android 9) the stream "crashes" while recreating the microphone.

Can it be achieved with audioRecord.stop, audioRecord.release, and then recreating the audioRecord?
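
A minimal sketch of that stop/release/recreate cycle on a raw AudioRecord, with illustrative parameters; the "empty" sound in between would have to come from zero-filled buffers fed to the encoder through a hook of your own, which is not shown here.

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

// Sketch with illustrative parameters; real code must match the encoder's
// sample rate and channel configuration.
class MicCycle {
  private static final int SAMPLE_RATE = 32000;
  private static final int CHANNEL = AudioFormat.CHANNEL_IN_STEREO;
  private static final int ENCODING = AudioFormat.ENCODING_PCM_16BIT;

  private AudioRecord audioRecord;

  void releaseMic() {
    audioRecord.stop();
    audioRecord.release(); // frees the mic so another client (WebRTC) can open it
    audioRecord = null;
    // While released, zero-filled buffers could stand in as "empty" sound,
    // fed to the encoder through a hook of your own (not shown).
  }

  void recreateMic() {
    int bufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE, CHANNEL, ENCODING);
    audioRecord = new AudioRecord(MediaRecorder.AudioSource.DEFAULT,
        SAMPLE_RATE, CHANNEL, ENCODING, bufferSize);
    audioRecord.startRecording();
  }
}
```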

pedroSG94 commented 4 years ago

I don't know your crash, but if it is related to the microphone being in use, then stopping and releasing that microphone should let you recreate it without problems. If you can, share that crash with me so I can find out the reason for it.

about-prog commented 4 years ago

> I don't know your crash, but if it is related to the microphone being in use, then stopping and releasing that microphone should let you recreate it without problems.

It recreated fine on the Samsung S5, but on the S8 and S10 it caused problems, and the problem might be on the player side. We are using a Wowza server, and I tried different video players (different versions of Flowplayer and Video.js with different plugins) to play RTMP, MPEG-DASH, and HLS; it all had different effects on playback, but none of it was good.

Now I've returned to your advice to share buffer data between the two libraries. I found that it's possible to change the audio data of webrtc. If an AudioRecord is not created (because the microphone is busy with your library), webrtc creates an "audiotrack" anyway, but fills it not with real audio data, just an array of NaN. So if I change this data, I can get noise on the other side of the voice call.

I'm trying to use the data received in inputPCMData of the Camera1Base class. So I'm passing the frame data to the webview and trying to insert it into the audiotrack, but I get noise. If I understand correctly, the data in inputPCMData is raw audio. Does it now need to be processed through the encoder that webrtc uses?

pedroSG94 commented 4 years ago

inputPCMData produces exactly a PCM 16-bit buffer. You can also get the AAC buffer in getAacData. Webrtc normally uses the Opus or Vorbis codec, so maybe you need to encode it. To encode, you have 2 ways:
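
A hedged sketch of one possible route, using Android's MediaCodec, assuming a device that ships an Opus encoder (commonly Android 10+); a native libopus build via JNI would be the other usual route. The sample rate and bitrate are illustrative.

```java
import java.nio.ByteBuffer;
import android.media.MediaCodec;
import android.media.MediaFormat;

// Sketch only: assumes a device whose MediaCodec ships an Opus encoder
// (commonly Android 10+). Sample rate and bitrate are illustrative.
class OpusEncoderSketch {
  private final MediaCodec encoder;

  OpusEncoderSketch() throws java.io.IOException {
    MediaFormat format = MediaFormat.createAudioFormat(
        MediaFormat.MIMETYPE_AUDIO_OPUS, 48000, 2);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 64000);
    encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_OPUS);
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    encoder.start();
  }

  // Feed one raw PCM 16-bit chunk (e.g. a buffer from inputPCMData) and drain
  // any Opus packets the codec has ready.
  void encodeChunk(byte[] pcm, long presentationTimeUs) {
    int inIndex = encoder.dequeueInputBuffer(10_000);
    if (inIndex >= 0) {
      ByteBuffer in = encoder.getInputBuffer(inIndex);
      in.clear();
      in.put(pcm, 0, pcm.length);
      encoder.queueInputBuffer(inIndex, 0, pcm.length, presentationTimeUs, 0);
    }
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    int outIndex = encoder.dequeueOutputBuffer(info, 10_000);
    while (outIndex >= 0) {
      ByteBuffer out = encoder.getOutputBuffer(outIndex);
      // 'out' holds an encoded Opus packet of info.size bytes; hand it to
      // whatever consumes the call's audio, then release the buffer.
      encoder.releaseOutputBuffer(outIndex, false);
      outIndex = encoder.dequeueOutputBuffer(info, 0);
    }
  }
}
```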