liuyuan22 opened 10 months ago
Custom audio data can't be published at the moment. It's on our roadmap though.
Custom video data can be published by implementing VideoCapturer and passing frames through to the capturerObserver.onFrameCaptured callback. You probably have more efficient formats available (e.g. texture buffers or I420), but worst case, you can use BitmapFrameCapturer as a starting point.
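The push flow described above can be sketched as follows. Note this is a minimal, self-contained model of the pattern only: FrameObserver and CustomFrameCapturer here are simplified stand-ins for org.webrtc.CapturerObserver and org.webrtc.VideoCapturer, whose real initialize/startCapture methods take Android-specific arguments (SurfaceTextureHelper, Context, capture dimensions and framerate).

```java
// Simplified stand-in for org.webrtc.CapturerObserver -- the real
// interface receives an org.webrtc.VideoFrame rather than a raw byte[].
interface FrameObserver {
    void onFrameCaptured(byte[] frameData, long timestampNs);
}

// Simplified stand-in for org.webrtc.VideoCapturer: the observer arrives
// via initialize(), and your own decode/capture loop pushes frames in.
class CustomFrameCapturer {
    private FrameObserver observer;
    private boolean capturing;

    void initialize(FrameObserver observer) { this.observer = observer; }
    void startCapture() { capturing = true; }
    void stopCapture() { capturing = false; }

    // Call this for every frame your source produces.
    void pushFrame(byte[] frameData, long timestampNs) {
        if (capturing && observer != null) {
            observer.onFrameCaptured(frameData, timestampNs);
        }
    }
}

public class CapturerSketch {
    public static void main(String[] args) {
        CustomFrameCapturer capturer = new CustomFrameCapturer();
        final int[] received = {0};
        capturer.initialize((data, ts) -> received[0]++);
        capturer.startCapture();
        capturer.pushFrame(new byte[16], 0L);
        capturer.pushFrame(new byte[16], 33_000_000L); // ~33 ms later at 30 fps
        capturer.stopCapture();
        capturer.pushFrame(new byte[16], 66_000_000L); // dropped: not capturing
        System.out.println("frames delivered: " + received[0]); // prints 2
    }
}
```

The key design point is that the capturer never pulls frames itself; it just holds the observer handed to it in initialize and forwards whatever your source delivers while capturing is active.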
Ok thanks, where can I see your roadmap?
Hi all!
Like @liuyuan22, I'm really interested in this topic. I have an H.264 stream with height, width, fps, frame data (byte[]), and pts, and I really want to send it to a deployed LiveKit server.
Looking through the Android SDK documentation, I'm not able to find VideoCapturer or capturerObserver.onFrameCaptured.
Where can I find the related documentation?
Thanks in advance! :blush:
@hardenerdev That would be org.webrtc.VideoCapturer, and the capturerObserver is passed in via its initialize method.
This is a simple implementation based on pushing bitmaps. You can try adapting it for your own purposes.
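For the H.264 stream mentioned above, two bits of bookkeeping come up before frames can be pushed: org.webrtc.VideoFrame timestamps are in nanoseconds, so the pts needs converting, and a decoded I420 frame has a fixed byte size. A small sketch, assuming pts in milliseconds (check your stream's actual time base) and decoded I420 output:

```java
public class FrameMath {
    // org.webrtc.VideoFrame timestamps are in nanoseconds; a pts in
    // milliseconds (an assumption -- verify your stream's time base)
    // converts like this:
    static long ptsMsToTimestampNs(long ptsMs) {
        return ptsMs * 1_000_000L;
    }

    // An I420 frame is a full-resolution Y plane plus quarter-resolution
    // U and V planes: width * height * 3/2 bytes in total.
    static int i420ByteSize(int width, int height) {
        return width * height * 3 / 2;
    }

    public static void main(String[] args) {
        System.out.println(ptsMsToTimestampNs(33));  // 33000000
        System.out.println(i420ByteSize(1280, 720)); // 1382400
    }
}
```

Also note that an encoded H.264 byte[] can't be handed to the capturer directly; the frames have to be decoded to a raw format (e.g. I420) first.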
@davidliu Hi, any updates on this? I have an IP camera that supports RTSP streaming. I'm able to publish the video data to LiveKit, but can't find a way to publish the audio data.
@Orienser sorry, no updates here.
Is your feature request related to a problem? Please describe. We have a small camera. The camera has a wifi module and provides an Android SDK. Our app uses the SDK to connect to the camera. The camera sends video and audio data through the SDK. How can I send this data through your SDK? Thanks~
Describe the solution you'd like None.
Describe alternatives you've considered None.
Additional context None.