MennoScholten opened 20 hours ago
Hello,
To send the detection to the stream you can use AndroidViewFilterRender. You can create an XML layout, set it as the filter, and modify that view in realtime. About the detection: this depends on your object detection API. What do you need to work with? (buffer image format, resolution, etc.)
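A minimal sketch of the approach described above, assuming RootEncoder's `AndroidViewFilterRender` with a `setView` method and `GlInterface.setFilter` (the layout name `overlay` is hypothetical; check the exact class and method names against your library version):

```java
import android.content.Context;
import android.view.LayoutInflater;
import android.view.View;
import com.pedro.encoder.input.gl.render.filters.object.AndroidViewFilterRender;

public class OverlayFilterSetup {
  public static void attachOverlay(Context context, com.pedro.library.rtmp.RtmpCamera1 rtmpCamera1) {
    // Inflate an overlay layout (e.g. res/layout/overlay.xml containing a custom
    // View that draws bounding boxes) and render it on top of the camera image.
    View overlay = LayoutInflater.from(context).inflate(R.layout.overlay, null);

    AndroidViewFilterRender filterRender = new AndroidViewFilterRender();
    filterRender.setView(overlay);
    rtmpCamera1.getGlInterface().setFilter(filterRender);

    // Later, from your detection callback, update the view's state and
    // invalidate it; the filter re-renders the view into the stream.
    overlay.postInvalidate();
  }
}
```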
Hi, thanks for the quick reply. As for the object detection, it requires an ImageProxy, which will then be converted to a Bitmap.
Can you work with an Image or a Bitmap directly? You can try using Camera2Base (SrtCamera2) or StreamBase (SrtStream) and use addImageListener to capture an Image object from the camera.
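A hedged sketch of what the listener could look like. The exact `addImageListener` signature varies between library versions, so treat the parameters below as assumptions and check your version's API:

```java
import android.graphics.ImageFormat;
import android.media.Image;

public class FrameCaptureSetup {
  public static void listenForFrames(com.pedro.library.srt.SrtCamera2 srtCamera2) {
    // Assumed signature: format, max buffered images, and a callback
    // receiving each camera frame as an android.media.Image.
    srtCamera2.addImageListener(ImageFormat.YUV_420_888, 2, (Image image) -> {
      // Convert the YUV Image to a Bitmap here, run your detection on it,
      // then close the Image so the camera can reuse the buffer.
      image.close();
    });
  }
}
```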
```java
rtmpCamera1.getGlInterface().takePhoto(new TakePhotoCallback() {
  @Override
  public void onTakePhoto(Bitmap bitmap) {
    // Do what you need to do here with the image
  }
});
```
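Once you have the Bitmap, running TensorFlow Lite object detection on it could look like the sketch below, using the TFLite Task Vision library (`model.tflite` is a placeholder for your own model file):

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.RectF;
import java.io.IOException;
import java.util.List;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.task.vision.detector.Detection;
import org.tensorflow.lite.task.vision.detector.ObjectDetector;

public class DetectionRunner {
  public static void detect(Context context, Bitmap bitmap) throws IOException {
    // Load a bundled TFLite detection model from assets.
    ObjectDetector detector = ObjectDetector.createFromFile(context, "model.tflite");

    // Run detection on the camera Bitmap.
    List<Detection> results = detector.detect(TensorImage.fromBitmap(bitmap));

    for (Detection detection : results) {
      RectF box = detection.getBoundingBox();
      // Draw `box` onto your overlay view here so the filter streams it.
    }
  }
}
```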
Yes, it should be possible to use the Image/Bitmap directly.
Could you provide me with some code ideas for the XML file implementation for the filter and the SrtCamera2 interface? It is my first time working with OpenGL.
Hi, I'm a student studying your API and I have some questions about an implementation using it.
I would like to do object detection on a live video stream and send the results (so, a livestream video with bounding boxes) over SRT.
For the object detection I will use TensorFlow Lite. How can I extract such a video stream from your API, apply object detection, and send the stream through SRT to a remote server? I sincerely hope you could give me some advice/directions on how I should implement your API.