ThibaultBee / StreamPack

SRT/RTMP/RTMPS live streaming libraries for Android
https://thibaultbee.github.io/StreamPack/index.html
Apache License 2.0

Configure CameraRtmpLiveStreamer to write to a file in parallel to the streaming to RTMP server? #63

Open brnmr opened 1 year ago

brnmr commented 1 year ago

Is there a way to configure the CameraRtmpLiveStreamer to write the stream to a file? Maybe by exposing the muxer property, so we can pass in writeToFile = true?

class CameraRtmpLiveStreamer(
    context: Context,
    enableAudio: Boolean = true,
    initialOnErrorListener: OnErrorListener? = null,
    initialOnConnectionListener: OnConnectionListener? = null
) : BaseCameraLiveStreamer(
    context = context,
    enableAudio = enableAudio,
    muxer = FlvMuxer(context = context, writeToFile = false),
    endpoint = RtmpProducer(hasAudio = enableAudio, hasVideo = true),
    initialOnErrorListener = initialOnErrorListener,
    initialOnConnectionListener = initialOnConnectionListener
)
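
For illustration, a minimal sketch of the kind of thing being asked for: a copy of the class above with the FlvMuxer flag flipped to writeToFile = true. The class name is hypothetical, and whether FlvMuxer actually produces a usable file in this mode is exactly what the question is about; everything else is taken from the snippet above.

// Hypothetical: same wiring as CameraRtmpLiveStreamer, but asking FlvMuxer to also write to a file.
class CameraRtmpRecordingLiveStreamer(
    context: Context,
    enableAudio: Boolean = true,
    initialOnErrorListener: OnErrorListener? = null,
    initialOnConnectionListener: OnConnectionListener? = null
) : BaseCameraLiveStreamer(
    context = context,
    enableAudio = enableAudio,
    muxer = FlvMuxer(context = context, writeToFile = true), // the flag the question refers to
    endpoint = RtmpProducer(hasAudio = enableAudio, hasVideo = true),
    initialOnErrorListener = initialOnErrorListener,
    initialOnConnectionListener = initialOnConnectionListener
)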
ThibaultBee commented 1 year ago

Hi, There isn't any streamer that records and streams at the same time, but it is not impossible: you could duplicate the BaseStreamer and add another endpoint (maybe MediaMuxer).

I have been thinking about this feature for a while and still can't make up my mind about it, for several reasons:

1/ Live streaming is a lot to handle for a device (especially for a long live). Asking more of the device could be overwhelming (especially for a low-cost device).
2/ For live streaming, the encoders are set at a relatively low bitrate because of the limited bandwidth; using the same encoded frames for the record is kind of a shame.
3/ It is possible to duplicate the encoders, but that might make 1/ worse.

As I focus on video and audio quality more than on features, I guess I won't implement that soon.

Do you plan to record on all devices?

brnmr commented 1 year ago

@ThibaultBee thank you very much for your input on the topic.

The idea is to have the recorded file as a backup in case the streaming didn't go well. The recorded file can then be watched by the interested parties. So yes, we want to record on all devices. It will be one and the same mid-range device.

ThibaultBee commented 1 year ago

I understand the idea, but I don't have time to think about it or to develop it. There are loads of things I want to tackle before that.

And you can already do it on your own, starting from BaseStreamer.

nasko700 commented 1 year ago

Hello @ThibaultBee I'm trying to implement the logic using MediaMuxer (thank you for the hint). I don't use an Endpoint; I just duplicated the BaseStreamer class and use a MediaMuxer object in the same places as your muxer: IMuxer. Could you tell me if the call to mediaMuxer.writeSampleData() should be inside onOutputFrame of muxListener, or am I missing something? Currently, the created file is empty.

private val muxListener = object : IMuxerListener {
    override fun onOutputFrame(packet: Packet) {
        try {
            endpoint.write(packet)
            synchronized(this) {
                val currentTrackIndex = if (packet.isAudio) audioTrackIndex else videoTrackIndex
                mediaMuxer.writeSampleData(currentTrackIndex, packet.buffer, bufferInfo)
            }
...
ThibaultBee commented 1 year ago

Hi, As MediaMuxer is a muxer + file writer, you just have to add it right after the encoders (see https://github.com/ThibaultBee/StreamPack/blob/main/docs/assets/streamer.png). After the muxer, the frames will already have FLV or TS headers.

I don't know if you have read that: https://github.com/ThibaultBee/StreamPack/blob/main/DEVELOPER_README.md

Also, I have an old branch where I worked on a muxer based on MediaMuxer. See https://github.com/ThibaultBee/StreamPack/blob/experimental/mediamuxer/core/src/main/java/io/github/thibaultbee/streampack/internal/muxers/mediamuxer/MediaMuxerEndpoint.kt (Unfortunately, I do not remember the state of the MediaMuxer)
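
To make that concrete, here is a minimal sketch of feeding MediaMuxer with the raw encoded frames coming out of the MediaCodec encoders, i.e. before StreamPack's FLV/TS muxer adds its headers. It only uses the standard Android MediaMuxer/MediaCodec APIs; the FileRecorder class and the places where onFormat/onEncodedFrame would be called from the streamer are assumptions, not existing StreamPack hooks.

import android.media.MediaCodec
import android.media.MediaFormat
import android.media.MediaMuxer
import java.nio.ByteBuffer

// Sketch: an MP4 recorder fed directly with encoder output, bypassing the FLV/TS muxer.
class FileRecorder(outputPath: String) {
    private val mediaMuxer = MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
    private var videoTrackIndex = -1
    private var audioTrackIndex = -1
    private var started = false

    // Call once per encoder when MediaCodec reports INFO_OUTPUT_FORMAT_CHANGED.
    fun onFormat(format: MediaFormat, isAudio: Boolean) {
        if (isAudio) audioTrackIndex = mediaMuxer.addTrack(format)
        else videoTrackIndex = mediaMuxer.addTrack(format)
        if (videoTrackIndex >= 0 && audioTrackIndex >= 0 && !started) {
            mediaMuxer.start()
            started = true
        }
    }

    // Call for every encoded buffer, with the BufferInfo from the MediaCodec callback.
    fun onEncodedFrame(buffer: ByteBuffer, info: MediaCodec.BufferInfo, isAudio: Boolean) {
        if (!started) return // frames arriving before both tracks are ready are dropped
        val track = if (isAudio) audioTrackIndex else videoTrackIndex
        mediaMuxer.writeSampleData(track, buffer, info)
    }

    fun stop() {
        if (started) mediaMuxer.stop()
        mediaMuxer.release()
    }
}

Writing packets taken after the FLV/TS muxer (as in the snippet above) hands MediaMuxer data it does not expect, which is why the recording path has to branch off before the muxer.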

cdiddy77 commented 8 months ago

I am also needing to support this functionality. As far as I can tell, there are a couple of limitations to this approach, IIUC:

* The audio and video config must necessarily be the same for both streaming and recording. This is not desirable. In our scenarios, the ability to record is a workaround for the limited network bandwidth -- they can only stream at very limited bitrates with lower resolution, but if we can record at higher resolution, then the user can still have a high-res copy for later manipulation.

* The turning on and off of the stream and the recording must occur conjointly, whereas for our users it is more useful to turn streaming and recording on and off independently of each other. In particular, we allow the user to change the resolution of their RTMP stream "on-the-fly", which of course requires re-cycling the RTMP connection. This creates a gap in the livestream. We would prefer not to have a gap in the recording, if possible.

I am taking a different approach, which is to enable multiple ICameraStreamers to share the same CameraSource, starting by making ISurfaceSource keep a list of encoder surfaces.

I realize that this will result in separate simultaneous encoding and that may not work for some or all devices.

ThibaultBee commented 8 months ago

> I am also needing to support this functionality. As far as I can tell, there are a couple of limitations to this approach, IIUC:
>
> * The audio and video config must necessarily be the same for both streaming and recording. This is not desirable. In our scenarios, the ability to record is a workaround for the limited network bandwidth -- they can only stream at very limited bitrates with lower resolution, but if we can record at higher resolution, then the user can still have a high-res copy for later manipulation.
>
> * The turning on and off of the stream and the recording must occur conjointly, whereas for our users it is more useful to turn streaming and recording on and off independently of each other. In particular, we allow the user to change the resolution of their RTMP stream "on-the-fly", which of course requires re-cycling the RTMP connection. This creates a gap in the livestream. We would prefer not to have a gap in the recording, if possible.

This is exactly what I expect to implement, but it is not an easy task. Recording should not suffer from a bad live stream.

> I realize that this will result in separate simultaneous encoding and that may not work for some or all devices.

It is possible to run multiple encoders of the same type at the same time. I am under the impression that every type of encoder can run multiple sessions, and as encoders come with the SoC, it will work on all devices. I did something like this about 4 years ago.

The issue with multiple encodings is not an encoder limitation. It is the heat. Running (encoder + GPU + CPU) * 2 + modem + camera + screen will make your phone heat up a lot. To protect itself, the phone activates an internal protection mechanism, CPU throttling, and the result is lower performance (missing frames, ...). The problem already exists with very long lives on a phone.

That's mostly why I haven't developed this feature. I don't like either choice.

cdiddy77 commented 8 months ago

If there is a way to recognize when the heat-motivated throttling is occurring, I can encourage my users to make good choices, i.e. buy a special case with a fan.

ThibaultBee commented 8 months ago

Never tested but https://developer.android.com/reference/android/os/PowerManager.html#getCurrentThermalStatus() could help.
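
Also untested, but a minimal sketch of how that API could be wired up (Android 10 / API 29 and above; the SEVERE threshold and the warn callback are arbitrary choices for illustration):

import android.content.Context
import android.os.Build
import android.os.PowerManager

// Sketch: listen for thermal status changes and warn when the device reports
// SEVERE or worse, i.e. when throttling is likely already hurting performance.
fun watchThermalStatus(context: Context, warn: (Int) -> Unit) {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.Q) return
    val powerManager = context.getSystemService(Context.POWER_SERVICE) as PowerManager
    powerManager.addThermalStatusListener { status ->
        if (status >= PowerManager.THERMAL_STATUS_SEVERE) warn(status)
    }
}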

cdiddy77 commented 7 months ago

After a great deal of learning and hacking trying to get it working, I have come to a reset point. There are a couple of problems, one internal, one external:

After encountering this, I tried a couple of things. The most optimistic was to re-enable the "does not have a surface" video encoder pathway, in the hope that I could send frames to the encoder the old-fashioned way, in software, as is done with audio encoding. But the CameraSource does not support getFrame, so, yeah, that won't fly.

I am taking a step back, and am going to try a different approach which addresses both of the above:

I welcome your thoughts on all of the above, and thank you for all your hard work in getting StreamPack to do what it does. I will speak with my employer about making a well-deserved contribution once we ship the Android app.

As a related aside, I have already shipped an app on iOS that does dual output with independent resolution settings. It uses HaishinKit.swift, which supports this directly. I certainly don't understand how it does what it does, because I did not need to modify it. Our users absolutely love the feature, which is why I am investing the time and energy in getting it working before shipping our Android version.

ThibaultBee commented 7 months ago

Hi,

Wow, this is a very long message. First, you don't have to use the BaseStreamer; you can obviously create your own :) You can use your own filters, ... without even forking this repository.

Indeed, using a lot of surfaces directly on the camera won't be compatible with a lot of devices.

The CodecSurface is responsible for GPU processing of images (like scaling, rotation and mirroring (for the front camera)). If I were to implement a Streamer that does both recording and streaming, I would use this CodecSurface to get frames from the camera and send them to your 2 encoders (MediaCodec). This CodecSurface should not be in the VideoMediaCodecEncoder; it should be a separate component. The behavior of this CodecSurface is also a bit sketchy because it uses OpenGL internally 😵
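
As an illustration of that direction (not existing StreamPack code), here is a sketch of the "one camera, two encoders" idea using plain MediaCodec: two independent H.264 encoders, one low-bitrate for the RTMP stream and one higher-quality for the local file, each exposing an input Surface that a CodecSurface-like GPU component would render every camera frame into. Resolutions, bitrates and function names are arbitrary.

import android.media.MediaCodec
import android.media.MediaCodecInfo
import android.media.MediaFormat
import android.view.Surface

// Create an AVC encoder that takes its input from a Surface; the caller is
// responsible for rendering camera frames into the returned Surface.
fun createEncoder(width: Int, height: Int, bitrate: Int): Pair<MediaCodec, Surface> {
    val format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height).apply {
        setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface)
        setInteger(MediaFormat.KEY_BIT_RATE, bitrate)
        setInteger(MediaFormat.KEY_FRAME_RATE, 30)
        setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)
    }
    val codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC)
    codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
    return codec to codec.createInputSurface() // must be called after configure(), before start()
}

fun setUpEncoders() {
    // Low-bitrate encoder for the live stream, higher-quality encoder for the recording.
    val (streamEncoder, streamSurface) = createEncoder(1280, 720, 2_000_000)
    val (recordEncoder, recordSurface) = createEncoder(1920, 1080, 8_000_000)
    // A GPU component (like CodecSurface) would now render every camera frame to both
    // streamSurface and recordSurface, then each codec is started and drained separately.
}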

I haven't commented the internal code, sorry about that. It is definitely possible, but it is certainly not an easy path. Keep on going 👍

cdiddy77 commented 7 months ago

Haha, thanks, and yes, I talk a lot. In the startup world, it's pretty rare that a problem consumes an entire week of my life. Most PRs are, like, 2 days at most.

sharmashivanu91187 commented 5 months ago

@cdiddy77 Did you have any success in saving videos locally?