techyian / MMALSharp

C# wrapper to Broadcom's MMAL with an API to the Raspberry Pi camera.
MIT License

Ensuring 10fps when using inMemoryHandler #79

Closed - dommy1C closed this issue 5 years ago

dommy1C commented 5 years ago

Using the InMemoryHandler to deal with images as a byte array, and bypassing the automatic save-to-disk routine, I have this:

using System;
using System.Collections.Generic;
using MMALSharp.Handlers;

public class ByteArrayCaptureHandler : ICaptureHandler
{
    // Raised whenever a complete JPEG frame has been assembled.
    public event Action<byte[]> NewImage;

    public List<byte> WorkingData { get; set; }

    public ByteArrayCaptureHandler()
    {
        this.WorkingData = new List<byte>();
    }

    public void Dispose()
    {
    }

    public ProcessResult Process(uint allocSize)
    {
        return new ProcessResult();
    }

    public void Process(byte[] data)
    {
        this.WorkingData.AddRange(data);

        // A JPEG ends with the EOI marker 0xFF 0xD9 (255, 217). If this buffer
        // ends with it, we have a complete frame to hand to the subscriber.
        // (This assumes the marker never straddles two buffers.)
        if (data.Length >= 2)
        {
            if (data[data.Length - 2] == 255 && data[data.Length - 1] == 217)
            {
                NewImage?.Invoke(WorkingData.ToArray());
                WorkingData.Clear();
            }
        }
    }

    public void PostProcess()
    {
    }
}

I find this works well.

However, I wish to throttle capture to 10 frames per second. I am sure I could ensure I only pass 10 frames a second to my subscriber by comparing time differences, etc., but what I was hoping to achieve was forcing the camera itself to take only 10 frames per second. Is this achievable?

techyian commented 5 years ago

Hi, thanks for raising the issue here - it means I can post more in a comment than I can on Twitter.

You can keep your capture handler as you currently have it. However, instead of calculating your timediffs in there, you can move the timediff code to a callback handler - in your case, you'll want to use the FastImageOutputCallbackHandler class.

As described in this gist, I calculate the timediffs in the following block of code:

if (eos)
{
    if (DateTime.Now - _start > TimeSpan.FromSeconds(1))
    {
        MMALLog.Logger.Info($"FPS: {_fps}");
        _fps = 0;
        _start = DateTime.Now;
    }

    _fps++;

    // In rapid capture mode, provide the ability to do post-processing once we have a complete frame.
    this.WorkingPort.Handler?.PostProcess();
}

There is a DateTime field at the top of the class called _start and an int called _fps.
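Declared at class level, they look like this (initialising _start to DateTime.Now is an assumption on my part, so that the first one-second window begins when the handler is created):

    private DateTime _start = DateTime.Now;
    private int _fps;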

The reason I say the callback handler is a better place to calculate this is that you have raw access to the buffer header object, which tells you whether the piece of data you're currently dealing with is the end of the frame. We only want to increase the fps value when we're dealing with the end of a frame, and the eos boolean will be true if the data is the end of the image frame.

As an example piece of code, this is what I'm currently using to test this functionality and it's returning between 10-11fps:


    // cam is the MMALCamera singleton.
    var cam = MMALCamera.Instance;

    MMALCameraConfig.SensorMode = MMALSensorMode.Mode5;
    MMALCameraConfig.VideoFramerate = new MMAL_RATIONAL_T(10, 1);
    MMALCameraConfig.VideoResolution = new Resolution(810, 648);

    using (var splitter = new MMALSplitterComponent(null))
    using (var imgEncoder = new MMALImageEncoder(null, continuousCapture: true))
    using (var nullSink = new MMALNullSinkComponent())
    {
        cam.ConfigureCameraSettings();

        // Create our component pipeline.
        imgEncoder.ConfigureOutputPort(0, MMALEncoding.JPEG, MMALEncoding.I420, 90);

        cam.Camera.VideoPort.ConnectTo(splitter);
        splitter.Outputs[0].ConnectTo(imgEncoder);
        cam.Camera.PreviewPort.ConnectTo(nullSink);

        // Camera warm up time.
        await Task.Delay(2000);
        await cam.ProcessAsync(cam.Camera.VideoPort);
    }

I haven't used any capture handlers in this example as I'm not interested in saving the image frames; I just want to demonstrate the FPS calculation.

Does this make sense?

dommy1C commented 5 years ago

Whilst I understand what you have done, I admit I am still confused as to how to practically implement it.

In my handler event:

public void Process(byte[] data)

I am checking the data as it comes in. Once I read a JPEG EOF marker, I raise the event to my subscriber.

This event:

NewImage?.Invoke(WorkingData.ToArray())

goes to my custom code to deal with the JPEG array.

I use this:

_fps = FrameCounter.CalculateFeedRate();

public class FrameCounter
{
    private static int lastFrameRate = 0;
    private static int lastTick = 0;
    private static int frameRate = 0;

    public static int CalculateFeedRate()
    {
        // Once a second, snapshot the count accumulated over the last window
        // and start a new one.
        if (System.Environment.TickCount - lastTick >= 1000)
        {
            lastFrameRate = frameRate;
            frameRate = 0;
            lastTick = System.Environment.TickCount;
        }

        frameRate++;
        return lastFrameRate;
    }
}

to calculate my fps, which on startup gives me up to 17 fps.

After 12 hours it reduces to 4-5 fps.
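For reference, the pieces wire together roughly like this (a hypothetical sketch using the handler and event from my first comment; the real processing happens where the comment indicates):

    var handler = new ByteArrayCaptureHandler();

    handler.NewImage += jpeg =>
    {
        // One event per complete JPEG, so this counts JPEGs per second.
        _fps = FrameCounter.CalculateFeedRate();

        // ...then push the frame to the HTML canvas / custom processing.
    };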

techyian commented 5 years ago

Sorry - from our previous Twitter conversations I was expecting you to edit the source FastImageOutputCallbackHandler class just to test this, but I agree I've not explained this as well as I could have.

When MMALSharp receives image frame data, it passes it to a Callback Handler class, which will subsequently call a Capture Handler - please note the difference between the two.

If you don't want to edit the source FastImageOutputCallbackHandler class, you will need to create a new callback handler - please use the gist I made for you as your guide here, and just give it a different name so it doesn't conflict with FastImageOutputCallbackHandler. A rough sketch follows below.
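Please treat this sketch as illustrative only: the base class name, the constructor parameter type and the way eos is derived from the buffer header are all assumptions on my part - copy the real FastImageOutputCallbackHandler from the source and adjust for the version you're building against.

    using System;
    using System.Linq;
    using MMALSharp;
    using MMALSharp.Callbacks; // assumption: the namespace holding the callback handler base classes

    public class MyTestOutputCallbackHandler : DefaultOutputCallbackHandler // assumed base class
    {
        private DateTime _start = DateTime.Now;
        private int _fps;

        public MyTestOutputCallbackHandler(MMALPortImpl port)
            : base(port)
        {
        }

        public override void Callback(MMALBufferImpl buffer)
        {
            // Let the base class pass the data through to any registered capture handler.
            base.Callback(buffer);

            // Only count a frame when this buffer marks the end of an image.
            var eos = buffer.Properties.Any(p =>
                p == MMALBufferProperties.MMAL_BUFFER_HEADER_FLAG_FRAME_END ||
                p == MMALBufferProperties.MMAL_BUFFER_HEADER_FLAG_EOS);

            if (eos)
            {
                if (DateTime.Now - _start > TimeSpan.FromSeconds(1))
                {
                    MMALLog.Logger.Info($"FPS: {_fps}");
                    _fps = 0;
                    _start = DateTime.Now;
                }

                _fps++;
            }
        }
    }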

Once you've made your new callback handler, you can register it to the Image Encoder's output port:


    // cam is the MMALCamera singleton.
    var cam = MMALCamera.Instance;

    MMALCameraConfig.SensorMode = MMALSensorMode.Mode5;
    MMALCameraConfig.VideoFramerate = new MMAL_RATIONAL_T(10, 1);
    MMALCameraConfig.VideoResolution = new Resolution(810, 648);

    using (var splitter = new MMALSplitterComponent(null))
    using (var imgEncoder = new MMALImageEncoder(null, continuousCapture: true))
    using (var nullSink = new MMALNullSinkComponent())
    {
        cam.ConfigureCameraSettings();

        // Create our component pipeline.
        imgEncoder.ConfigureOutputPort(0, MMALEncoding.JPEG, MMALEncoding.I420, 90);

        // Register the custom callback handler on the encoder's output port.
        imgEncoder.RegisterOutputCallback(new MyTestOutputCallbackHandler(imgEncoder.Outputs[0]));

        cam.Camera.VideoPort.ConnectTo(splitter);
        splitter.Outputs[0].ConnectTo(imgEncoder);
        cam.Camera.PreviewPort.ConnectTo(nullSink);

        // Camera warm up time.
        await Task.Delay(2000);

        await cam.ProcessAsync(cam.Camera.VideoPort);
    }

Please forget about your custom capture handler for now - you can introduce this again later if you want.

This outputs between 10-11 fps to the console window for me, so I know it definitely works, and I've had it running for a number of hours now without it dropping in framerate.

dommy1C commented 5 years ago

Hi, I am still not getting it :( Visually, I am updating a canvas control on an HTML page with each JPEG captured. I can see the flow of the images when I wave my hand.

I will then go to bed.

After about 12 hours I will go to my monitor and wave my hand at the camera.

My hand is now "choppy".

The fps is showing 4-5 fps.

This is true when my code (obviously) is included in the test to process the images.

But it is also true if I comment out my code and leave it running for 12 hours.

So, my code is not impeding the results.

In my ByteArrayCaptureHandler, which is the same as your in-memory handler, I can inform my fps routine whenever I physically have a new JPEG.

What is the difference between the Capture Handler and the Callback Handler in the context of the fps?

I am quite happy to edit the source, but when I wrote my handler a few weeks ago that InMemoryHandler did not exist, you see.

If I save the files to disk, the number of frames saved per second is reduced over the time period I mentioned.

techyian commented 5 years ago

Have you tried using camera sensor mode 5 - does it make a difference at all?

MMALCameraConfig.SensorMode = MMALSensorMode.Mode5;
MMALCameraConfig.VideoFramerate = new MMAL_RATIONAL_T(10, 1);
MMALCameraConfig.VideoResolution = new Resolution(810, 648);

Using this config, my output shows 10fps - no higher, no lower. What does your output show using this config? I admit I haven't tested this over a 12-hour period; I can certainly give it a try though.

dommy1C commented 5 years ago

Yes, I am on that now. It made no difference, I am afraid. BTW, I appreciate all the time you are giving to this.

techyian commented 5 years ago

What fps does it start off with? If your fps calculation is correct, it should output 10fps - it certainly does on my end. I'm not sure what's going wrong on your side.

dommy1C commented 5 years ago

It averages around 14 fps.

But like I said, after a few hours it starts to drop.

When you set the fps, does that instruct the camera to only output 10fps? The reason I ask is that I was wondering whether our respective Pis were configured differently, which could give us opposing results.

Additionally, the whole point of reducing and controlling the fps was to see whether it would reduce the 'slowdown' that was occurring after a few hours.

All I am doing is measuring the number of complete JPEGs I receive every second in my byte handler routine.

techyian commented 5 years ago

It should force it to output 10fps, or as close to that as realistically possible (I get between 10-11fps). I am using the 1st gen camera module, but on mode 5 both modules are capable of outputting 10fps.

I will leave my Pi running and see if there is any slowdown by the morning.

dommy1C commented 5 years ago

OK :) I am using a Raspberry Pi 3 B+ and I have enabled zram. Thanks for your assistance.

dommy1C commented 5 years ago

Hi, I have to admit I am further confused, because I had sent you my code/application before, and you told me you could replicate the problem and that you would need to write something in C to test why this is happening?

techyian commented 5 years ago

I've had the camera running for approximately 12 hours and, using the settings I provided in my example, I'm still seeing 10fps. Sometimes it drops by a few fps, but nothing drastic.

Yes, I am aware of the slowdown you experienced and I still need to do some more experimenting.

I think it is either a configuration issue on your side or your fps calculation isn't accurate.

dommy1C commented 5 years ago

Hi. Well, you have seen my code - it is what you actually sent to me, i.e. the configuration. The fps is accurate (not much different to yours). Like I said, I can visually see the degradation of the fps, and if I start my camera off and save each JPEG received, I will be saving 13-14 fps initially, which drops to 4-5 fps after a few hours. So, not sure what to say? Are you using your InMemoryHandler? How else are you validating your results? Are you saving to SD card? NB: my fps is based on the number of JPEGs I receive each second and not the video rate. I will not be able to use your framework if I cannot get a guaranteed fps of JPEGs received - again, not video fps.

dommy1C commented 5 years ago

Hi, OK, I will stop pestering you. You do not seem to appreciate what I am stating. I suggest that in your InMemoryHandler you do your fps count for every completed JPEG.

I will have to look for a different framework. I really appreciate your time, and good luck with this project of yours. Just remember: if I experience the problem, then in time other people will too.

Thanks

techyian commented 5 years ago

Hi,

Thank you for giving the library a try. I have been looking over the picamera documentation as it is very detailed, and it does mention rapid capture. Please see here.

There are a few things I will point out from what I've read:

The major issue with capturing this rapidly is firstly that the Raspberry Pi’s IO bandwidth is extremely limited and secondly that, as a format, JPEG is considerably less efficient than the H.264 video format (which is to say that, for the same number of bytes, H.264 will provide considerably better quality over the same number of frames). At higher resolutions (beyond 800x600) you are likely to find you cannot sustain 30fps captures to the Pi’s SD card for very long (before exhausting the disk cache).

If you are intending to perform processing on the frames after capture, you may be better off just capturing video and decoding frames from the resulting file rather than dealing with individual JPEG captures. Thankfully this is relatively easy as the JPEG format has a simple magic number (FF D8). This means we can use a custom output to separate the frames out of an MJPEG video recording by inspecting the first two bytes of each buffer:
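The docs follow that paragraph with a Python sample, which I won't reproduce here, but the same idea translated to C# - dropped into the shape of your ByteArrayCaptureHandler from your first comment - would look roughly like this (an untested sketch):

    // Start a new frame whenever a buffer begins with the JPEG magic number FF D8,
    // rather than waiting for the FF D9 end-of-image marker.
    public void Process(byte[] data)
    {
        if (data.Length >= 2 && data[0] == 0xFF && data[1] == 0xD8 && this.WorkingData.Count > 0)
        {
            // The previous frame is complete; hand it off and start afresh.
            NewImage?.Invoke(this.WorkingData.ToArray());
            this.WorkingData.Clear();
        }

        this.WorkingData.AddRange(data);
    }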

I appreciate your comments, but please be aware that this is an open source project which I maintain in the little free time I have available. Everyone is able to see the source code, help out and make suggestions/pull requests.

dommy1C commented 5 years ago

Hi,

Thank you for giving the library a try. [It has been a pleasure]

The major issue with capturing this rapidly is firstly that the Raspberry Pi’s IO bandwidth is extremely limited [Yes, I know that, which is why I was also looking at the Tinker Board. But, saying that, the bandwidth does not come into play as I am not using the network. I am just processing on the Pi itself]

JPEG is considerably less efficient than the H.264 video format [Yes, I have worked with different cameras over the years and I am well aware of the limitations/quality of JPEGs versus codecs such as H.264. However, JPEG suits me for my purposes and H.264 does not]

you are likely to find you cannot sustain 30fps captures to the Pi’s SD card for very long (before exhausting the disk cache). [Yes, this was my whole point of limiting to 10fps and not 30fps etc., so I am confused why you think I want 30fps in the first place?]

If you are intending to perform processing on the frames after capture, you may be better off just capturing video and decoding frames from the resulting file rather than dealing with individual JPEG captures. [No, this is not the way I want to go. I want to process the MJPEG from the video stream. I did it with the Windows desktop app I had connected to a digital camera with no problems; in fact, it proved more efficient than working with encoded files]

Thankfully this is relatively easy as the JPEG format has a simple magic number (FF D8). This means we can use a custom output to separate the frames out of an MJPEG video recording by inspecting the first two bytes of each buffer: [Which is pretty much what I was doing with the ByteArrayCaptureHandler I used within your framework]

I appreciate your comments, but please be aware that this is an open source project which I maintain in the little free time I have available. Everyone is able to see the source code, help out and make suggestions/pull requests. [I do know, and I really do appreciate all the time you have given me, which is why I offered to pay you some cash. I think the annoying thing for me (and I was not that annoyed really, as you are a nice person just trying to help) was that you were failing to understand that I was measuring the fps in terms of how many JPEGs I receive via the ByteArrayCaptureHandler. It was erratic. I was never referring to the fps of the video stream. So the issue was to do with the port that was giving me the byte arrays]

[If I get a solution I will add/suggest back to this page]

[thanks]