techyian / MMALSharp

C# wrapper to Broadcom's MMAL with an API to the Raspberry Pi camera.
MIT License

Unable to get Bitmap from MemoryStreamCaptureHandler #181

Closed: Kas-code closed this issue 4 years ago

Kas-code commented 4 years ago

I want to be able to get a System.Drawing.Bitmap of the current camera image in memory without storing to disk.

Using a MemoryStreamCaptureHandler seems to be the correct approach, but I'm unable to create a Bitmap from the stream or bytes returned.

For example, doing the following doesn't work:

var captureHandler = new MemoryStreamCaptureHandler();
await cam.TakeRawPicture(captureHandler);
var bitmap = new Bitmap(captureHandler.CurrentStream);

It fails to create a Bitmap from the stream. Am I missing something? Could we put some extra details in the "Store to memory" section of the Basic Examples page to help those who want a Bitmap in memory?

techyian commented 4 years ago

This isn't going to work for a few reasons:

1) By default, MMALSharp outputs image frames from the camera in the YUV420 pixel format, which the Bitmap class does not support. To override this, set the MMALCameraConfig.Encoding and MMALCameraConfig.Subformat properties to either MMALEncoding.RGB24 or MMALEncoding.RGB32/MMALEncoding.RGBA.

2) You are passing a raw image frame without telling the Bitmap class what pixel format to expect. You also can't pass a raw frame as a stream directly to the Bitmap constructor; you must use the Bitmap(int width, int height, PixelFormat format) or Bitmap(int width, int height, int stride, PixelFormat format, IntPtr scan0) constructor instead.

3) Again, because you're passing a raw image frame, it's your responsibility to get your raw frame data into the memory location that the scan0 pointer refers to; this can be done by calling Marshal.Copy.
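The override described in point 1 can be sketched as follows (a minimal sketch based on the property names in this comment; the exact names vary between MMALSharp versions, so confirm them against the release you're using):

```csharp
// Ask MMALSharp for RGB24 frames instead of the default YUV420,
// so the raw data arrives in a Bitmap-friendly pixel format.
MMALCameraConfig.Encoding = MMALEncoding.RGB24;
MMALCameraConfig.Subformat = MMALEncoding.RGB24;
```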

For the code the library uses internally to do this, please see the ConvolutionBase class here. The parts of interest to you are the LoadBitmap method, the locking/unlocking of the Bitmap data, and InitBitmapData.

Hope that helps.

Kas-code commented 4 years ago

Thanks for your help. I noticed that the ConvolutionBase class you linked to uses ImageContext to get the width, height and pixel format to pass to the Bitmap constructor, but I'm unsure where to get an ImageContext from in my example, or how else to obtain the parameters the Bitmap constructor needs. The rest of what you explained is straightforward, and I've been able to get this far:

MMALCameraConfig.StillEncoding = MMALEncoding.RGB32;
MMALCameraConfig.StillSubFormat = MMALEncoding.RGB32;
MMALCamera cam = MMALCamera.Instance;
var captureHandler = new InMemoryCaptureHandler();
cam.ConfigureCameraSettings(captureHandler, null);
await cam.TakeRawPicture(captureHandler);
var outputBytes = captureHandler.WorkingData.ToArray();

techyian commented 4 years ago

Don't worry about the ImageContext object; it's used internally for processing image frames. The MMALCameraConfig class exposes static properties you can pass through: MMALCameraConfig.Resolution.Width and MMALCameraConfig.Resolution.Height. The PixelFormat enum then represents the pixel format you chose, e.g. PixelFormat.Format24bppRgb.

Here is a rough example, not tested:


public async Task LoadBitmap()
{
    MMALCamera cam = MMALCamera.Instance;           
    MMALCameraConfig.Encoding = MMALEncoding.RGB24;
    MMALCameraConfig.EncodingSubFormat = MMALEncoding.RGB24;

    using (var imgCaptureHandler = new MemoryStreamCaptureHandler())
    using (var renderer = new MMALNullSinkComponent())
    {
        cam.ConfigureCameraSettings(imgCaptureHandler);

        cam.Camera.PreviewPort.ConnectTo(renderer);

        // Camera warm up time
        await Task.Delay(2000);
        await cam.ProcessAsync(cam.Camera.StillPort);

        var data = imgCaptureHandler.CurrentStream.ToArray();
        var width = MMALCameraConfig.Resolution.Width;
        var height = MMALCameraConfig.Resolution.Height;

        using (var bmp = new Bitmap(width, height, PixelFormat.Format24bppRgb))
        {
            var bmpData = bmp.LockBits(
                new Rectangle(0, 0, bmp.Width, bmp.Height),
                ImageLockMode.ReadWrite,
                bmp.PixelFormat);

            var pNative = bmpData.Scan0;
            Marshal.Copy(data, 0, pNative, data.Length);

            // Do something with the data.
            // ...

            // Save the modified data. This is saving it to a new array.
            var saveArray = new byte[data.Length];
            Marshal.Copy(pNative, saveArray, 0, data.Length);

            // Unlock bits before disposing.
            bmp.UnlockBits(bmpData);
        }

    }

    cam.Cleanup();
}
Kas-code commented 4 years ago

Trying that example I get:

MMALNoSpaceException: Out of resources. Unable to enable component.
   at MMALSharp.MMALCamera.ConfigureCameraSettings

on the line: cam.ConfigureCameraSettings(imgCaptureHandler);

I have plenty of free disk space, and I'm running on a Raspberry Pi 3 with 1 GB of memory and nothing else running, so I'm not sure why it's giving that error. By "resources", does it mean disk space or memory?

techyian commented 4 years ago

It works OK for me. Sometimes, if a previous run of the camera went badly and tear-down didn't complete properly, you may need to restart your Pi.

Kas-code commented 4 years ago

You were right: after restarting my Pi I no longer get the "out of resources" error, but I now have another one. I've added a bmp.Save() call and I always get "A generic error occurred in GDI+" upon calling Save. I've tried moving the Save call both before and after the bmp.UnlockBits call, but I still get the same error.

Kas-code commented 4 years ago

The "generic error occurred in GDI+" was caused by folder permissions; running with sudo fixed it. Your code above was giving images with a blue tinge: I had to specify MMALEncoding.BGR24 for the camera, and then keep PixelFormat.Format24bppRgb when the bitmap is created. The full code that is now working well:

public static async Task<Bitmap> LoadBitmap()
{
    MMALCamera cam = MMALCamera.Instance;
    MMALCameraConfig.StillEncoding = MMALEncoding.BGR24;
    MMALCameraConfig.StillSubFormat = MMALEncoding.BGR24;

    using (var imgCaptureHandler = new MemoryStreamCaptureHandler())
    using (var renderer = new MMALNullSinkComponent())
    {
        cam.ConfigureCameraSettings(imgCaptureHandler);
        cam.Camera.PreviewPort.ConnectTo(renderer);

        // Camera warm up time
        await Task.Delay(2000);
        await cam.ProcessAsync(cam.Camera.StillPort);
        var data = imgCaptureHandler.CurrentStream.ToArray();
        var width = MMALCameraConfig.StillResolution.Width;
        var height = MMALCameraConfig.StillResolution.Height;
        var bmp = new Bitmap(width, height, PixelFormat.Format24bppRgb);
        var bmpData = bmp.LockBits(
            new Rectangle(0, 0, bmp.Width, bmp.Height),
            ImageLockMode.WriteOnly,
            bmp.PixelFormat);
        var pNative = bmpData.Scan0;
        Marshal.Copy(data, 0, pNative, data.Length);
        bmp.UnlockBits(bmpData);
        return bmp;
    }
}

Note that the bitmap needs to be disposed later on to avoid a memory leak.
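For instance, a hypothetical caller might wrap the returned Bitmap in a using block so it is disposed once it's no longer needed (the file path here is purely illustrative):

```csharp
// Capture a frame and dispose the Bitmap after saving it.
using (Bitmap bmp = await LoadBitmap())
{
    bmp.Save("capture.png", ImageFormat.Png);
}
```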