techyian / MMALSharp

C# wrapper to Broadcom's MMAL with an API to the Raspberry Pi camera.
MIT License

How Do I Pipe Video Into Emgu (OpenCV)? #117

Closed · d8ahazard closed this issue 4 years ago

d8ahazard commented 4 years ago

Hey there!

So, I created a project in Python using OpenCV and PiCamera to capture video, process it, and do things with it in real time.

I'm now porting my app to C# and looking at this library as a replacement.

Could you point me to a sample where I can look at looping over the video and storing the frame in memory as an array for access by calls from another class?

Basically, I want to wrap the video receiver in a class, call a "start" function to initiate capture, and then reference a Frame attribute of that class whenever I want to grab the next video frame.
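
In rough terms, here's the shape I'm imagining; just a sketch of the class surface I want, not real MMALSharp code:

using System.Threading;
using System.Threading.Tasks;
using Emgu.CV;
using Emgu.CV.Structure;

public interface IVideoStream
{
    // Kick off capture in the background until the token is cancelled.
    Task Start(CancellationToken ct);

    // Return the most recently captured frame.
    Image<Bgr, byte> GetFrame();
}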

techyian commented 4 years ago

Hey,

It would be helpful to see how you're currently using picamera so I can give you a good alternative example. Are you using the camera's video port to capture image stills at a rapid rate (using an image encoder component)? Or are you using the video port to capture raw, unencoded video and passing that to OpenCV?

d8ahazard commented 4 years ago

I'm about halfway there already...

I'm effectively trying to implement these two guides:

https://www.pyimagesearch.com/2015/12/28/increasing-raspberry-pi-fps-with-python-and-opencv/

https://www.pyimagesearch.com/2016/01/04/unifying-picamera-and-cv2-videocapture-into-a-single-class-with-opencv/

End result is I call a class for capturing the video. User preference determines whether it uses the picam or a USB webcam.

The class opens a pointer to the camera, starts a loop over the raw video frames, and saves each one as either a Mat or Image object? Seems Mat is the appropriate object for this case... I could be wrong tho. :P


d8ahazard commented 4 years ago

And here's what the code actually does. ;)

https://www.youtube.com/watch?v=TtUne3SFc_U&t=2s


techyian commented 4 years ago

I think the example you're looking for is here. However, for your project I don't think you'll need all 4 splitter ports, and you'll also need to subclass InMemoryCaptureHandler in order to hook onto the Process method.

Something like the below might get you started; obviously, you'll need to add the relevant EmguCV bits:

public class EmguInMemoryCaptureHandler : InMemoryCaptureHandler, IVideoCaptureHandler
{
    public override void Process(byte[] data, bool eos)
    {
        // The InMemoryCaptureHandler parent class has a property called "WorkingData". 
        // It is your responsibility to look after the clearing of this property.

        // The "eos" parameter indicates whether the MMAL buffer has an EOS parameter, if so, the data that's currently
        // stored in the "WorkingData" property plus the data found in the "data" parameter indicates you have a full image frame.

        // I suspect in here, you will want to have a separate thread which is responsible for sending data to EmguCV for processing?
        Console.WriteLine("I'm in here");

        base.Process(data, eos);

        if (eos)
        {
            this.WorkingData.Clear();
            Console.WriteLine("I have a full frame. Clearing working data.");
        }
    }

    public void Split()
    {
        throw new NotImplementedException();
    }
}

public async Task TakeRawVideo()
{
    // By default, video resolution is set to 1920x1080, which will probably be too large for your project. Set as appropriate using MMALCameraConfig.VideoResolution.
    // The default framerate is 30fps. You can see what "modes" the different cameras support here:
    // https://github.com/techyian/MMALSharp/wiki/OmniVision-OV5647-Camera-Module
    // https://github.com/techyian/MMALSharp/wiki/Sony-IMX219-Camera-Module            
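    // For example: MMALCameraConfig.VideoResolution = new Resolution(800, 600);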
    using (var vidCaptureHandler = new EmguInMemoryCaptureHandler())
    using (var splitter = new MMALSplitterComponent())
    using (var renderer = new MMALNullSinkComponent())
    {                
        cam.ConfigureCameraSettings();

        // We are instructing the splitter to do a format conversion to BGR24.
        var splitterPortConfig = new MMALPortConfig(MMALEncoding.BGR24, MMALEncoding.BGR24, 0, 0, null);

        // By default in MMALSharp, the Video port outputs using proprietary communication (Opaque) with a YUV420 pixel format.
        // Changes to this are done via MMALCameraConfig.VideoEncoding and MMALCameraConfig.VideoSubformat.                
        splitter.ConfigureInputPort(new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420), cam.Camera.VideoPort, null);

        // Here we use the splitter config object we constructed earlier, and tell this output port to use our capture handler to record data.
        splitter.ConfigureOutputPort<SplitterVideoPort>(0, splitterPortConfig, vidCaptureHandler);

        cam.Camera.PreviewPort.ConnectTo(renderer);
        cam.Camera.VideoPort.ConnectTo(splitter);

        // Camera warm up time
        await Task.Delay(2000).ConfigureAwait(false);

        // Record for 10 seconds. Increase as required.
        var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10));

        await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
    }
}

I've added some comments to hopefully clear up what's happening in this example. I hope that helps a bit; let me know how you get on.
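
For reference, the cam variable in TakeRawVideo is the MMALSharp camera singleton, so the calling code would look roughly like this (untested):

// "cam" would typically be a field on the class hosting TakeRawVideo.
MMALCamera cam = MMALCamera.Instance;

await TakeRawVideo();

// Only call this when you're completely finished with the camera.
cam.Cleanup();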

d8ahazard commented 4 years ago

So, I think I came up with something on my own that works, but haven't actually tried it yet. Care to take a look?

using System;
using System.Threading;
using System.Threading.Tasks;
using Emgu.CV;
using Emgu.CV.Structure;
using MMALSharp;
using MMALSharp.Common.Utility;
using MMALSharp.Handlers;

namespace HueDream.Models.DreamVision {
    public class PiVideoStream : IVideoStream, System.IDisposable {
        private MMALCamera cam;
        private Image<Bgr, byte> frame;

        public PiVideoStream() {
            cam = MMALCamera.Instance;
            MMALCameraConfig.VideoResolution = new Resolution(800, 600);
            cam.ConfigureCameraSettings();
        }

        public async Task Start(CancellationToken ct) {
            using (var vidCaptureHandler = new InMemoryCaptureHandler()) {
                frame = new Image<Bgr, byte>(800, 600);
                while (!ct.IsCancellationRequested) {
                    await cam.TakeVideo(vidCaptureHandler, CancellationToken.None);
                    var bytes = vidCaptureHandler.WorkingData;
                    frame.Bytes = bytes.ToArray();
                }
            }
            cam.Cleanup();
        }

        public Image<Bgr, byte> GetFrame() {
            return frame;
        }

        #region IDisposable Support
        private bool disposedValue = false;

        protected virtual void Dispose(bool disposing) {
            if (!disposedValue) {
                if (disposing) {
                    cam.Cleanup();
                }
                disposedValue = true;
            }
        }

        public void Dispose() {
            Dispose(true);
            GC.SuppressFinalize(this);
        }
        #endregion
    }
}

The idea being that I initialize the camera when I instantiate the class, then fire a loop that updates the value of "frame" with the current video frame, and call "GetFrame" whenever I need the current frame data.
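
Roughly how I picture consuming it from another class (untested):

var stream = new PiVideoStream();
var cts = new CancellationTokenSource();

// Fire off the capture loop in the background...
_ = stream.Start(cts.Token);

// ...then, whenever another class needs the current frame:
var current = stream.GetFrame();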

Do I need to incorporate the eos check in this somehow?

d8ahazard commented 4 years ago

Grr, sorry for the sloppy code...

techyian commented 4 years ago

There are a couple of things I can see here.

1. In your Start method, you're relying on TakeVideo() (source: https://github.com/techyian/MMALSharp/blob/dev/src/MMALSharp/MMALCamera.cs#L90). When taking video, this method, and the ProcessAsync method it subsequently calls, will not return until your cancellation token has expired. Given that you're using the InMemoryCaptureHandler, you're soon going to starve your program of the RAM it's allocated.
2. The TakeVideo() helper method captures H.264 video using a YUV420 pixel format. As you want raw image frames encoded as BGR24/RGB24, this method won't be suitable for use with EmguCV, so you'll need to set up a manual pipeline to capture raw image frames.

You need to hook onto the callbacks made to the capture handler's Process method in order to receive the image data as it's being processed. As I mentioned in my previous comment, you'll want to subclass the InMemoryCaptureHandler class and do your EmguCV processing in there. You could also make use of the callback handler functionality, but I think that's probably overkill here.

Could something like the below work? I haven't tested this code by the way, but I hope it will get you on the right track:

public class EmguEventArgs : EventArgs
{
    public byte[] ImageData { get; set; }
}

public class EmguInMemoryCaptureHandler : InMemoryCaptureHandler, IVideoCaptureHandler
{
    public event EventHandler<EmguEventArgs> MyEmguEvent;

    public override void Process(byte[] data, bool eos)
    {
        // The InMemoryCaptureHandler parent class has a property called "WorkingData". 
        // It is your responsibility to look after the clearing of this property.

        // The "eos" parameter indicates whether the MMAL buffer has an EOS parameter, if so, the data that's currently
        // stored in the "WorkingData" property plus the data found in the "data" parameter indicates you have a full image frame.

        // I suspect in here, you will want to have a separate thread which is responsible for sending data to EmguCV for processing?
        Console.WriteLine("I'm in here");

        base.Process(data, eos);

        if (eos)
        {
            this.MyEmguEvent?.Invoke(this, new EmguEventArgs { ImageData = this.WorkingData.ToArray() });

            this.WorkingData.Clear();
            Console.WriteLine("I have a full frame. Clearing working data.");
        }
    }

    public void Split()
    {
        throw new NotImplementedException();
    }
}

public async Task TakeRawVideo()
{
    MMALCameraConfig.VideoResolution = new Resolution(800, 600);

    // By default, video resolution is set to 1920x1080, which will probably be too large for your project. Set as appropriate using MMALCameraConfig.VideoResolution.
    // The default framerate is 30fps. You can see what "modes" the different cameras support here:
    // https://github.com/techyian/MMALSharp/wiki/OmniVision-OV5647-Camera-Module
    // https://github.com/techyian/MMALSharp/wiki/Sony-IMX219-Camera-Module            
    using (var vidCaptureHandler = new EmguInMemoryCaptureHandler())
    using (var splitter = new MMALSplitterComponent())
    using (var renderer = new MMALNullSinkComponent())
    {
        cam.ConfigureCameraSettings();

        // Register to the event.
        vidCaptureHandler.MyEmguEvent += this.OnEmguEventCallback;

        // We are instructing the splitter to do a format conversion to BGR24.
        var splitterPortConfig = new MMALPortConfig(MMALEncoding.BGR24, MMALEncoding.BGR24, 0, 0, null);

        // By default in MMALSharp, the Video port outputs using proprietary communication (Opaque) with a YUV420 pixel format.
        // Changes to this are done via MMALCameraConfig.VideoEncoding and MMALCameraConfig.VideoSubformat.                
        splitter.ConfigureInputPort(new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420), cam.Camera.VideoPort, null);

        // Here we use the splitter config object we constructed earlier, and tell this output port to use our capture handler to record data.
        splitter.ConfigureOutputPort<SplitterVideoPort>(0, splitterPortConfig, vidCaptureHandler);

        cam.Camera.PreviewPort.ConnectTo(renderer);
        cam.Camera.VideoPort.ConnectTo(splitter);

        // Camera warm up time
        await Task.Delay(2000).ConfigureAwait(false);

        // Record for 10 seconds. Increase as required.
        var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10));

        await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
    }
}

protected virtual void OnEmguEventCallback(object sender, EmguEventArgs args)
{
    Console.WriteLine("I'm in OnEmguEventCallback.");

    var frame = new Image<Bgr, byte>(800, 600);
    frame.Bytes = args.ImageData;

    // Do something with the image data...
}
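
If you wanted to keep the GetFrame() pattern from your PiVideoStream class, you could extend that callback to hold onto the latest frame, guarded by a lock, since Process is called from a camera thread rather than your consumer's thread. Again, untested:

private readonly object frameLock = new object();
private Image<Bgr, byte> latestFrame;

protected virtual void OnEmguEventCallback(object sender, EmguEventArgs args)
{
    var frame = new Image<Bgr, byte>(800, 600);
    frame.Bytes = args.ImageData;

    // Swap in the newest frame under the lock.
    lock (frameLock)
    {
        latestFrame = frame;
    }
}

public Image<Bgr, byte> GetFrame()
{
    lock (frameLock)
    {
        return latestFrame;
    }
}
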
d8ahazard commented 4 years ago

I'm getting an error importing IVideoCaptureHandler. I see it was edited/added recently; is it in the latest package on NuGet?


techyian commented 4 years ago

No, I'd recommend you use the dev branch as it's had a number of improvements added since v0.5.1. The general release of v0.6 is coming very soon. You can either clone the source or grab it from MyGet.

techyian commented 4 years ago

Hi, just checking whether this has resolved your issue. Am I OK to close the ticket?

d8ahazard commented 4 years ago

Hi, sorry to keep it open so long. I'm just working through some issues with the camera on Windows first; then I'll deploy to the Pi and test there. Hopefully I can let you know in a day or two.
