We are looking for a new maintainer, apply at https://adoptoposs.org/p/9f5b74b9-04f2-42b6-891f-c5294c9ef1c5
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I have been tasked with writing a program that will run on a Raspberry Pi, display the video feed from the Pi camera (ideally on a WinForms panel, since I am using Mono on the Pi), and overlay shapes on top (namely an artificial horizon calculated from the readings of a 3D Motion Click sensor attached to the GPIO pins).
Since most of the programming I do these days is in .NET, I thought I would try this RaspberryIO library, which at first glance looks like it contains everything I need. However, I am having difficulties with the first task: displaying the video feed.
My question concerns the Pi.Camera.OpenVideoStream function, specifically the byte array passed to the callback function. I was hoping it would contain the bytes read from the camera, so that, as with other camera applications I have built on Windows (and with the byte array returned by Pi.Camera.CaptureStillImage), I could create a Bitmap/Image object for each frame and display it in the panel.
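For reference, this is roughly the approach I had in mind, based on the still-capture path (I am using the CaptureImageJpeg helper here purely as an illustration; the exact capture method, and whether a Pi.Init call is needed first, may differ between library versions):

```csharp
// Illustrative sketch only: capture one still as a JPEG byte array and show it
// in a WinForms PictureBox. Method names are from my reading of RaspberryIO and
// may need adjusting for the version you have installed.
using System.Drawing;
using System.IO;
using System.Windows.Forms;
using Unosquare.RaspberryIO;

public static class StillFrameExample
{
    public static void ShowStill(PictureBox target)
    {
        // The library returns the complete JPEG, so GDI+ can decode it directly.
        byte[] jpegBytes = Pi.Camera.CaptureImageJpeg(1920, 1080);

        using (var ms = new MemoryStream(jpegBytes))
        using (var decoded = Image.FromStream(ms))
        {
            target.Image?.Dispose();            // drop the previous frame, if any
            target.Image = new Bitmap(decoded); // copy so the stream can be disposed
        }
    }
}
```

This is the pattern I was hoping to repeat for every video frame.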
Unfortunately, the byte array is only 2048 bytes long, rather than the roughly 6.2 million bytes (1920 × 1080 × 3) I would expect for an RGB24 1920x1080 frame. Even if the image is compressed, 2048 bytes seems impossibly small!
I guess this has something to do with the H.264 video format, and the data object isn't what I assumed it would be at all, i.e. it is not 'a frame'? I have tried various online examples to decode it, but have failed.
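A sketch of how that theory could be checked is below: log what each callback actually delivers, i.e. the chunk size and whether it contains H.264 Annex-B start codes (00 00 00 01). The signatures are from my reading of the library source, so treat them as approximate:

```csharp
// Diagnostic sketch: if start codes turn up mid-buffer (or not at all in some
// chunks), the callback is handing back arbitrary slices of the raw H.264
// elementary stream rather than one decodable frame per call.
using System;
using Unosquare.RaspberryIO;

public static class InspectStreamExample
{
    public static void Inspect()
    {
        Pi.Camera.OpenVideoStream(
            chunk => Console.WriteLine($"chunk: {chunk.Length} bytes, {CountStartCodes(chunk)} NAL start code(s)"),
            () => Console.WriteLine("stream closed"));
    }

    private static int CountStartCodes(byte[] data)
    {
        var count = 0;
        for (var i = 0; i + 3 < data.Length; i++)
        {
            if (data[i] == 0 && data[i + 1] == 0 && data[i + 2] == 0 && data[i + 3] == 1)
                count++;
        }

        return count;
    }
}
```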
I have also seen examples of this library method being used where, on each callback, the bytes in the array are appended to a list; when the user stops the stream, the accumulated data is then written out as an MP4 file (sketched below). This is great, but not really what I want to do (and it is easily achievable from the Raspberry Pi terminal anyway).
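For completeness, the pattern I mean looks roughly like this. As far as I can tell the callback delivers raw H.264, so the saved stream would still need to be wrapped in a container (e.g. with MP4Box) to become a proper MP4; method names are again from my reading of the library:

```csharp
// Sketch of the record-then-save pattern from those examples: append every
// chunk to a list and dump the whole stream to disk when the camera stops.
// File name, settings, and exact API shape are illustrative.
using System.Collections.Generic;
using System.IO;
using Unosquare.RaspberryIO;

public static class RecordToFileExample
{
    private static readonly List<byte> VideoBytes = new List<byte>();

    public static void Start()
    {
        Pi.Camera.OpenVideoStream(
            chunk => VideoBytes.AddRange(chunk),                           // each call is a stream chunk, not a frame
            () => File.WriteAllBytes("video.h264", VideoBytes.ToArray())); // raw H.264; needs a container step for MP4
    }

    public static void Stop() => Pi.Camera.CloseVideoStream();
}
```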
Is it possible to do what I want to do? Am I heading in the right direction? Is it possible to access the image bytes in memory?
I see so many examples online (in an array of different languages) of how to do much more complicated things, like streaming the camera feed over a network. There is very little on displaying the feed in an application window running on the same Pi.
Thank you so much for your help
Kind regards, Nick