zephyrproject-rtos / zephyr

Primary Git Repository for the Zephyr Project. Zephyr is a new generation, scalable, optimized, secure RTOS for multiple hardware architectures.
https://docs.zephyrproject.org
Apache License 2.0

drivers: video: API enhancement #72959

Open josuah opened 3 months ago

josuah commented 3 months ago

Introduction

Pursue the work on the video API beyond the initial "Video 4 Zephyr" support added to Zephyr in #17194.

Problem description

I would like to implement UVC and enable more complex video pipelines to be written without having to rewrite a new application from scratch every time the devicetree is updated.

Proposed change

Incrementally rework the devicetree and video driver API

Detailed RFC

The existing API leaves some corners open for future specification.

The goal is:

Proposed change (Detailed)


remote-endpoint is not used by Zephyr drivers. RTIO can be leveraged to replace the many FIFOs that are typically defined in each driver, and provide a way to connect drivers directly while still allowing the application to be interleaved in the pipeline. This would act as a splice(2) for Zephyr video drivers, triggered by the devicetree `remote-endpoint` property (a sketch follows below).

[UPDATE: it might be possible to use RTIO on top of the current API instead of replacing it]
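For reference, a minimal sketch (not part of the RFC itself) of what driving a capture endpoint through today's RTIO primitives could look like, assuming a hypothetical `video_rx` iodev exposed by a capture driver; the RTIO calls used are the ones in include/zephyr/rtio/rtio.h, and error handling is elided:

```c
#include <zephyr/rtio/rtio.h>

RTIO_DEFINE(vid_rtio, 8, 8);            /* 8 submission / 8 completion entries */

extern struct rtio_iodev video_rx;      /* hypothetical capture endpoint */
static uint8_t frame[320 * 240 * 2];    /* one RGB565 QVGA frame */

int capture_one_frame(void)
{
	struct rtio_sqe *sqe = rtio_sqe_acquire(&vid_rtio);

	/* Ask the capture endpoint to fill `frame` */
	rtio_sqe_prep_read(sqe, &video_rx, RTIO_PRIO_NORM,
			   frame, sizeof(frame), NULL);
	rtio_submit(&vid_rtio, 1);          /* block until one completion */

	/* One completion is guaranteed after rtio_submit(r, 1) */
	struct rtio_cqe *cqe = rtio_cqe_consume(&vid_rtio);
	int ret = cqe->result;              /* bytes read or -errno */

	rtio_cqe_release(&vid_rtio, cqe);
	return ret;
}
```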


video_endpoint_id has unclear semantics in some cases (a documentation fix only) and can be adjusted to allow positive numbers to refer to an individual endpoint number or address.

[UPDATE: a documentation fix, mostly]


Following this, devicetree macros can be introduced to provide ways to refer to a particular endpoint without manually hard-coding the endpoint number in the application.
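As a sketch of what such macros could look like: none of the VIDEO_DT_* names below exist, only the underlying DT_* helpers from devicetree.h are real, and the csi node label is a placeholder.

```c
#include <zephyr/device.h>
#include <zephyr/devicetree.h>

/* Assumes the usual port { endpoint@0 { remote-endpoint = <&...>; }; } layout */
#define VIDEO_DT_ENDPOINT(node_id, ep) DT_CHILD(DT_CHILD(node_id, port), ep)
#define VIDEO_DT_REMOTE_DEVICE(node_id, ep)                                    \
	DT_PARENT(DT_PARENT(DT_PHANDLE(VIDEO_DT_ENDPOINT(node_id, ep),         \
				       remote_endpoint)))

/* "Whatever is wired to endpoint@0 of the CSI node", no hard-coded number */
static const struct device *source_dev =
	DEVICE_DT_GET(VIDEO_DT_REMOTE_DEVICE(DT_NODELABEL(csi), endpoint_0));
```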


video_get_caps() fills a struct with all the video caps at once. This works well, but it is less flexible than an enumerator-style API such as the one #72254 proposes, which allows drivers to be a bit more generic: a device doing software video processing can filter the format capabilities of its sources this way.
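For comparison, a sketch of the two styles side by side; the enumerator-style call is hypothetical (loosely in the spirit of #72254), while the first half follows the current <zephyr/drivers/video.h>, and video_dev stands for any capture device:

```c
#include <zephyr/drivers/video.h>

extern const struct device *video_dev;  /* any video capture device */

/* Today: every capability comes back at once in a caller-provided struct */
struct video_caps caps;

video_get_caps(video_dev, VIDEO_EP_OUT, &caps);
for (int i = 0; caps.format_caps[i].pixelformat != 0; i++) {
	/* inspect caps.format_caps[i] */
}

/* Enumerator style (hypothetical signature): one entry at a time, so a
 * software-processing device can filter its source's formats on the fly
 * instead of building a full table up front.
 */
struct video_format_cap cap;

for (int i = 0; video_enum_format_cap(video_dev, VIDEO_EP_OUT, i, &cap) == 0; i++) {
	/* inspect cap */
}
```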


video_enqueue(): "an addition to the video API to enable streaming partial frames within one video buffer" is missing. A new API is introduced as part of Arducam Mega sensor support.


Introduce a directory for the video drivers, as image sensors, MIPI/DMA controllers [...] are currently mixed into one and the same driver API. A sensor directory was the chosen approach and will be merged once the ongoing PRs are completed.


Add a drivers/video_*_skeleton.c in the same style as drivers/usb/udc/udc_skeleton.c that can speed up development and help with understanding the semantics.
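A trimmed sketch of what such a skeleton could contain; only a couple of the ops from struct video_driver_api are shown, and all names here are illustrative, not an existing file:

```c
#include <zephyr/device.h>
#include <zephyr/drivers/video.h>

static int skel_set_format(const struct device *dev, enum video_endpoint_id ep,
			   struct video_format *fmt)
{
	return 0; /* accept any format */
}

static int skel_stream_start(const struct device *dev)
{
	return 0; /* nothing to start */
}

static const struct video_driver_api skel_api = {
	.set_format = skel_set_format,
	.stream_start = skel_stream_start,
	/* get_format, get_caps, enqueue, dequeue, ... left out of the sketch */
};

static int skel_init(const struct device *dev)
{
	return 0;
}

DEVICE_DEFINE(video_skel, "VIDEO_SKELETON", skel_init, NULL, NULL, NULL,
	      POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEVICE, &skel_api);
```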


The devicetree can be flattened to enable more uniform processing, such as using DT macros to reduce boilerplate code and homogenize drivers.

[UPDATE: no API change needed]


VIDEO_BUF_FRAG: Add fragmentation support to allow partial/fragmented frames to be returned from video devices.


video-interfaces.yaml: Introduce common devicetree bindings for all video devices:


Dependencies

Annotated inline above.

Concerns and Unresolved Questions

Alternatives

Leaving it up to the application developer to resolve these issues is the current approach, and the demos and samples submitted in #71463 show that it effectively works end-to-end with the current API.
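For context, the end-to-end flow that already works today looks roughly like the capture samples; the device, format, and buffer count below are placeholders, and error checks are elided:

```c
#include <zephyr/kernel.h>
#include <zephyr/drivers/video.h>

void capture_loop(const struct device *video_dev)
{
	struct video_format fmt = {
		.pixelformat = VIDEO_PIX_FMT_RGB565,
		.width = 320,
		.height = 240,
		.pitch = 320 * 2,
	};
	struct video_buffer *vbuf;

	video_set_format(video_dev, VIDEO_EP_OUT, &fmt);

	/* Hand a few empty buffers to the driver */
	for (int i = 0; i < 2; i++) {
		vbuf = video_buffer_alloc(fmt.pitch * fmt.height);
		video_enqueue(video_dev, VIDEO_EP_OUT, vbuf);
	}

	video_stream_start(video_dev);

	while (1) {
		video_dequeue(video_dev, VIDEO_EP_OUT, &vbuf, K_FOREVER);
		/* consume vbuf->buffer, vbuf->bytesused ... */
		video_enqueue(video_dev, VIDEO_EP_OUT, vbuf);
	}
}
```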

For the case of RTIO, it is also possible to introduce a "genderless" interconnect between sources and sinks (so that either the application or another driver can use it) by replacing enqueue()/dequeue() with a public FIFO integrated directly into the API.
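A sketch of that alternative, using a wrapper item so the first word stays reserved for the kernel FIFO; the video_pipe_* names and the wrapper struct are hypothetical:

```c
#include <zephyr/kernel.h>
#include <zephyr/drivers/video.h>

struct video_fifo_item {
	void *fifo_reserved;        /* first word reserved for k_fifo */
	struct video_buffer *vbuf;
};

K_FIFO_DEFINE(video_pipe);          /* shared by source and sink */

/* Source side (driver or application): push a filled buffer downstream */
void video_pipe_put(struct video_fifo_item *item)
{
	k_fifo_put(&video_pipe, item);
}

/* Sink side (driver or application): pull the next buffer, blocking */
struct video_buffer *video_pipe_get(void)
{
	struct video_fifo_item *item = k_fifo_get(&video_pipe, K_FOREVER);

	return item->vbuf;
}
```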

josuah commented 3 months ago

Extra material to investigate the rationale of the existing APIs, so that their semantics can be kept: https://static.linaro.org/connect/san19/presentations/san19-503.pdf

josuah commented 3 months ago

Thanks to @loicpoulain I rediscovered GStreamer, and it is great!

The above is also the foundation for a zephyr-shell pipeline, which is not possible if the application has to be rewritten every time. The difficulty, though, is that resolving <&phandles> from remote-endpoint at compile time will not allow them to be reconnected later from the zephyr shell.

Maybe it is enough to have "gst-launch pipeline, but all devicetree-made" as a first step...

teburd commented 3 months ago

Right, you could treat an rtio_iodev as an endpoint (input or output stream), which is what sensors do today. There are effectively two output streams for sensors: one which is polled, and one which is driven by the sensor itself with an event.

The sensing subsystem provides what amounts to a statically defined pipeline with fanout/fanin possibilities and builds on top of this.

I could imagine video input/output endpoints being similar, though perhaps needing some additional information about image encoding and frame size to describe the metadata.

Image sensors (video) are after all sensors :-)

josuah commented 3 months ago

So let's give it a try! :) I will pick some hardware known to work first and move from here.

To everyone working on a sample or a driver: I wish to avoid disturbing any ongoing effort, so in case I missed an issue, do feel free to ping me about it. I can provide a patch to merge on the PR, or delay another PR so that the actual driver contribution gets merged first!

josuah commented 3 months ago

Some insightful overview of how the libcamera and V4L2 APIs came to be: https://archive.fosdem.org/2022/schedule/event/mobile_camera/ Key takeaways (from the slides):

Prior work:

teburd commented 1 month ago

@josuah any progress on doing video with RTIO? Any help needed? I have an interest in seeing this work

josuah commented 1 month ago

@teburd many thanks!

Not direct progress, but still interesting insights:

Where the most help is needed is probably in designing and discussing an API that gracefully fits the <video.h> model. Then I'd be quite glad to try to convert the existing drivers. I started gathering boards for that (NXP's FRDM, Some ST's...).

teburd commented 1 month ago

@teburd many thanks!

Not direct progress, but still interesting insights:

* USB Video Class with a slow CPU: context-switching seems to impact FPS.

This is pretty common. If interrupts are involved you can reduce the cost by using direct interrupts and controlling when the scheduler gets called, invoking it only when you know things need to be rescheduled: e.g. a completed entire transfer rather than a little peripheral FIFO ding to do some refilling.
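A sketch of that idea with Zephyr's direct-interrupt macros; VIDEO_DMA_IRQ and the video_hw_*() helpers are hypothetical register-level accessors:

```c
#include <stdbool.h>
#include <zephyr/kernel.h>
#include <zephyr/irq.h>

#define VIDEO_DMA_IRQ 25                 /* placeholder IRQ line */

bool video_hw_frame_complete(void);      /* hypothetical: poll a status register */
void video_hw_refill_fifo(void);         /* hypothetical: top up the peripheral FIFO */

K_SEM_DEFINE(frame_done_sem, 0, 1);

ISR_DIRECT_DECLARE(video_dma_isr)
{
	if (!video_hw_frame_complete()) {
		video_hw_refill_fifo();  /* small refill, no context switch */
		return 0;                /* do not reschedule */
	}

	k_sem_give(&frame_done_sem);
	return 1;                        /* whole frame done: reschedule */
}

static void video_irq_setup(void)
{
	IRQ_DIRECT_CONNECT(VIDEO_DMA_IRQ, 0, video_dma_isr, 0);
	irq_enable(VIDEO_DMA_IRQ);
}
```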

* There are independent parts to `video.h`: control (no change needed?), data (to integrate with RTIO).

* RTIO could be built as a separate API acting as a replacement for enqueue()/dequeue()

Those APIs could be replaced with the read/write ops for RTIO. Ideally we'd benefit then from some common infrastructure around tracing/statistics/timeouts/etc.

* RTIO could be built top of enqueue()/dequeue() at the cost of extra FIFO -> RTIO -> FIFO traffic but useful for migration

It can be, but it'd perhaps be an added cost rather than a benefit; I don't know enough of the video API to make a judgement on that. It's quite possible the API and implementations already do what's needed.

Where the most help is needed is probably in designing and discussing an API that gracefully fits the <video.h> model. Then I'd be quite glad to try to convert the existing drivers. I started gathering boards for that (NXP's FRDM, Some ST's...).

I don't have many boards that support video other than the imxrt1060 evk, which I do have the camera and screen for.

josuah commented 1 month ago

completed entire transfer rather than a little peripheral FIFO ding to do some refilling

Precious insight!

Those APIs could be replaced with the read/write ops for RTIO.

The API introduced by @loicpoulain looks similar to RTIO in practice:

7 drivers do I/O in total:

teburd commented 1 month ago

Sounds like it solved a very similar problem.

Waiting for a completion could be done with a timeout; io_uring has a similar wait_cqe_timeout type API call https://man.archlinux.org/man/io_uring_wait_cqe_timeout.3.en that would be easy to replicate, I'm sure.
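A sketch of what such a helper could look like on top of today's RTIO, built by polling the non-blocking rtio_cqe_consume(); rtio_cqe_wait_timeout() itself is hypothetical:

```c
#include <zephyr/kernel.h>
#include <zephyr/rtio/rtio.h>

struct rtio_cqe *rtio_cqe_wait_timeout(struct rtio *r, k_timeout_t timeout)
{
	k_timepoint_t end = sys_timepoint_calc(timeout);
	struct rtio_cqe *cqe;

	while ((cqe = rtio_cqe_consume(r)) == NULL) {
		if (sys_timepoint_expired(end)) {
			return NULL;    /* timed out */
		}
		k_msleep(1);            /* crude poll interval */
	}

	return cqe;
}
```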

josuah commented 4 weeks ago

Another challenge is to combine the RTIO and video buffers together.

https://github.com/zephyrproject-rtos/zephyr/blob/6b7558a9bf96fc5569d1b9b73a01f78ce6affe58/include/zephyr/rtio/rtio.h#L232-L260

https://github.com/zephyrproject-rtos/zephyr/blob/6b7558a9bf96fc5569d1b9b73a01f78ce6affe58/include/zephyr/drivers/video.h#L105-L119, plus the uint32_t flags; and uint32_t bytesframe fields introduced in https://github.com/zephyrproject-rtos/zephyr/pull/66994

Is it planned to just wrap the current video struct?

rtio_sqe.userdata = &video_buffer;

Or use RTIO instead of video_buffer mapping the fields like this maybe?

rtio_sqe.buf = video_buffer_buffer;
rtio_sqe.buf_len = video_buffer_size; /* if OP_RX */
rtio_sqe.buf_len = video_buffer_bytesused; /* if OP_TX */
rtio_sqe.userdata = video_buffer_driver_data;
(void *)video_buffer_bytesframe; /* does not fit */
(void *)video_buffer_timestamp; /* does not fit */

Or change the video buffer to only contain what does not fit into RTIO buffers, to avoid too much copying?

struct video_buffer_extra_info {
        uint32_t timestamp;
        uint32_t bytesframe;
} video_buffer;

rtio_sqe.userdata = &video_buffer;

Or modify RTIO to support the extra fields so that rtio_sqe can replace video_buffer completely?

If RTIO replaces struct video_buffer, then the allocation would need to be handled too.

Tagging @loicpoulain who introduced the API and contributors/reviewers @ArduCAM @CharlesDias @danieldegrasse @decsny @epc-ake @erwango @ngphibang (alphabetical order) in case anyone is interested in seeing this happen.

teburd commented 4 weeks ago

The way sensing deals with this is by encoding the extra info in the buffer itself and providing functions to get it back. Video can do something similar perhaps. It would require over-allocating the buffer itself to account for the metadata and not just the frame data then.

E.g. something like...

rtio_sqe_prep_read(video_input, buf, buf_len);
rtio_submit(r, 1);

struct video_metadata *metadata = video_buf_metadata(buf);
struct video_frame *frames = video_buf_frames(metadata, buf);
size_t frame_count = video_buf_frame_count(metadata, buf, buf_len);
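A rough sketch of how such accessors could be laid out; all names are hypothetical, and the point is that buf has to be over-allocated by the size of the metadata header:

```c
#include <stdint.h>

struct video_metadata {
	uint32_t timestamp;
	uint32_t bytesframe;
};

static inline struct video_metadata *video_buf_metadata(uint8_t *buf)
{
	return (struct video_metadata *)buf;   /* header at the start of the buffer */
}

static inline uint8_t *video_buf_frames(struct video_metadata *meta, uint8_t *buf)
{
	(void)meta;
	return buf + sizeof(struct video_metadata);   /* frame data follows */
	/* a frame_count accessor would follow the same pattern */
}
```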

XenuIsWatching commented 4 weeks ago

I have worked quite a bit with cameras, to the point where we made our own "internal" API with cameras due to some shortcomings with the current implementation of the video API. It's good to see some work on this now to improve it.

Some APIs that I've noticed are lacking in the current implementation are ways of setting the Physical Link Rate and Type such as CPHY or DPHY along with their rate in sps for CPHY and bps for DPHY. This would be a good feature that could be implemented somehow. Some cameras may be a bit 'advanced' and be able to stream out multiple frame types at once such as an IR image along with an RGB image; I'm not sure how that would currently be implemented to select which frame type to enable or set a ctrl for. Maybe enum video_endpoint_id ep could be used for it?

josuah commented 4 weeks ago

@XenuIsWatching

ways of setting the Physical Link Rate and Type such as CPHY or DPHY along with their rate in sps for CPHY and bps for DPHY

I think this PR is addressing this, leveraging port { endpoint { link-properties-here; }; };:

stream out multiple frame types at once such as an IR image along with an RGB image

This PR would allow specifying endpoint numbers in addition to the generic VIDEO_EP_IN/OUT/ANY/NONE.

I will comment on each PR...

teburd commented 4 weeks ago

Sensing deals with multiplexed fifo buffers today and could be used as a model to follow perhaps. Or you could have multiple output streams, each one acting like an IO device (struct rtio_iodev) by moving to that API.

josuah commented 3 weeks ago

@teburd

I did not think about placing the metadata in the buffer! I think all elements are in place and I will be able to test now.

It would require over allocating the buffer itself to account for the metadata and not just the frame data then.

This could be integrated into the video allocation functions then.

Sensing deals with multiplexed fifo buffers today and could be used as a model to follow perhaps. Or you could have multiple output streams, each one acting like an IO device (struct rtio_iodev) by moving to that API.

Time for experimentation. Thank you.

teburd commented 3 weeks ago

I have worked quite a bit with cameras, to the point where we made our own "internal" API with cameras due to some shortcomings with the current implementation of the video API. It's good to see some work on this now to improve it.

Some APIs that I've noticed are lacking in the current implementation are ways of setting the Physical Link Rate and Type such as CPHY or DPHY along with their rate in sps for CPHY and bps for DPHY. This would be a good feature that could be implemented somehow. Some cameras may be a bit 'advanced' and be able to stream out multiple frame types at once such as an IR image along with an RGB image; I'm not sure how that would currently be implemented to select which frame type to enable or set a ctrl for. Maybe enum video_endpoint_id ep could be used for it?

Right, this is a general issue in Zephyr today because we don't have a way of doing device-specific behaviors akin to ioctl. Sensing sort of deals with this with its attributes API, but there are still oddities with it. Because each call to set an attr is a partial reconfiguration of the device, and ordering can matter, there can be invalid configurations.

For example, many devices offer low power modes which will toggle the mems device on/off at the cost of noise. Usually these modes are limited in sampling rate. Sometimes the sample rates overlap, but frequently some sample rates only work in a low noise (always on) mode.

So now you have this quirk of ordering, where you may wish to change both the power mode and sample rate at the same time, but the API has no way of allowing this. Drivers then have to work out the implied meaning of a sample rate setting.
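One sketch of a way around that ordering quirk would be a batched configuration call, where the driver sees both settings at once and can validate the combination up front; all names here are hypothetical:

```c
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <zephyr/device.h>

struct imu_config {
	bool low_power;            /* low power (noisier) mode */
	uint32_t sample_rate_hz;
};

int imu_configure(const struct device *dev, const struct imu_config *cfg)
{
	/* The combination is validated as a whole... */
	if (cfg->low_power && cfg->sample_rate_hz > 400) {
		return -EINVAL;    /* this rate only exists in low noise mode */
	}

	/* ...and both registers can be programmed in whatever order the part needs */
	return 0;
}
```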

There's also the dai interface which fully gave up on trying to provide structured configuration and takes a void *config defined by each dai implementation.