First of all, I would like to thank the whole team for the great work done on this project. I intend to reproduce the OVC4 and would like to get some feedback on the project.
Greetings
Our development effort is currently focused on OVC5, which is based on Zynq UltraScale+. We decided to move away from Argus/NVIDIA precisely due to latency issues as you mentioned, for which we could not seem to find a solution or workaround. Our current Zynq-based design overlaps image reception with image transmission to the USB3 host, resulting in sub-frame latency. We built the prototype OVC5 using frankenstein devkits, but a custom board is currently in fabrication, as shown here: https://github.com/osrf/ovc/tree/master/ovc5
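For intuition only, here is a software analogy of that overlap: a producer/consumer pair where each chunk of a frame is forwarded to the host as soon as it arrives, instead of waiting for the full frame. The real OVC5 does this in the Zynq fabric and firmware, so everything below (thread structure, chunk size, the placeholder send step) is illustrative, not project code.

```cpp
// Conceptual sketch only: overlapping reception and transmission at
// line granularity keeps the added latency below one frame time.
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

std::mutex m;
std::condition_variable cv;
std::queue<std::vector<uint8_t>> chunks;  // lines/strips received so far
bool done = false;

void receiver() {                  // stand-in for the MIPI/DMA side
  for (int i = 0; i < 800; ++i) {  // e.g. 800 lines per frame
    std::vector<uint8_t> line(1280, 0);
    {
      std::lock_guard<std::mutex> lk(m);
      chunks.push(std::move(line));
    }
    cv.notify_one();               // transmitter can start right away
  }
  { std::lock_guard<std::mutex> lk(m); done = true; }
  cv.notify_one();
}

void transmitter() {               // stand-in for the USB3 bulk endpoint
  while (true) {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [] { return !chunks.empty() || done; });
    if (chunks.empty() && done) break;
    auto line = std::move(chunks.front());
    chunks.pop();
    lk.unlock();
    // send(line) to the host here; this overlaps with reception of
    // the next lines of the same frame.
  }
}

int main() {
  std::thread rx(receiver), tx(transmitter);
  rx.join();
  tx.join();
}
```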
As for integrating new camera types, we have deliberately used the 2-lane FFC MIPI pinout made popular by Raspberry Pi (and friends) to make that as easy as possible. It looks like that Arducam module also uses the same pinout, so it should "just work", at least electrically. Software-wise, you'll need to chase down the required register settings. This can range from "easy" to "very difficult" depending on imager documentation access and/or available open-source code to follow along.
Cheers
Hi!
To add to a few points:
* Do you consider this version mature enough to be used in the specified conditions (six picam synchronized with an IMU)?
You should be able to connect six Picams (I only tried two personally), but they won't be synchronized: the Picam v2 (or even the HQ one, if I remember correctly) doesn't have a dedicated pin for hardware synchronization.
* Have you noticed any particular issues during development or things you would like to see addressed in future releases (I am very curious to understand what drives your design preferences in each version)?
The designs have usually been driven by the requirements of the people we worked with. For example, the first prototypes were for drone applications, so global-shutter imagers and full integration (a single PCB with the imagers and the computational device) were desirable to reduce weight and size.
For OVC4 / OVC5 we are trying to make a prototyping platform that can be used with as many easily acquirable sensors as possible. As @codebot mentioned, the Picam pinout is as close to a standard for MIPI FFC connectors as we will get; with both Raspberry Pis and Jetson platforms using it, a lot of vendors provide sensors with that pinout.
The code is also structured for flexibility, with a SensorManager object that probes all the I2C buses at runtime, detects which imager has been connected, and then configures it through imager-specific instructions. Previous OVCs were all "locked" into a single imager.
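For anyone curious what that kind of probing can look like, here is a minimal sketch assuming Linux i2c-dev. It is not the actual OVC SensorManager code; the bus range, slave addresses, and chip-ID registers are illustrative values that would normally come from the board layout and the imager datasheets.

```cpp
// Minimal sketch of runtime imager detection over I2C (not the OVC code).
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

struct KnownImager {
  std::string name;
  uint8_t     i2c_addr;   // 7-bit slave address
  uint16_t    id_reg;     // chip-ID register
  uint16_t    id_value;   // expected chip ID
};

// Illustrative entries; real values come from each imager's datasheet.
static const std::vector<KnownImager> kImagers = {
  {"imx219 (Picam v2)", 0x10, 0x0000, 0x0219},
  {"ov9281",            0x60, 0x300A, 0x9281},
};

// Read a 16-bit register (big-endian register address and value).
static bool read_reg16(int fd, uint8_t addr, uint16_t reg, uint16_t* out) {
  if (ioctl(fd, I2C_SLAVE, addr) < 0) return false;
  uint8_t tx[2] = {uint8_t(reg >> 8), uint8_t(reg & 0xff)};
  if (write(fd, tx, 2) != 2) return false;
  uint8_t rx[2];
  if (read(fd, rx, 2) != 2) return false;
  *out = (uint16_t(rx[0]) << 8) | rx[1];
  return true;
}

int main() {
  // Probe a handful of buses; on the real board the bus list would
  // come from the device tree / hardware layout.
  for (int bus = 0; bus < 6; ++bus) {
    std::string dev = "/dev/i2c-" + std::to_string(bus);
    int fd = open(dev.c_str(), O_RDWR);
    if (fd < 0) continue;
    for (const auto& img : kImagers) {
      uint16_t id = 0;
      if (read_reg16(fd, img.i2c_addr, img.id_reg, &id) && id == img.id_value) {
        printf("bus %d: detected %s\n", bus, img.name.c_str());
        // A real SensorManager would now hand off to the
        // imager-specific configuration routine.
      }
    }
    close(fd);
  }
  return 0;
}
```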
* It seems you are using the Argus sdk from Nvidia to fetch the image frames, did you observe any latency? (I had this kind of trouble with a Jetson TX2 & Argus in the past)
Yes, as mentioned this was the main reason we moved away from the platform. Depending on the configuration there was 2-3 frames of latency (compared to less than one frame in OVC5), which is too high for many robotics applications.
* How difficult would it be to modify the project to integrate a camera such as the OV9281 1MP from Arducam?
As @codebot mentioned, if they are electrically compatible the main difficulty is in finding the configuration, since imagers are usually a very protected market. As shown in the header's copyright notice, the Picam driver was developed starting from the NVIDIA Linux kernel fork, where they have a header file with a list of sensor modes (resolution / FPS) and the corresponding sets of I2C instructions. I had a quick look at their kernel fork and I could find a similar file for the OV9281, which makes me think the integration should be reasonably simple.
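To give a feel for what those mode tables contain, here is a rough sketch of the structure. The register addresses and values are placeholders, not a working OV9281 configuration; a real table is taken from the vendor's reference code or datasheet.

```cpp
// Illustrative shape of an imager "mode table": each mode is just a list
// of I2C register writes for one resolution / frame-rate combination.
#include <cstdint>
#include <vector>

struct RegWrite {
  uint16_t reg;
  uint8_t  val;
};

struct SensorMode {
  int width;
  int height;
  int fps;
  std::vector<RegWrite> regs;
};

static const SensorMode kExampleMode = {
  1280, 800, 60,
  {
    {0x0100, 0x00},  // placeholder: streaming off while configuring
    // ... resolution, timing, PLL and exposure registers go here ...
    {0x0100, 0x01},  // placeholder: streaming on
  },
};

// A driver walks the table and issues one I2C write per entry, i.e. the
// write counterpart of the read_reg16() helper sketched above.
```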
Anyway, since the code for OVC4 / OVC5 is very similar (even though the underlying hardware is wildly different), most of what I wrote about flexibility and how to develop a driver applies to both.
@codebot @luca-della-vedova Your answers are pretty clear and well explained, thank you!
I have a better understanding of the intent of recent developments. I was able to clone the OVC3 and it is a good example of a very versatile device.
I have another question for you. In my particular case I would like to take advantage of a platform with a GPU for a robotic application. My goals are also to use a global shutter camera and to reduce the latency as much as possible. That's why I will look towards the OVC2.
Are there similar challenges observed while designing the OVC2 (delay, design flaws, etc.)?
Sorry if we are going off topic, but I think that this information could be useful to others as well.
The reason we moved to offboard camera modules in OVC4 and OVC5 was that it's a huge benefit to be able to mix and match imager chips later, and to experiment with and modify the baseline by changing how the sensor boards are mechanically mounted.
OVC1, OVC2, OVC3 all use image sensors that are soldered to the board. This is convenient (no cables), but if you ever want to change either the image sensor or the baseline... you can't.
I'd strongly suggest going with OVC5 and streaming data to an offboard GPU via USB-C. Embedded GPUs are neat, but nothing beats a "real" GPU on a full-size PCIe card, if you have the space. Note that embedded GPUs like the NVIDIA TX2 generally use a custom socket pinout, so you can't really upgrade it unless the GPU module vendor happens to produce an upgraded module that keeps the connector identical. That has happened "sometimes" in the past, but it's totally out of your control. In contrast, a USB-C connection to a host is industry-standard, and you're guaranteed to be able to get a faster GPU next year.
Your explanations are quite useful. I am closing the subject since the question has been answered. Thank you.