kutenai opened this issue 4 years ago
Hello,
It's not so easy: you need to write a driver so that the camera is recognized as a "webcam".
You can start by studying the D8M here https://github.com/electro-logic/CameraVision to get some good images and learn how to configure it. The next step will be the driver; you can look for an open-source Linux driver for the OV8865 as a starting point, but you need to handle the FPGA/SoC interconnection (e.g. you could implement an AXI-Stream interface to move the pixels to the ARM processor).
Cheers, Leonardo
Thanks for the rapid feedback Leonardo. I do not think I can tackle this task right now, so I'm looking at other approaches and a different development kit. I have a DE10-Nano, but I also have a CriticalLink Vision Development Kit.
I just need to work out how to get the OpenCL pipeline inserted into the image stream on that kit.
Hi, you don't need to move the pixels to the ARM processor if you only need OpenCL (and you don't need to write a driver). You can write an "RTL function" exposed to OpenCL that works with your camera. Can you explain in more detail what you want to achieve? I don't know the CriticalLink dev kit, but to use OpenCL you need an OpenCL BSP, which is not so trivial.
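To give a rough idea of what "RTL function" means here: the Intel FPGA SDK for OpenCL lets you package a Verilog/VHDL component as an OpenCL library (built with `aocl library`) and call it from a kernel like a normal function. A minimal sketch, assuming a hypothetical `read_camera_pixel` component (the real interface depends entirely on your RTL design):

```c
// Hypothetical example: "read_camera_pixel" stands in for an RTL component
// packaged as an OpenCL library; its body lives in Verilog/VHDL, not here.
uint read_camera_pixel(uint index);

__kernel void grab_frame(__global uint* restrict out, const uint n_pixels)
{
    for (uint i = 0; i < n_pixels; i++) {
        // The compiler instantiates the RTL component in the kernel pipeline
        out[i] = read_camera_pixel(i);
    }
}
```

The library is then linked at kernel compile time (e.g. with `-l` / `-L` options to `aoc`), so no OS driver is involved at all.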
I want to be able to demonstrate an inference engine implemented in the FPGA: take a live camera feed, process the frames through an OpenCL CNN/YOLOv3 inference engine, and output the results in a format that could be used by a flight controller or some other type of vehicle control unit.
I do not expect the Cyclone V to be very fast, so we have no specific "FPS" target. If we can get a working prototype, we can then explore the option of building a system using a more powerful FPGA, like an Arria 10.
So, this task is really to explore and learn about the process, which includes:
The CriticalLink kit has a working prototype. The camera input is a Basler 5MP camera. The input to the board is BLVDS, and there is already VHDL code to convert that data to a vision stream. This vision stream is then put onto an Avalon-ST streaming interface, saved to RAM, copied from RAM, merged with a 'background image', and streamed to an HDMI output.
All of the above is done in the FPGA, without OpenCL. I don't have (yet) the full BSP for the MitySOM Vision Development Kit [https://support.criticallink.com/redmine/projects/5csx_vdk_basler/wiki].
So I need to build a BSP for that board, and include the OpenCL kernel "Freeze Wrapper".
I am currently learning how to build a custom BSP and trying to learn more about OpenCL. Your project is a good starting point since it already has streaming from USB into OpenCL and back out.
Now your goal is much clearer.
OpenCL BSP development is not trivial, and (in my opinion) the only feasible option is to start from a reference design for a very similar board and modify it. From scratch it's almost impossible because the documentation is not very detailed; even manufacturers like Terasic start from a reference BSP. Both Terasic and CriticalLink provide an OpenCL BSP for your boards, so why do you need to develop a new one?
Once you have an OpenCL BSP, the compiler will merge it automatically with your kernel. This part is the standard workflow, no problems here.
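For context, the host side of that standard workflow looks roughly like this: `aoc` compiles the kernel together with the BSP into an `.aocx` image, which the host application loads with `clCreateProgramWithBinary` (kernels are not built from source at runtime on the FPGA). A minimal sketch, with error handling trimmed and the file name assumed:

```c
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

// Load a precompiled .aocx image and wrap it in a cl_program.
cl_program load_aocx(cl_context ctx, cl_device_id dev, const char *path)
{
    FILE *f = fopen(path, "rb");          // e.g. "kernel.aocx" (assumed name)
    fseek(f, 0, SEEK_END);
    size_t size = (size_t)ftell(f);
    rewind(f);
    unsigned char *bin = malloc(size);
    fread(bin, 1, size, f);
    fclose(f);

    cl_int err;
    cl_program prog = clCreateProgramWithBinary(ctx, 1, &dev, &size,
                                                (const unsigned char **)&bin,
                                                NULL, &err);
    clBuildProgram(prog, 1, &dev, "", NULL, NULL);  // effectively a no-op for binaries
    free(bin);
    return prog;
}
```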
I recommend developing the kernel on your PC, using only the features supported by the FPGA, and compiling it for the FPGA only when everything is working. Compile times are super long.
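To make that concrete: with the Intel FPGA SDK you can compile for the x86 emulator with something like `aoc -march=emulator kernel.cl`, which finishes in seconds, while the full hardware compile of the same file can take many hours. A trivial kernel that runs unchanged in both flows (names are illustrative):

```c
// Simple per-pixel threshold; sticks to features the FPGA flow supports.
__kernel void threshold(__global const uchar* restrict in,
                        __global uchar* restrict out,
                        const uchar level)
{
    size_t i = get_global_id(0);
    out[i] = (in[i] > level) ? 255 : 0;
}
```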
Remember, with the D8M you have a lot of control, but this means that you need to manage the focus motor, gamma, color balance, etc. to get good images. At the same time, it's a good learning experience.
In my project I'm not using OpenCL but only a NIOS (a soft-core) to handle the PC-FPGA communication. The project writes a frame from the camera to the SDRAM memory, and when a command is received the NIOS firmware sends the frame to the PC over JTAG through the USB connector.
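The firmware loop is conceptually very small. Here is a hedged sketch of that idea, assuming the Nios II HAL with stdio routed to the JTAG UART; `FRAME_BASE`, `FRAME_SIZE` and the `'F'` command are hypothetical names, not the actual values from the project:

```c
#include <stdio.h>
#include <stdint.h>

#define FRAME_BASE ((volatile uint8_t*)0x08000000)  /* assumed SDRAM frame address */
#define FRAME_SIZE (800 * 480 * 2)                  /* assumed frame size in bytes */

int main(void)
{
    for (;;) {
        int cmd = getchar();                 /* wait for a command from the PC  */
        if (cmd == 'F') {                    /* 'F' = "send one frame" (assumed) */
            for (uint32_t i = 0; i < FRAME_SIZE; i++)
                putchar(FRAME_BASE[i]);      /* stream the bytes over JTAG      */
        }
    }
    return 0;
}
```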
You can adapt the project to read the frame from memory in an OpenCL kernel. The MIPI controller is writing the memory and OpenCL is reading it, so you need some kind of arbitration. The simplest way could be a small RTL component that interfaces to the main OpenCL kernel through a channel, telling it when a frame is ready in memory and when a new frame can be written by the MIPI controller.
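A hedged sketch of that handshake on the kernel side, using Intel FPGA OpenCL channels. The `io(...)` names are hypothetical and would have to match channel interfaces that the RTL arbiter exposes in the BSP's board_spec.xml:

```c
#pragma OPENCL EXTENSION cl_intel_channels : enable

// Token from the RTL arbiter when a full frame is in memory.
channel uint frame_ready __attribute__((io("arbiter_frame_ready")));
// Token back to the arbiter so the MIPI controller may overwrite the buffer.
channel uint frame_done  __attribute__((io("arbiter_frame_done")));

__kernel void consume_frame(__global const uchar* restrict frame,
                            __global uchar* restrict out,
                            const uint frame_bytes)
{
    uint id = read_channel_intel(frame_ready);  // block until a frame is ready
    for (uint i = 0; i < frame_bytes; i++)
        out[i] = frame[i];                      // placeholder for real processing
    write_channel_intel(frame_done, id);        // release the buffer
}
```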
CriticalLink provides a reference BSP for their Cyclone V System on Module (SOM). This card is only part of the Vision Development Kit, though, but it does include an example with OpenCL. What is missing is a BSP with support for both the VDK AND OpenCL, so that is what I need to build.
I was looking at the DE10-Nano as an option, but the problem there is that there isn't a BSP with support for the D8M camera. I could possibly use the DE10-Nano with this reference BSP and find a suitable USB camera for it. I don't have one at the moment, though.
I'm not sure how large the OpenCL kernel can be in your BSP, or what the streaming options into the kernel are... as I learn more about BSP development, I'll study your version to at least learn more, and I can make a decision then.
Oh, also, for the VDK the camera is not a raw sensor but a sensor board. That board handles the low-level details of the camera sensor, so there is no need for the extensive configuration I'd have to do with the D8M. Also, the VDK already has a streaming interface set up.
The trick is to insert the OpenCL kernel into the VDK's streaming interface. The other issue is that I do not have the actual BSP for the VDK -- I've requested it on the CriticalLink forum, but no response yet -- to be fair, I only asked about it this morning.
So, I'm trying to combine information from the two BSPs.
If you are not super interested in the low-level details, you can buy a DE10-Nano and focus only on the OpenCL kernel, using this c5soc_opencl project and connecting a webcam. Why do you want to connect a camera sensor? If you want better quality, the best option is to connect a mirrorless/reflex camera with a proper lens through a capture device (e.g. something like the Elgato Capture 4K, but Linux compatible).
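For reference, the webcam side on the ARM/Linux system is straightforward with V4L2. A minimal sketch, assuming a UVC camera at /dev/video0 whose driver supports read() (many cameras require the mmap streaming API instead), with the frame buffer then handed to the OpenCL host code:

```c
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    // Request a 640x480 YUYV format (whether it's honored depends on the camera)
    struct v4l2_format fmt = {0};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width  = 640;
    fmt.fmt.pix.height = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { perror("VIDIOC_S_FMT"); return 1; }

    size_t size = fmt.fmt.pix.sizeimage;
    unsigned char *buf = malloc(size);
    if (read(fd, buf, size) < 0)          /* only if the driver supports read() */
        perror("read");
    /* ... pass buf to clEnqueueWriteBuffer() and launch the kernel ... */

    free(buf);
    close(fd);
    return 0;
}
```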
You make a good point. It won't hurt to spend some time with the DE10 and a webcam, and I might find it is all I need, saving me the time of building the BSP for the VDK.
I have a DE10-Nano, and I have a couple of USB cameras: a C930e and a QuickCam 9000. Both have USB Type-A connectors. I'll need an adaptor or something to connect to the DE10-Nano -- either USB Type-A to USB Mini-A or Micro-A. I'm not sure which port on the DE10-Nano to connect the USB camera to.
You probably need a USB hub to connect a mouse, keyboard and webcam. I don't have the DE10-Nano, but usually you can use any USB port connected to the ARM.
That is one reason we want to avoid USB: we need to use the development kit in a portable application and cannot plug all of those bits together. That is why I've been using the VDK.
I received some more information about the VDK, so I'll be spending some time working on a "BSP" for that one, as it is ideally suited to our application -- just without an OpenCL BSP.
I greatly appreciate all of your advice and help.
The hub, mouse and keyboard are needed only to ease development; you can connect one webcam directly to the board when everything is finalized. Portable means you have to think about power as well, and the DE10-Nano is really "nano": a good power bank with a 3D-printed case for both can do the job without too much hassle (there is a reference 3D model on the Terasic website), and the cable can be hidden inside the case (depending on your scenario, having some cable lets you position the webcam more precisely).
Of course, if this is a real project with some low production volume and you want something super portable and fine-tuned, you can design a custom PCB using, for example, the MitySOM module, but this is not an easy task because camera sensors are high-speed.
Anyway, you know your requirements, and since you already have the VDK... give it a try.
You are welcome, Leonardo
What would be required to make this work with the Terasic D8M-GPIO camera?