feketerigo96 closed this issue 4 years ago.
Hi, I hope that a general guidance will suffice here:
1. Compile vilib into a shared library with make solib.
2. Create a normal executable (that you link against the shared library), in which you:
   a) Grab your camera's frames: depending on your camera, you should be able to achieve this with v4l2 or your custom driver; there are many examples online showing how to do this.
   b) Convert the frames to grayscale: the algorithms presented in the paper operate on grayscale images, so you may need a YUYV/RGB(A)-to-grayscale conversion if your camera does not already provide grayscale output.
   c) Initialize feature detection or tracking: as you did not really specify which one you want, I'd advise you to consult how feature detection/tracking is initialized in the existing test code and follow that pattern.
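For step b), a minimal sketch of the YUYV-to-grayscale conversion: in packed YUYV (YUY2), two horizontally adjacent pixels share one U and one V sample, so the luma plane can be extracted by taking every second byte. The function name here is just an illustration, not part of the vilib API.

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Extract the 8-bit luma plane from a packed YUYV frame.
// Byte layout per pixel pair: [Y0 U0 Y1 V0], so every even byte is a Y sample.
std::vector<uint8_t> yuyv_to_gray(const uint8_t* yuyv, int width, int height) {
  std::vector<uint8_t> gray(static_cast<size_t>(width) * height);
  for (size_t i = 0; i < gray.size(); ++i) {
    gray[i] = yuyv[2 * i];  // skip the interleaved U/V bytes
  }
  return gray;
}
```

If your camera delivers RGB(A) instead, you would use a weighted sum of the channels rather than this byte extraction.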
Once you have your initial version running, you could try loading the image directly into the 0th level of the image pyramid in device memory to omit the host-to-device transfer.
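The idea above can be sketched with the CUDA runtime API as follows. Note this is a hypothetical illustration: d_level0 and d_pitch stand in for vilib's level-0 device pointer and its row pitch, and how you obtain them depends on the vilib version you use.

```cuda
#include <cuda_runtime.h>
#include <cstdint>
#include <cstddef>

// Hypothetical sketch: copy a grayscale host frame straight into the
// level-0 pyramid buffer in device memory. d_level0/d_pitch are
// placeholders for vilib's internal buffer and pitch, not real API names.
void upload_level0(uint8_t* d_level0, size_t d_pitch,
                   const uint8_t* h_gray, int width, int height) {
  // cudaMemcpy2D handles the pitched device allocation row by row;
  // host rows are assumed tightly packed (pitch == width).
  cudaMemcpy2D(d_level0, d_pitch,
               h_gray, static_cast<size_t>(width),
               static_cast<size_t>(width), static_cast<size_t>(height),
               cudaMemcpyHostToDevice);
}
```

With this in place, the pyramid's downsampling kernels can start from the frame already resident on the GPU.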
Thanks, that helps a lot.
I want to test vilib on my mobile robot, but the test binary is based on the EuRoC dataset. Can you tell me how to run a live webcam test?