mattdawkins opened this issue 6 years ago
I would assume that image-processing and video-processing pipelines would differ in the component nodes used, so I'm not sure this is entirely straightforward. E.g., how are image files disseminated to image-specific algorithms, and aggregated later, in a globally uniform way?
I'm not an expert with kwiver/sprokit, however.
Also, what other toolkits are we comparing to?
I assume you are talking about video processing, where a list of images is treated as if it were a video, and you want to be able to process either a list of images or an encoded video file (e.g. MPG, AVI, etc.) in the same way. This is what the video_input algorithm is for: video_input_image_list handles image lists, while vidl_ffmpeg_video_input handles other video files. Once you have created the video_input object, it works the same regardless of the type of image source.
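To make that concrete, here is a minimal C++ sketch of driving the video_input interface (creation of the instance is omitted, and the loop uses the open/next_frame/frame_image/close calls from the vital algorithm interface; treat this as illustrative rather than verbatim KWIVER code):

```cpp
// Minimal sketch (assumed usage, not verbatim KWIVER code): once a
// video_input instance exists, the frame loop is identical whether the
// backing implementation reads an image list or an encoded video.
#include <string>

#include <vital/algo/video_input.h>
#include <vital/types/timestamp.h>

void process_frames( kwiver::vital::algo::video_input_sptr vi,
                     std::string const& source )
{
  vi->open( source );  // image list file or video file, same call

  kwiver::vital::timestamp ts;
  while ( vi->next_frame( ts ) )      // advance to the next frame
  {
    auto image = vi->frame_image();   // current frame as an image_container
    // ... pass `image` to downstream algorithms ...
  }

  vi->close();
}
```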
What is missing is a factory object of some sort that can figure out which type of video_input to use, based on the file name or on introspection of the file contents. Is that what this issue is about?
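A minimal sketch of what such a factory could look like, assuming implementations are registered under names like "image_list" and "vidl_ffmpeg" (the function name and the extension heuristic are hypothetical; nothing like this exists in KWIVER yet):

```cpp
// Hypothetical factory helper: pick a video_input implementation name from
// the file extension. choose_video_input_type() and the registered names
// "image_list" / "vidl_ffmpeg" are assumptions for illustration only.
#include <string>

std::string choose_video_input_type( std::string const& filename )
{
  auto const dot = filename.rfind( '.' );
  std::string const ext =
    ( dot == std::string::npos ) ? "" : filename.substr( dot + 1 );

  // A plain-text file is assumed to be a list of image paths; anything
  // else is handed to the FFMPEG-backed reader.
  return ( ext == "txt" ) ? "image_list" : "vidl_ffmpeg";
}
```

Content introspection (e.g. sniffing magic bytes) would be more robust than extension matching, but the extension check shows the shape of the idea.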
Currently there is some support for modifying the behaviour of a pipeline using command line parameters on pipeline_runner. Common options:

- -c [ --config ] FILE: a file containing supplemental configuration entries.
- -s [ --setting ] VAR=VALUE: an additional configuration setting.
- -I [ --include ] DIR: a directory to be added to the configuration include path.
A -c config file could be used to include a specific config snippet that configures the video input with a specific algorithm, and -s can set a config variable such as a file or directory name (or anything else). -I may not be that useful in this situation.
These are only examples of how the command line configurability can be used; see the sketch below.
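An illustrative invocation might look like the following. The pipe file name, the config snippet, and the input:video_filename key are all hypothetical; -c and -s are the options listed above, and -p is assumed to name the pipeline file:

```sh
# Illustrative only: run the same pipeline against a video file by swapping
# in a config snippet (-c) and overriding one setting (-s). File names and
# the config key are made up for the example.
pipeline_runner -p detect_and_track.pipe \
    -c video_source.conf \
    -s input:video_filename=clip.mpg
```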
You should be able to switch on the command line between feeding an image list and a video into a pipeline. As it stands, you need to make separate .pipe files for the two cases, which you don't need to do in other toolkits.