Issue by myungjoo-ham, Thursday Jun 07, 2018 at 01:59 GMT
Originally opened as https://github.sec.samsung.net/STAR/nnstreamer/issues/64

201807 Goal: test with a straight stream. (video-src --> t_convert --> t_filter (tensorflow-lite) --> t_sink)

Full Time Tasks (to be allocated / taken): proposed tasks for @jy1210-jung @jinhyuck83-park @sewon-oh @hello-ahn

- [x] tensor_filter_tensorflow_lite ( @jinhyuck83-park @hello-ahn )
  - Requires understanding of both TF-Lite and GStreamer. (2 people / <2 months for "1.0")
  - Need to guarantee that no memcpy occurs along the data path.
  - Additional problem to solve: would it be more efficient to let tensorflow_lite itself (instead of tensor_filter / tensor_filter_tensorflow_lite) allocate the output buffer? If so, we need to add an interface between tensor_filter (main) and the tensor_filter_* subplugins for output buffer management.
- [x] CI / integration test (not unit test) (1 person / <2 months for "1.0") ( @sewon-oh )
- [x] tensor_sink (app sink) (1 person / <2 months for "1.0") ( @jy1210-jung )

Part Time Tasks (do these as side jobs):

- Verify the APIs of tensor_filter::main and suggest revisions.
- Create TF-Lite models for test cases.

Not urgent but important tasks:

- (later) Provide a performance measurement standard for NNFW/models.
- (later) Provide performance recommendations for NNFW usage.

Postponed (later) tasks:

- (later) Possibly keep working on other NNFWs (TensorFlow, Caffe, Caffe2, ...).