ivapylibs / detector

Collection of detection methods.
BSD 3-Clause "New" or "Revised" License

inImage depth and ROS #7

Closed pv33 closed 3 years ago

pv33 commented 3 years ago

Create an additional class, or a class plus a script, whatever seems reasonable as a first pass, that sets up the processing workflow using a ROS publish/subscribe approach. The second case seems to be the norm (a class instance with a script-based entry point that gets run). The topics should be provided as strings to the class instance, which then sets them up: one for the input grayscale image and one for the output binarized image.

You should both be able to work on this together, either jointly or with one writing and the other confirming.

The input data should come from a short-duration ROS bag containing the minimal data needed for the test script to function. Its playback will trigger the callbacks that get the process running.

I am not sure what it should be named. Ideally it would actually be in a parallel ivaROS repository, but I think that creating a sub-package called ROS within this repository should also work out.

Thinking about it, this should probably be kept fairly general. Instantiation should receive the input topic, the detector instance, and the output topic. The subscriber callback passes the data along to the detector's process routine, and then its output gets published. If it can be generalized in this manner, then there might not be a need for a sub-package. Just put it in the improcessor package at the same level as inImage and call it something reasonable, like ROSwrapper.

Most likely there should be dropping of imagery if new data arrives before processing of the previous frame is finished. This should probably be a settable option. Lower priority for now, but it should be figured out soon enough.
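A minimal sketch of the wrapper described above (class and parameter names are illustrative, not from the repository). The ROS plumbing is injected as a plain callable so the callback logic can be exercised without a running ROS master; in actual use, `publish` would be a `rospy.Publisher(out_topic, ...).publish` and the callback would be registered via `rospy.Subscriber(in_topic, ..., wrapper.callback)`:

```python
# Hypothetical generic wrapper: input topic -> detector.process -> output topic.
# Assumes the detector exposes a process(image) method returning the result.
class ROSWrapper:
    def __init__(self, in_topic, detector, out_topic, publish, drop_if_busy=True):
        self.in_topic = in_topic          # input topic name (string)
        self.out_topic = out_topic        # output topic name (string)
        self.detector = detector          # object with a process(image) method
        self.publish = publish            # callable that sends the result out
        self.drop_if_busy = drop_if_busy  # drop frames arriving mid-processing
        self._busy = False
        self.dropped = 0                  # count of frames dropped while busy

    def callback(self, image):
        # Optionally drop imagery if new data arrives before the previous
        # frame has finished processing (relevant when callbacks can overlap).
        if self._busy and self.drop_if_busy:
            self.dropped += 1
            return
        self._busy = True
        try:
            result = self.detector.process(image)
            self.publish(result)
        finally:
            self._busy = False
```

With this decoupling, a bag-playback test only needs to wire the callback to a subscriber and pass the publisher's `publish` method at construction.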

Uio96 commented 3 years ago

I have discussed this topic with Yiye. He will redesign his current camera code and write a ROSwrapper. It will receive the input topic and the detector instance, apply the detector's methods to the input, and publish the results.

pv33 commented 3 years ago

@Uio96 I understand that Yiye preprocessed the depth data for saving as png and avi video. However, if it is to be processed directly with inRange by loading npz files, there is no need for all of that additional processing.

All preprocessing and clipping should be removed. The focus should be simply on extracting a binary region from depth values lying within a certain range. It is not essential that the hand be 100% captured, since the depth values are from an off perspective. The shorter the script the better; please trim unnecessary bloat.
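The range-based extraction requested here amounts to a single thresholding step. A minimal sketch with NumPy (the function name, bounds, and npz key are illustrative; `cv2.inRange(depth, lo, hi)` would do the same job if OpenCV is already a dependency):

```python
import numpy as np

def depth_in_range(depth, lo, hi):
    """Binarize a depth image: 255 where lo <= depth <= hi, else 0.

    `depth` is a 2D array of depth values (e.g. loaded from an npz file);
    `lo` and `hi` are bounds in the same units as the depth data. This is
    the inRange operation with no other preprocessing or clipping.
    """
    return (((depth >= lo) & (depth <= hi)) * 255).astype(np.uint8)
```

Usage would then be just a load and a call, e.g. `mask = depth_in_range(np.load("frame.npz")["depth"], 0.4, 1.2)` (key and bounds hypothetical), which keeps the test scripts short.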

This message applies to image04 and image06 scripts.

Uio96 commented 3 years ago

I have removed the depth processing part from the testing scripts.