icatcherplus / icatcher_plus

iCatcher+: Robust and automated annotation of infant gaze from videos collected in laboratory, field, and online studies
GNU General Public License v3.0

Unclear instructions for webapp, and speeding up the process #76

Closed yoterel closed 7 months ago

yoterel commented 7 months ago

On behalf of anonymous user:

We have already been able to install iCatcher+ using the terminal. Nevertheless, we're struggling to upload the data to the web app. As the instructions on GitHub are not very informative, we don't know whether we need to load an empty labels.txt file or one containing annotations made previously. It is also unclear whether we must upload the whole video or split it into a frame-by-frame format. If the latter is the case, we are not sure whether we have to label each frame (image) before uploading it to the iCatcher+ app (as this doesn't make much sense to us).

Finally, do you know of an app or software that could help us speed up the process? We have quite a number of videos that need to be handled.

yoterel commented 7 months ago

Hi,

As the instructions on GitHub are not very informative

I apologize for this :) I read them myself and I agree they aren't clear enough. I will fix this.

Let me explain the workflow for using the web app (which can currently only be used to view the data; future versions will allow editing).

  1. You have a video or a folder of videos you wish to annotate automatically.
  2. You run iCatcher+ on that folder with the --ui_packaging_path option, preferably on a machine with a GPU available, so you can benefit from more performant models and accelerate the process.
  3. The result is a folder (or multiple folders, if you ran iCatcher+ on a folder of videos) that contains a file named "labels.txt", filled with the annotations, and a folder named "decorated_frames" filled with all the frames of the video as image files.

You can then run icatcher in app mode (icatcher --app), and it will prompt you to upload a directory (this is a local upload; nothing is sent anywhere). Select the folder produced in step 3, and you can then use the UI.
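Before launching the app, it can be useful to sanity-check that a folder matches the layout described in step 3. The helper below is a hypothetical sketch, not part of iCatcher+; it only assumes the folder contains a labels.txt file and a decorated_frames directory, as described above:

```python
from pathlib import Path

def looks_like_ui_package(folder):
    """Heuristic check that a folder matches the step-3 output layout
    (assumption: a labels.txt file plus a decorated_frames directory)."""
    folder = Path(folder)
    return (folder / "labels.txt").is_file() and (folder / "decorated_frames").is_dir()
```

If this returns False for the folder you are about to upload, iCatcher+ probably was not run with --ui_packaging_path, or the path points one level too high or too low.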

it is not very clear if we must upload the whole video or if we have to split it into a frame-by-frame format

Neither! As explained above, this is automatically done for you if you pass the --ui_packaging_path option. You must supply this flag with a valid path, though. For example:

icatcher "/path/to/some/video.mp4/or/folder/of/videos" --ui_packaging_path "/path/to/some/output/folder/"

App or Software that could help us speed up the process

Please view this web page and scroll down to the GPU track (though I think you would benefit from reading all of it). As I mentioned before, there are ways to accelerate the annotation process significantly, depending on your requirements. I would start by writing a script to automate the process...
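As a starting point for such a script, one option is to drive icatcher yourself, one invocation per video, so batches can be split across machines or sessions. This is a sketch under stated assumptions: the videos/ and ui_output/ paths are placeholders you would replace, and the command built here mirrors the --ui_packaging_path usage shown earlier in this thread.

```python
import subprocess
from pathlib import Path

VIDEO_DIR = Path("videos")      # assumption: your folder of input videos
OUTPUT_DIR = Path("ui_output")  # assumption: where packaged results should go

def build_commands(video_dir, output_dir, extensions=(".mp4", ".avi", ".mov")):
    """Build one icatcher invocation per video file (sketch, not an official tool)."""
    cmds = []
    for video in sorted(Path(video_dir).iterdir()):
        if video.suffix.lower() in extensions:
            cmds.append([
                "icatcher", str(video),
                "--ui_packaging_path", str(Path(output_dir) / video.stem),
            ])
    return cmds

if __name__ == "__main__" and VIDEO_DIR.is_dir():
    for cmd in build_commands(VIDEO_DIR, OUTPUT_DIR):
        subprocess.run(cmd, check=True)  # run icatcher on each video in turn
```

Note that iCatcher+ can already process a whole folder in one call, so a per-video driver like this mainly helps if you want to resume partial runs or distribute work.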

If something still isn't working, can you post the Python version you are using and the command-line options you use to launch iCatcher+? Then I can help further.

yoterel commented 7 months ago

FYI: I have created a pull request to deal with the slightly confusing flags and README. Once it is approved, the new version (0.2.2) will have a much clearer way of producing output for the UI.

yoterel commented 7 months ago

I am closing this issue, as I haven't received any response. Feel free to reopen if you have any queries.