Closed: arvindsaraf closed this issue 1 year ago
Regarding gestures, yes, that seems correct.

But I would not do any of the rest in the post-detection logic; I'd do it in the detection logic itself, which you can modify as needed. The reason is that you want to plug in where it makes the most sense: for example, if you're going to run a custom emotion model, you want it to run on an already-prepared face tensor, not get the result, crop the face, convert the cropped image to a tensor, and only then execute the model.

Now, can you just "plug in" a custom emotion model? Most likely not, as each model has different input and output values, which need to be parsed in an execution module.

High-level docs are available at: https://github.com/vladmandic/human/wiki/Module

And in the future, please create a question under Discussions, not an Issue.
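To illustrate the kind of parsing such an execution module has to do, here is a hedged sketch in plain JavaScript. The label list, logit ordering, and score threshold are assumptions for illustration, not values from this library: a model that emits one raw logit per class needs a softmax and a label mapping before its output looks like an emotion result.

```js
// Hypothetical post-processing for a custom emotion model.
// LABELS and the logit ordering are assumptions, not this library's values.
const LABELS = ['angry', 'happy', 'neutral', 'sad', 'surprise'];

function softmax(logits) {
  const max = Math.max(...logits); // subtract max for numerical stability
  const exps = logits.map((v) => Math.exp(v - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((v) => v / sum);
}

// convert raw logits into sorted { score, emotion } records
function parseEmotion(logits, minScore = 0.1) {
  return softmax(logits)
    .map((score, i) => ({ score: Math.round(100 * score) / 100, emotion: LABELS[i] }))
    .filter((r) => r.score >= minScore)
    .sort((a, b) => b.score - a.score);
}

console.log(parseEmotion([0.5, 3.2, 1.1, 0.2, 0.1]));
```

The real module would also have to prepare the model's expected input shape and normalization before execution; that part depends entirely on how the custom model was trained.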
Issue Description

Hi, I want to use a custom emotion model and gestures, and hit some observability APIs for outliers. I am trying to figure out the best way to do so using this package. Some specific questions:

```js
human.video(inputVideo); // start detection loop which continuously updates results
postDetect();            // start postDetect loop
```

Does the above look correct? Thanks
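On the loop question, one hedged way to structure `postDetect` as its own loop is to poll the shared result on an interval and only act on new results. `getResult` below is a stand-in (an assumption for illustration) for whatever exposes the library's latest result object:

```js
// Hedged sketch of a post-detection loop: poll a shared result on an
// interval instead of coupling to the detection loop itself.
// `getResult` and `handle` are hypothetical callbacks, not library APIs.
async function postDetectLoop(getResult, handle, { intervalMs = 100, maxTicks = Infinity } = {}) {
  let last = null;
  for (let tick = 0; tick < maxTicks; tick++) {
    const result = getResult();
    if (result && result !== last) { // only process results we have not seen
      handle(result);
      last = result;
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

Because the detection loop keeps updating one shared result, comparing references (or a frame counter, if the result carries one) avoids reprocessing the same frame in every tick.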