Currently, you can get an AI image component and give it an SSD model or a classification model. When you give it a classification model, it doesn't draw anything; it just gives you the data via onInference.
Task
Update AILabImage to take a third model type, "pose", and add a proof-of-concept modelInfo entry showing it can take blazepose.
Nothing needs to be drawn to the screen; just as with a classification model, the results should pop out via onInference so we can use them.
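A minimal sketch of the intended dispatch, in TypeScript. The names here (handleResults, InferenceResult, the ModelType union) are assumptions for illustration, not the actual AILabImage internals; the point is that "pose" follows the same no-draw, emit-only path as classification:

```typescript
// Sketch only: these names are hypothetical, not AILabImage's real API.
type ModelType = 'ssd' | 'classification' | 'pose';

interface InferenceResult {
  type: ModelType;
  data: unknown;
}

function handleResults(
  type: ModelType,
  data: unknown,
  onInference: (result: InferenceResult) => void,
): void {
  switch (type) {
    case 'ssd':
      // Existing behavior: detection boxes get drawn, then results emit.
      onInference({ type, data });
      break;
    case 'classification':
    case 'pose':
      // Nothing drawn; results just pop out via onInference.
      onInference({ type, data });
      break;
  }
}
```

A consumer would then receive pose keypoints the same way it already receives classification scores, e.g. `handleResults('pose', keypoints, cb)`.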
Q&A
Q: Why only AILabImage?
A: Because, at the time of this ticket, that is the only component that supports two models.
Q: Just blazepose? Or should it support movenet, too?
A: I'd love for it to support movenet, too, but that should be a stretch goal. Blazepose is the current goal.
Q: What API wraps the pose models?
A: It's pose-detection (https://github.com/tensorflow/tfjs-models/tree/master/pose-detection)
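For reference, loading BlazePose through the pose-detection package looks roughly like this. The createDetector / SupportedModels / estimatePoses calls come from that library's public API; the modelInfo shape around them is an assumption, since the ticket doesn't spell out the AILabImage modelInfo schema:

```typescript
import * as poseDetection from '@tensorflow-models/pose-detection';
import '@tensorflow/tfjs-backend-webgl';

// Hypothetical modelInfo entry; the real AILabImage schema may differ.
const modelInfo = {
  blazepose: {
    type: 'pose' as const,
    load: () =>
      poseDetection.createDetector(poseDetection.SupportedModels.BlazePose, {
        runtime: 'tfjs',
        modelType: 'lite', // 'lite' | 'full' | 'heavy'
      }),
  },
};

// Later, inside the component's inference path (requires a model download,
// so this is illustrative rather than runnable here):
//   const detector = await modelInfo.blazepose.load();
//   const poses = await detector.estimatePoses(imageElement);
//   onInference(poses); // keypoints with x, y, score, name
```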