At a quick glance, the bulk of this proposal appears to be geared towards "expression tracking" and not so much "face tracking". In fact, most of what I'd actually call "face tracking" (the position of the face and maybe a mesh) is explicitly called out as out of scope by your explainer. Similar to the earlier "Raw Camera Access" and "Image Tracking" work, where one could be built on top of the other, I'd prefer that we reserve "face tracking" (as both a spec name and a feature string) for a lower-level API that exposes face positioning or a full face mesh, from which features like these could be inferred, even if we have no intention of working on such an API right now, rather than spend the "lower-level" name on something that feels like a higher-level API.
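To make the distinction concrete, here's a rough TypeScript sketch of the two shapes I have in mind. All of the names below are illustrative placeholders of my own, not from the explainer or any spec draft:

```ts
// Hypothetical sketch only; these interface and field names are not proposed API.

// Lower-level "face tracking": where the face is, and optionally its geometry.
interface XRFaceMeshLike {
  readonly vertexPositions: Float32Array; // packed x/y/z vertex positions
  readonly indices: Uint32Array;          // triangle indices into vertexPositions
}

interface XRFaceTrackingResultLike {
  // Pose of the face relative to some reference space.
  readonly position: DOMPointReadOnly;
  readonly orientation: DOMPointReadOnly;
  // Full mesh, if the runtime provides one.
  readonly mesh?: XRFaceMeshLike;
}

// Higher-level "expression tracking": named expression weights, which could in
// principle be inferred from the pose/mesh data above.
interface XRExpressionTrackingResultLike {
  readonly weights: ReadonlyMap<string, number>; // e.g. "jawOpen" -> 0..1
}
```

It's the first shape that I'd want the "face tracking" name held for, even if nobody builds it soon; what this proposal describes looks much closer to the second.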