macOS has some pretty powerful machine learning capabilities.
What if Vimac used Core ML and the Vision framework to learn which items on the screen are clickable, and generated hints for them? This would work with any app the model had been trained on.
Might even be faster than iterating through the Accessibility APIs. :)
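To make the idea concrete, here is a rough sketch of how such a pipeline might look: capture a screenshot, run it through a Vision request backed by a Core ML object-detection model, and treat each detection as a hint candidate. `ClickableDetector` is an assumed, user-trained model (nothing like it ships with macOS or Vimac), and the hint-drawing call is hypothetical.

```swift
import Cocoa
import Vision

// Sketch only: detect "clickable" regions in a screenshot with a
// hypothetical user-trained Core ML model and report hint candidates.
func hintsFromScreenshot() {
    // Capture the visible screen (requires Screen Recording permission).
    guard let screenshot = CGWindowListCreateImage(.infinite,
                                                   .optionOnScreenOnly,
                                                   kCGNullWindowID,
                                                   .bestResolution) else { return }

    // ClickableDetector is an assumed model class generated by Xcode
    // from a trained .mlmodel file.
    guard let model = try? VNCoreMLModel(for: ClickableDetector().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        let detections = request.results as? [VNRecognizedObjectObservation] ?? []
        for observation in detections {
            // boundingBox is normalized (0-1, origin at bottom-left);
            // it would need converting to screen coordinates first.
            print("clickable candidate at \(observation.boundingBox), " +
                  "confidence \(observation.confidence)")
            // drawHint(at: observation.boundingBox)  // hypothetical Vimac call
        }
    }

    let handler = VNImageRequestHandler(cgImage: screenshot, options: [:])
    try? handler.perform([request])
}
```

Whether this beats walking the Accessibility tree would depend on model size and inference latency, but Vision requests can run on the Neural Engine, so it is not obviously slower.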