The above commit adds marker tracking. Next we need to support this rotation in the model; there currently isn't a way to rotate all four vertices about the center from this single rotation value.
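A minimal sketch of the missing model support, assuming the vertices are dot Vector2 positions (the function and parameter names here are hypothetical, not the actual model API):

```js
// Assumes dot's Vector2; the relative import path depends on where this file lives.
import Vector2 from '../../dot/js/Vector2.js';

// Rotate all four vertices of the quad about their shared center by a single delta.
function rotateQuadAboutCenter( vertexPositions, deltaRotation ) {

  // The center is the average of the four vertex positions.
  const center = vertexPositions
    .reduce( ( sum, position ) => sum.plus( position ), new Vector2( 0, 0 ) )
    .timesScalar( 1 / vertexPositions.length );

  // Rotate each vertex about the center by the change in rotation since the last frame.
  return vertexPositions.map( position => position.minus( center ).rotated( deltaRotation ).plus( center ) );
}
```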
The above commit has this working well when unconnected to the physical device. Next, we need to change device calibration when connected to the physical device, reducing the size of the modelled quad when it is as large as it can get so there is space for it to rotate without going out of the dev bounds.
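The idea, roughly: the largest calibrated quad must be able to sweep its circumscribing circle without leaving the dev bounds, so the calibrated maximum size gets scaled down accordingly. A sketch under that assumption (names hypothetical, devBounds a dot Bounds2):

```js
import Vector2 from '../../dot/js/Vector2.js';

// Returns a factor ( <= 1 ) by which to shrink the calibrated maximum quad size so that
// the circle the quad sweeps out while rotating stays inside devBounds.
function maxSizeScaleFactor( vertexPositions, devBounds ) {
  const center = vertexPositions
    .reduce( ( sum, position ) => sum.plus( position ), new Vector2( 0, 0 ) )
    .timesScalar( 1 / vertexPositions.length );

  // Radius of the circle the rotating quad sweeps out.
  const circumradius = Math.max( ...vertexPositions.map( position => position.distance( center ) ) );

  // Largest radius that fits inside the dev bounds.
  const maxRadius = Math.min( devBounds.width, devBounds.height ) / 2;

  return Math.min( 1, maxRadius / circumradius );
}
```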
I got this working today. Marker input can be enabled with the query parameter ?markerInput and tested outside of the Electron app. I updated the Electron app as well so it can be used there.
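For reference, the gating can be as simple as checking the flag with QueryStringMachine, PhET's query-parameter utility (the exact wiring in the sim may differ):

```js
// QueryStringMachine is available as a global in running sims.
if ( QueryStringMachine.containsKey( 'markerInput' ) ) {
  // Only then set up the Beholder camera and marker tracking.
}
```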
Dev version: https://phet-dev.colorado.edu/html/quadrilateral/1.0.0-dev.18/phet/quadrilateral_en_phet.html?showPointerAreas&markerInput&showModelValues
Link to Electron builds: https://drive.google.com/drive/folders/19SnDQE2STxSu2CpjqaAHEfODU7r44pLF
Enter this information at https://chev.me/arucogen/ to print the marker:
Dictionary: "Original ArUco"
Marker ID: "4"
Size, mm: "25"
If you want to test before printing, you can load the marker on your phone and try it with the sim to make sure that it causes rotation. Marker detection and rotation data are printed to the sim with ?showModelValues.
Leave plenty of white space around the marker when cutting it out; that is important for Beholder. Here is what mine looks like:
Performance is very poor. Beholder is not able to track quick movements of a marker, even when just looking for rotation. If I move the marker very slowly, Beholder keeps up pretty well, but the marker is lost with any reasonable motion.
It really only works if the marker is about a foot away from the camera.
There was a lot of noise in the rotation data; Beholder would report the rotation oscillating back and forth by ~0.05 radians. I tried to filter this out so that it wouldn't jitter when holding still, but the result is that motion doesn't look quite as smooth, since each accepted change is more than 0.05 radians.
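The filter is essentially a deadband: ignore changes smaller than the noise floor. A minimal sketch of the idea (the actual threshold and implementation in the commit may differ):

```js
const ROTATION_THRESHOLD = 0.05; // radians, roughly the noise Beholder reports at rest

let filteredRotation = 0;

// Only accept a new rotation value when it differs from the last accepted value by more
// than the noise threshold. This removes jitter at rest but quantizes real motion.
function filterRotation( rawRotation ) {
  if ( Math.abs( rawRotation - filteredRotation ) > ROTATION_THRESHOLD ) {
    filteredRotation = rawRotation;
  }
  return filteredRotation;
}
```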
If the webcam is in use by anything else (like the sim in a background tab in Chrome), it cannot be used in the Electron app.
Potential ways to improve:
All to be discussed during tomorrow's meeting, but I wanted to hand this back to @BLFiedler in case he wants to comment or review prior to the meeting.
There is an npm library for Beholder; it may be quick to try it in our Electron app to see if that performs better.
EDIT: Ran into a wall. It looks like even in Node.js the operations are tied to the DOM, so I don't understand how a Node.js app could be faster. Bailing for now.
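For the record, basic usage of the beholder-detection package looks roughly like this (from my reading of its README; verify against the package before relying on it):

```js
import Beholder from 'beholder-detection';

// Attach Beholder's video/canvas overlay to a DOM node and start the camera.
Beholder.init( '#beholder-root' );

// Poll once per animation frame.
function step() {
  Beholder.update();

  const marker = Beholder.getMarker( 4 ); // the ArUco marker ID we printed
  if ( marker.present ) {
    console.log( marker.rotation );
  }
  requestAnimationFrame( step );
}
requestAnimationFrame( step );
```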
@zepumph helped investigate ways to make Beholder faster:
1) Make the marker much bigger, so it is as if we are holding it close to the camera.
2) Change the marker perimeter options, to stop looking for smaller objects and stop looking for markers that are too big (see the config sketch after this list).
3) Add more markers, so that if one gets lost, another might remain tracked by Beholder and could be leveraged.
4) Specify exactly the markers to track. By default it tracks all of them, but we are only interested in one.
5) @zepumph showed me the Mechamarkers documentation and its usage in RaP: https://docs.google.com/document/d/1qyMFRYx6kJ3hEvd1E76CYIvcqDAI4tfhyELFZktyNpo/edit
6) Here is a link to the Mechamarkers app code, showing Beholder in use there: https://github.com/atlas-acme-lab/mechamarkers-app
7) Here is the code that allowed communication between Mechamarkers and the simulation: https://github.com/atlas-acme-lab/mechamarkers-boilerplate
8) https://www.npmjs.com/package/beholder-detection?activeTab=dependents
9) @zepumph said that infrared should work MUCH better. If we are serious about markers, maybe we should invest in that.
10) Maybe using a phone attached to the quad would be faster than a print.
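For items 2 and 4, a config sketch (the key names are my best reading of the beholder-detection docs, so treat them as assumptions to verify):

```js
Beholder.init( '#beholder-root', {
  detection_params: {
    minMarkerPerimeter: 0.2, // stop looking for smaller objects
    maxMarkerPerimeter: 0.8  // stop looking for markers that are too big
  }
} );

// Rather than iterating every detected marker, read only the one we care about.
const marker = Beholder.getMarker( 4 );
```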
To use the Mechamarkers app: build the "boilerplate" and add it as a preload. It might need to be stepped, maybe not. But it should provide access to the markers being watched in the Mechamarkers app.
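A hypothetical sketch of that preload wiring (the real boilerplate API may differ; see the mechamarkers-boilerplate repo linked above):

```js
// The built boilerplate is loaded before the sim and exposes a global, as the old
// gravity-force-lab code's references to window.Mechamarkers suggest.
const Mechamarkers = window.Mechamarkers;

function step( time ) {
  Mechamarkers.update( time ); // the "stepping" mentioned above, if it turns out to be required

  const marker = Mechamarkers.getMarker( 4 ); // hypothetical accessor for a watched marker
  if ( marker && marker.present ) {
    console.log( marker.rotation );
  }
  requestAnimationFrame( step );
}
requestAnimationFrame( step );
```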
@zepumph also mentioned that having a bigger marker is really important AND that the tape should NOT overlap the marker. Glare really makes a big impact.
Other things:
Old commits to tangible/MarkerInput that reference window.Mechamarkers are in the mechamarkers branch in gravity-force-lab.
Use "Original ArUco" at https://chev.me/arucogen/
> Specify exactly the markers to track. By default it tracks all of them, but we are only interested in one.
Feel free to make this an option to MarkerInput that you can pass in from a subtype (see the sketch below).
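A sketch of that option, using PhET's merge pattern (the option and class names here are hypothetical):

```js
import merge from '../../phet-core/js/merge.js';

class MarkerInput {
  constructor( providedOptions ) {
    const options = merge( {

      // {number[]} - ArUco marker IDs to track; by default Beholder tracks all of them
      markerIDs: [ 4 ]
    }, providedOptions );

    this.markerIDs = options.markerIDs;
  }
}

// A subtype opts in to just the markers it needs.
class QuadrilateralMarkerInput extends MarkerInput {
  constructor() {
    super( { markerIDs: [ 4 ] } );
  }
}
```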
> @zepumph said that infrared should work MUCH better.
Not just infrared, but any nicer webcam. Basically the challenge is all about motion blur. The nicer the camera, the less of that we will have, and the better the library will be at detecting the crisp squares of the markers.
While investigations continue for multiple-marker tracking, both in the browser and through an external app, we are going to take the existing Beholder system that JG & MK made an example implementation of and integrate it into the current Electron app that connects the simulation to the physical device (currently the Tangible Manipulative Quadrilateral from the CHROME lab).
We should also log anything needed by the user to connect (e.g., which marker from the ArUco library will need to be printed, etc.).