UPstartDeveloper opened 3 years ago
This will most likely require using Handpose, and setting up a project similar to the Jenga game on Handsfree.js's website.
There will be 2 main components to this feature, based on the source code of the demo project above:

1. Hand tracking, using the Handpose model (via Handsfree.js)
2. Rendering and controlling the 3D scene (via Three.js)
My personal goal for the next week will be to focus on learning Three.js so we can handle #2, then come back to read more on Handsfree.js so we can integrate the Handpose model, and then finally we'll add Bootstrap for the HTML/CSS.
Engineering Standup: January 18, 2021
Yesterday: Added the HelloCube project, which now contains the code we can use to add a rotating cube to the scene (a minimal sketch of that setup follows this update).
Today: will need to go through more of the Three.js fundamentals, and learn more about creating a scene we can use the hand controllers in (from Handsfree.js).
Blockers: the milestones left to complete Feature 1 are mainly 1) having interactive Three.js scenes, and 2) being able to move the camera in the scene. Still need to go through more of the Three.js docs before I understand that, though.
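For reference, a rotating cube in Three.js generally follows the standard starter pattern; below is a minimal sketch of that pattern (an assumption about what HelloCube contains, not a copy of its code):

```js
import * as THREE from 'three';

// Standard Three.js boilerplate: scene, camera, renderer.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  75, window.innerWidth / window.innerHeight, 0.1, 1000
);
camera.position.z = 2;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// A cube that rotates a little more on every animation frame.
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshBasicMaterial({ color: 0x00ff00 })
);
scene.add(cube);

function animate() {
  requestAnimationFrame(animate);
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
}
animate();
```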
Engineering Standup: January 21, 2021
Previous: Updated the demo HelloCube project to use CSS styles, so the UI is more user-friendly (added sidebars, and made the site more responsive).
For Now: continue through more of the Three.js fundamentals and learn more about creating an interactive scene we can use the hand controllers in (from Handsfree.js).
Blockers: Still need to go through more of the Three.js docs before I understand enough of the details.
Engineering Standup: January 31, 2021
Posting a link to the new WebXR Chrome extension for Handsfree.js for better visibility. Hopefully this will enable better gestures to be added later to the project!
No other progress to report currently.
Engineering Standup: February 7, 2021
Planning to look into loading `*.gltf` files using Three.js in the near future. Otherwise, the next step will be making a Three.js app which a user can interface with using hand gestures (most likely using Handsfree.js).
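For the glTF work, Three.js ships a GLTFLoader that handles `*.gltf`/`*.glb` files; a minimal sketch (the model path here is hypothetical):

```js
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();
const loader = new GLTFLoader();

// Kick off the async load; the parsed scene graph is attached on success.
loader.load(
  'models/scene.gltf',              // hypothetical path to the asset
  (gltf) => scene.add(gltf.scene),
  undefined,                        // optional progress callback
  (error) => console.error('Failed to load glTF:', error)
);
```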
Engineering Standup: February 14, 2021
Engineering Standup: February 17, 2021
Engineering Standup: February 20, 2021
Engineering Standup: March 7, 2021
Engineering Standup: March 10, 2021
Tried making changes through the `id` of the video element shown in the Chrome inspect tool; however, they didn't seem to affect the actual HTML element.

Engineering Standup: March 12, 2021
Engineering Standup: March 13, 2021
Worked on the Cube Space sample app, and the Inside the Cube app (located in `lookAround`).

Engineering Standup: March 14, 2021
Engineering Standup: March 20, 2021
Engineering Standup: March 27, 2021
The `handpose` model seems to have trouble getting its video stream when used with the `weboji` model. So far I am not exactly sure why this happens; however, from reading console logs and the documentation for the original Jeeliz Weboji model, I believe it's because Handsfree.js is trying to use the Weboji API to get the video stream for the Handpose model (and in version 8.4, `handpose` has a separate API for getting the video stream). The workaround for now is loading `weboji` first, and then loading in `handpose` later by using `handsfree.update` (sketched below). The only open questions are why the `handpose` model isn't showing up on the debugger, and why it can supposedly slow down the app (which can be remedied by importing the CPU-only version of TensorFlow.js to use in the backend).
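A sketch of that load order, assuming the Handsfree.js v8 API (`new Handsfree(...)`, `start()`, and `update()`); option names may differ slightly in the version we have pinned:

```js
// Start with only the face tracker enabled...
const handsfree = new Handsfree({ weboji: true });
handsfree.start();

// ...then turn on handpose afterwards via handsfree.update(), so it can set
// up its own video stream instead of going through the Weboji API.
handsfree.update({ handpose: true }, () => {
  console.log('handpose model loaded');
});
```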
Engineering Standup: March 31, 2021
The `handpose` model seems to not be enabled, even after I called the `handsfree.model.handpose.enable()` function.
Engineering Standup: April 1, 2021
Looked into the newer Handsfree.js API (which uses a model called `hands`, instead of `handpose`). Upgrading would change the `handsfree` dependency as well, so we might need to use a Node.js backend.

Engineering Standup: April 2, 2021
Notes on performance:

- TensorFlow.js can run on either the `wasm` (aka WebAssembly) or `webgl` backend.
- Tried disabling `weboji`, the face tracking model, in case that would speed up the app when using the hand tracking model, `handpose`. It did not; still very laggy.
- The lag seems tied to the `handpose` model (started a discussion in the TensorFlow.js Google group about this).
- Next: try the `wasm` backend in Handsfree.js, and maybe that improves performance (sketch after this list). Started an issue on that here.
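For reference, switching TensorFlow.js itself to the WASM backend looks roughly like this (a sketch using the standard TensorFlow.js packages; how Handsfree.js exposes this choice is a separate question):

```js
import * as tf from '@tensorflow/tfjs';
// Importing this package registers the 'wasm' backend with TensorFlow.js.
import { setWasmPaths } from '@tensorflow/tfjs-backend-wasm';

// Tell the backend where its .wasm binaries are served from.
setWasmPaths('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-wasm/dist/');

await tf.setBackend('wasm');
await tf.ready();
console.log('Active backend:', tf.getBackend()); // should print 'wasm'
```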
Engineering Standup: April 3, 2021
- Looked for an alternative to the `handpose` model that would have better performance on the CPU.
- Having dependency trouble with the `fingerpose` library, which the `handpose` model needs in order to work. This is because so far my approach has been to import it via jsDelivr; however, jsDelivr isn't always able to find files if those aren't imported with their file extension (an example of this is here).
- Next: install the `fingerpose` library locally, see if that resolves the dependency issues, and if that in turn fixes the performance issues.
Engineering Standup: April 5, 2021
Engineering Standup: April 7, 2021
- Integrated the `FaceMesh` model, and it is now available inside of the faceAndHands app.
- Switching from `weboji` to `facemesh` seems to have moved the debug window from the center of the top of the DOM down to the bottom left corner.
- While the `facemesh` model is much more responsive than `weboji`, it also seems incapable of recovering if the user moves their head too quickly out of view (such as really far to the left/right).
- The `facemesh` model does not return a Z-coordinate, at least not in the landmark module. So for now I have set the z-coordinate permanently at a value of `8`; however, this value is hard-coded and will have to be changed for every single environment we use (one possible mapping is sketched after this list).
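One possible way to apply that fixed depth when mapping a 2D landmark into the scene (a sketch; the helper name and the normalized-coordinate assumption are mine, not the app's):

```js
import * as THREE from 'three';

const FIXED_Z = 8; // hard-coded depth, since facemesh gives us no Z here

// Hypothetical helper: convert a normalized [0..1] landmark into a world
// position sitting FIXED_Z units in front of the camera.
function landmarkToWorld(landmark, camera) {
  const ndc = new THREE.Vector3(
    landmark.x * 2 - 1,    // x: [0..1] -> [-1..1]
    -(landmark.y * 2 - 1), // y is flipped between screen space and NDC
    0.5
  );
  ndc.unproject(camera);
  const dir = ndc.sub(camera.position).normalize();
  return camera.position.clone().add(dir.multiplyScalar(FIXED_Z));
}
```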
Engineering Standup: April 20, 2021
Added `MouseEvent`s for when the box is selected (`mousedown`), when it is being moved (`mousemove`), and when it is released (`mouseup`); a sketch of the wiring follows the list. Still to do:

- Update `cubes.js` so the boxes respond to those events
- Get the `handpose` and `facemesh` models working together
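The wiring presumably looks something like this (handler names are hypothetical; the real handlers will live in `cubes.js`):

```js
const canvas = renderer.domElement; // the Three.js canvas

// Hypothetical handlers: pick a box under the cursor, drag it, release it.
canvas.addEventListener('mousedown', (e) => selectBoxAt(e.clientX, e.clientY));
canvas.addEventListener('mousemove', (e) => dragSelectedBox(e.clientX, e.clientY));
canvas.addEventListener('mouseup', () => releaseSelectedBox());
```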
Engineering Standup: May 1, 2021
Found that there is no `hand.pointer` property in the version of Handsfree.js we are using, so we need to rework the `trackHand` function to use the newer API. UPDATE: for now we can just initialize the `pointer` ourselves using a JS `Map` object (sketched after the list). Still to do:

- Update `cubes.js` so the boxes respond to those events
- Get the `handpose` and `facemesh` models working together
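The `Map`-based pointer could be as simple as the sketch below (the shape and names are hypothetical):

```js
// Pointer store, keyed by hand index in case we ever track multiple hands.
const pointers = new Map();

function updatePointer(handIndex, x, y) {
  pointers.set(handIndex, { x, y }); // latest on-screen position
}

function getPointer(handIndex) {
  return pointers.get(handIndex) ?? { x: 0, y: 0 };
}
```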
Engineering Standup: May 2, 2021
Got our own `hand.pointer` working with the `handpose` model, by reading the fingertip position out of the `handpose.annotations.indexFinger[3]` array (see the sketch after the list). Still to do:

- Update `cubes.js` so the boxes respond to those events
- Get the `handpose` and `facemesh` models working together
- Look into `handpose.model.three`
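In the TensorFlow.js handpose model, `annotations.indexFinger` holds four `[x, y, z]` keypoints running from the base of the finger to the tip, so index `3` is the fingertip. A sketch of the read (the exact data path through Handsfree.js is an assumption):

```js
// Assumed location of the handpose annotations on the handsfree instance.
const annotations = handsfree.data?.handpose?.annotations;
if (annotations) {
  const [x, y, z] = annotations.indexFinger[3]; // fingertip keypoint
  updatePointer(0, x, y); // reuse the Map-based pointer from May 1
}
```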
Quick update: for now I'll switch to highlighting a single pointer rather than the whole `handpose` model - it might keep things simpler for the user since they'll know that's what the app primarily cares about when detecting their gestures.
We'll need to UI/UX test this to confirm of course, date TBD.
Engineering Standup: May 6, 2021
Worked on highlighting a single pointer for the `handpose` model, using one of the Three.js tutorials. The pointer position still comes from the `handpose.annotations.indexFinger[3]` array. Still to do:

- Update `cubes.js` so the boxes respond to those events - TEST this tomorrow
- Get the `handpose` and `facemesh` models working together
- Look into `handpose.model.three`
Resource: another good read for managing multiple canvases in Three.js - might help with optimizing performance in the future.
Engineering Standup: May 7, 2021
Worked on the `mousemove` event handler in `cubes.js`, driven by the `handpose.annotations.indexFinger[3]` array (a picking sketch follows the list). The goal is to use the `handpose` model in place of the mouse, to be able to select, drag, and de-select the cubes. Still to do:

- Get the `handpose` and `facemesh` models working together
- Look into `handpose.model.three`
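A hedged sketch of the picking side of that `mousemove` handler (names are hypothetical; the real code is in `cubes.js`):

```js
import * as THREE from 'three';

const raycaster = new THREE.Raycaster();
const pointerNdc = new THREE.Vector2();

// Cast a ray through the pointer position and return the first cube it
// hits, or null if the pointer isn't over any cube.
function pickCube(clientX, clientY, camera, cubes) {
  pointerNdc.x = (clientX / window.innerWidth) * 2 - 1;
  pointerNdc.y = -(clientY / window.innerHeight) * 2 + 1;
  raycaster.setFromCamera(pointerNdc, camera);
  const hits = raycaster.intersectObjects(cubes);
  return hits.length > 0 ? hits[0].object : null;
}
```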
Quick update: no progress as of today, except I discovered that the DragControls class in Three.js could be a useful tool for implementing mouse-based object drag-and-drop.
Engineering Standup: May 9, 2021
Hit an error in `cubes.js`, on the import line:
import ThreeDragger from 'three-dragger';
And the error message on the Inspect tool says:
Uncaught TypeError: Failed to resolve module specifier "three-dragger". Relative references must start with either "/", "./", or "../".
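This is the browser's native ES-module loader rejecting a bare specifier: without a bundler or an import map, an `import` path must begin with `/`, `./`, or `../`. A minimal sketch of one fix, assuming the package is installed locally (the exact file path into the package is hypothetical):

```js
// Point the import at an actual file instead of the bare package name; where
// the module build lives depends on three-dragger's package layout.
import ThreeDragger from './node_modules/three-dragger/dist/three-dragger.esm.js';
```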
Still to do:

- Fix the `mousemove` event handlers in `cubes.js`, driven by the `handpose.annotations.indexFinger[3]` array
- Use the `handpose` model in place of the mouse, to be able to select, drag, and de-select the cubes
- Get the `handpose` and `facemesh` models working together
- Look into `handpose.model.three`
Engineering Standup: May 11, 2021
Tried to get dragging working with `DragControls`, using the `DragControls` that come with Three.js. Turning off the `facemesh` model didn't seem to help either, and neither did calling the `activate` function on the `controls` object. Next idea: drop the `PickHelper` and try making all the cubes' responses to the mouse happen w/ the `DragControls`.
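For reference, the intended `DragControls` usage is roughly the sketch below (variable names are mine; assumes the cubes use a material with an emissive channel, e.g. `MeshPhongMaterial`):

```js
import { DragControls } from 'three/examples/jsm/controls/DragControls.js';

// cubes: an array of THREE.Mesh objects already in the scene.
const controls = new DragControls(cubes, camera, renderer.domElement);

// Tint a cube while it's grabbed, and restore it on release.
controls.addEventListener('dragstart', (event) => {
  event.object.material.emissive.set(0x444444);
});
controls.addEventListener('dragend', (event) => {
  event.object.material.emissive.set(0x000000);
});
```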
Engineering Standup: May 12, 2021
Still stuck on the `DragControls`.
Build out the first feature (tests then feature)
User story: User is able to move their hand in front of the webcam, to see the controller move on the screen.