UPstartDeveloper / headsetsGoodbye

Develop WebVR apps with nothing but a camera. Made w/ love and Google MediaPipe.

Feature 1 #1

Open UPstartDeveloper opened 3 years ago

UPstartDeveloper commented 3 years ago

Build out the first feature (tests then feature)

User story: the user can move their hand in front of the webcam and see the controller move on the screen.

UPstartDeveloper commented 3 years ago

This will most likely require using Handpose, and setting up a project similar to the Jenga game on Handsfree.js's website.

UPstartDeveloper commented 3 years ago

There will be 3 main components to this feature, based on the source code of the demo project above:

  1. Need a JS module to utilize Handsfree.js's Handpose model - probably not the hardest part, aside from the gestures I'll need to configure
  2. Need to use Three.js to set up the scene, e.g. the ball that the user picks up, moves around, drops, and stacks on top of other things with their hand controller
  3. Need to include HTML/CSS so the UI looks user-friendly and follows UX conventions.
UPstartDeveloper commented 3 years ago

My personal goal for the next week will be to focus on learning Three.js so we can handle #2, then come back to read more on Handsfree.js so we can integrate the Handpose model, and then finally we'll add Bootstrap for the HTML/CSS.

UPstartDeveloper commented 3 years ago

Engineering Standup: January 18, 2021

  1. Yesterday: Added the HelloCube project, which now contains the code we can use to add a rotating cube to the scene (a minimal sketch of the idea follows this list).

  2. Today: will need to go through more of the Three.js fundamentals, and learn more about creating a scene we can use the hand controllers in (from Handsfree.js).

  3. Blockers: the milestones left to complete Feature 1 are mainly 1) having interactive Three.js scenes, and 2) being able to move the camera in the scene. Still need to go through more Three.js docs before I understand that, though.
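
A minimal sketch of the rotating-cube idea behind HelloCube, for reference (illustrative only, not the repo's actual code):

    import * as THREE from 'three';

    // Basic scene, camera, and renderer setup
    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 100);
    camera.position.z = 3;

    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    // A single cube mesh
    const cube = new THREE.Mesh(
      new THREE.BoxGeometry(1, 1, 1),
      new THREE.MeshNormalMaterial()
    );
    scene.add(cube);

    // Render loop: rotate the cube a little each frame
    function animate() {
      requestAnimationFrame(animate);
      cube.rotation.x += 0.01;
      cube.rotation.y += 0.01;
      renderer.render(scene, camera);
    }
    animate();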

UPstartDeveloper commented 3 years ago

Engineering Standup: January 21, 2021

  1. Previous: Updated the demo HelloCube project to use CSS styling, so the UI is more user-friendly (adding sidebars, and making the site more responsive).

  2. For Now: continue through more of the Three.js fundamentals and learn more about creating an interactive scene we can use the hand controllers in (from Handsfree.js).

  3. Blockers: Still need to go through more Three.js docs before I understand enough of the details.

UPstartDeveloper commented 3 years ago

Engineering Standup: January 31, 2021

Posting a link to the new WebXR Chrome extension for Handsfree.js for better visibility. Hopefully this will enable better gestures to be added later to the project!

No other progress to report currently.

UPstartDeveloper commented 3 years ago

Engineering Standup: February 7, 2021

  1. Previous: learning from the sample Jenga game on Handsfree.js.
  2. Today: added HelloAnimations, a demo Three.js app that includes interactive controls (press keys 1-8 to make the characters do different movements).
  3. Blockers: this demo app helped me learn some new things about the GLTF file format, so I will continue learning how to properly load *.gltf files using Three.js in the near future. Otherwise, the next step will be making a Three.js app which a user can interface with using hand gestures (most likely using Handsfree.js).
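
Since loading *.gltf files is the next learning step, here is a minimal sketch of the standard GLTFLoader flow in Three.js (the model path is a placeholder, not a file in this repo):

    import * as THREE from 'three';
    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

    const scene = new THREE.Scene();
    const loader = new GLTFLoader();

    // Load a glTF asset and add its scene graph to ours
    loader.load(
      'models/character.gltf', // placeholder path
      (gltf) => {
        scene.add(gltf.scene);
        // Animations ship as AnimationClips; play them with an AnimationMixer
        // (the mixer still needs mixer.update(delta) called from the render loop)
        const mixer = new THREE.AnimationMixer(gltf.scene);
        gltf.animations.forEach((clip) => mixer.clipAction(clip).play());
      },
      undefined,
      (error) => console.error('Failed to load glTF:', error)
    );
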
UPstartDeveloper commented 3 years ago

Engineering Standup: February 14, 2021

  1. Previous: learning about how to do object picking in Three.js
  2. Today: adding gestures so we can make hand interactivity a part of Feature 1 for IQ3's product.
  3. Blockers: need to determine how to implement our custom hand controllers by extending the Tensorflow Handpose model.
UPstartDeveloper commented 3 years ago

Engineering Standup: February 17, 2021

  1. Previous: added a sample app to the repo for picking objects
  2. Next Steps: integrating Handsfree.js with the WebXR extension, to enable head gestures for feature 1
  3. Blockers: finding a way to run the sample app from the Handsfree.js repo, built to handle this exact task.
UPstartDeveloper commented 3 years ago

Engineering Standup: February 20, 2021

  1. Previous: downloaded the Handsfree.js Chrome extension (in dev mode) on my personal laptop
  2. Today: added a demo of Handsfree.js face tracking: "Inside the Cube"
  3. Next Steps: finding a way to use multiple Handsfree.js models (for tracking the user) at once, and having them manipulate objects.
UPstartDeveloper commented 3 years ago

Engineering Standup: February 20, 2021

  1. Previous: studied the Handsfree.js docs on the Weboji model, which was utilized in the "Inside the Cube" demo.
  2. Today: Initialized a new demo to do the same thing as the last demo, except it uses Three.js instead of AFrame.
  3. Next Steps: improving 1) smoothness of the camera movements, 2) the frame rate, and 3) possibly the responsiveness of Google MediaPipe. Will be keeping track in #5
UPstartDeveloper commented 3 years ago

Engineering Standup: March 7, 2021

  1. Previous: experimented with different tweening values through trial and error
  2. Today: released "Cube Space", a new sample app for connecting Handsfree face tracking with Three.js.
  3. Blockers: not sure how to balance responsiveness and accuracy - we can make the face tracking more responsive by decreasing the time spent tweening, but that makes the camera tracking more unstable (which is probably the fault of the Weboji model in Handsfree.js). On the other hand, making the face tracking too stable is also bad UX, since it can get so slow that the user feels it has stopped responding.
  4. Next Steps: move ahead with object interactions for now, by integrating the Handpose model from Handsfree.js into the app. This way we'll be able to manipulate the coordinates of objects using a finger caster or something like that. If there's time, we can also add kinematics by using Physijs, a plugin for Three.js.
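
For illustration, the responsiveness/stability tradeoff in the blocker above mostly comes down to a single smoothing factor. This is just a sketch of that idea, not the actual Cube Space tween code, and the constant is made up:

    // Exponential smoothing ("tweening") of the camera toward the latest
    // head-tracking reading. Closer to 1 -> snappier but jittery;
    // closer to 0 -> stable but laggy.
    const SMOOTHING = 0.15; // illustrative value

    function smoothCamera(camera, targetRotation) {
      camera.rotation.x += (targetRotation.x - camera.rotation.x) * SMOOTHING;
      camera.rotation.y += (targetRotation.y - camera.rotation.y) * SMOOTHING;
      camera.rotation.z += (targetRotation.z - camera.rotation.z) * SMOOTHING;
    }
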
UPstartDeveloper commented 3 years ago

Hey team, I've been up to some research I wanted to share -

This document here includes my notes on XR Accessibility after attending a talk from XR Access, a group that shares leading practices to make XR experiences widely accessible.

UPstartDeveloper commented 3 years ago

Engineering Standup: March 10, 2021

  1. Previous: researched Microsoft Mesh and attended a meetup on XR Accessibility
  2. Today: updated "Cube Space" to show a visual debugger
  3. Blockers: struggling with how to resize the video stream in the debugger. So far my approach has been to use CSS height and width properties (targeting the id of the video element shown in the Chrome inspect tool), but they didn't seem to affect the actual HTML element.
  4. Next Steps: will move ahead in researching Microsoft Mesh, since it's another XR platform that seems very promising, and we might need to adjust our project to deal with this competing solution.
UPstartDeveloper commented 3 years ago

Engineering Standup: March 12, 2021

  1. Previous: documented notes on Microsoft Mesh and the XR industry as a whole in an internal document
  2. Today: updated "Cube Space" to show a visual debugger at the top of the screen, and resized it to be smaller
  3. Blockers: need to investigate why the app is inverting the movements of the user
  4. Next Steps: will take another look at the Handsfree.js tutorial to investigate the inverted movements, and then add object interactions with the Handpose model.
UPstartDeveloper commented 3 years ago

Engineering Standup: March 13, 2021

  1. Previous: experimented with different expressions to handle the camera movement for the Cube Space sample app
  2. Today: fixed camera movement in that app
  3. Blockers: not sure why I need to multiply by -1 in the x and y directions to get it working, since that's not what was needed in the Inside the Cube app (the relevant code is in lookAround).
  4. Next Steps: going to add object interactions. The first step will be learning how to code up controller classes for the cubes in Handsfree.js/Three.js
UPstartDeveloper commented 3 years ago

Engineering Standup: March 14, 2021

  1. Previous: found an oldie-but-goodie video from SpaceX in 2013, when they implemented an app very similar to this project.
  2. Today: Added some pseudocode for how we might implement Object Interactions on this Google Doc.
  3. Blockers: Will need to read up more on how to implement custom gestures in Handsfree.js
  4. Next Steps: Will look at the sample Handsfree Jenga game project by Oz Ramos on Glitch.
UPstartDeveloper commented 3 years ago

Engineering Standup: March 20, 2021

  1. Previous: read more into how plugins work in Handsfree.js, and realized we have been doing it all wrong (i.e. inefficiently) thus far.
  2. Today: Improved modularity of the Cube Space app, and hopefully now the face tracking is a whole lot less jittery.
  3. Blockers: I will need to look into why the cubes follow your face, rather than staying where they should in the scene. It could possibly be the inverted camera movements that were "fixed" earlier - in which case we'll just revert the change, by removing the -5 in lines 69-70 of the face tracking module of Cube Space.
  4. Next Steps: Add object interactions using newfound knowledge of plugins - notes on Google Doc
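
For context, registering per-frame logic as a Handsfree.js plugin looks roughly like this. This is a hedged sketch based on my reading of the v8 docs (handsfree.use runs a callback with the latest model data on each frame); the weboji field names and the camera mapping are assumptions, and camera is the Three.js camera from the scene:

    // Hedged sketch of the plugin pattern, assuming Handsfree.js v8's
    // handsfree.use(name, callback) API and the documented weboji fields.
    const handsfree = new Handsfree({ weboji: true });

    handsfree.use('faceTracking', ({ weboji }) => {
      if (!weboji || !weboji.isDetected) return;
      // weboji.rotation is the head's [pitch, yaw, roll]; map it onto the camera
      camera.rotation.x = weboji.rotation[0];
      camera.rotation.y = weboji.rotation[1];
    });

    handsfree.start();
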
UPstartDeveloper commented 3 years ago

Engineering Standup: March 27, 2021

  1. Previous: realized that before object interactions are possible, we need to first integrate multiple models into 1 app. The sequence we therefore need to follow is 1) combine the weboji and handpose models (#13), 2) add a plugin for object picking, 3) add another plugin for dragging picked objects
  2. Today: I added boilerplate code for combining the handpose and weboji models. There may still be errors when actually using them, though.
  3. Blockers: For some reason the handpose model seems to have trouble getting its video stream when used with the weboji model. I am not exactly sure why this happens yet, but from reading console logs and the original Jeeliz Weboji documentation, I believe it's because Handsfree.js is trying to use the Weboji API to get the video stream for the Handpose model (and in version 8.4, handpose has a separate API for getting the video stream).
  4. UPDATE: we fixed this by not loading both models at the same time, but instead loading weboji first and then loading handpose later using handsfree.update (sketched after this list). The only questions now are why the handpose model isn't showing up in the debugger, and whether it can slow down the app (which can supposedly be remedied by importing the CPU-only version of Tensorflow.js to use in the backend).
  5. Next Steps: Going to keep following the notes on Google Doc. Next step is to try to add event listeners to the cubes, and a gesture (using a Handsfree.js plugin) to grab them.
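
A sketch of the staggered-load fix described in the UPDATE above, assuming the documented handsfree.update() config API:

    // Start with only the weboji (face) model...
    const handsfree = new Handsfree({ weboji: true });
    handsfree.start();

    // ...then, once it is up, switch the handpose model on instead of
    // loading both models at the same time.
    handsfree.update({ handpose: true });
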
UPstartDeveloper commented 3 years ago

Engineering Standup: March 31, 2021

  1. Previous: read through more of Oz Ramos' example project using 3D hand tracking, the Jenga Game, on Glitch.
  2. Today: Initialized the 3D hand tracking in our own project.
  3. Blockers: For some reason the handpose model seems to not be enabled, even after I called the handsfree.model.handpose.enable() function.
  4. Next Steps: Going to take another look at the documentation on Handsfree.js.org.
UPstartDeveloper commented 3 years ago

Engineering Standup: April 1, 2021

  1. Previous: read through more of the Handsfree.js documentation.
  2. Today: Fixed the error where the hand was not being recognized. The source code for the hand tracking script was using a deprecated name to reference the model (hands, instead of handpose).
  3. Blockers: The app is a lot slower now that it is running Tensorflow.js. Will probably need to switch to using the CPU-only version of this package. This will probably require overriding the handsfree dependency as well, so we might need to use a Node.js backend.
  4. Next Steps: Going to take another look at the Tensorflow.js page on NPM, to see how to import it as easily as possible.
UPstartDeveloper commented 3 years ago

Engineering Standup: April 2, 2021

  1. Previous: read more on the documentation of the Handpose model itself on GitHub, and learned a couple things:
    1. The Tensorflow.js library itself has separate backends for CPU, WebGL, and wasm aka WebAssembly.
    2. The Handpose model only allows wasm or webgl backends.
    3. In the Handsfree.js library, you are only allowed to use the webgl backend.
  2. Today: I turned off weboji, the face tracking model, in case that would speed up the app when using the hand tracking model, handpose. It did not; the app is still very laggy.
  3. Blockers: I think we may need to put object interactions on hold until we find a way to improve the performance of the hand tracking model. Some of the options include:
    1. Add support for a CPU/possibly WebGPU backend to the handpose model (started a discussion in the Tensorflow.js Google group about this)
    2. Maybe add support for the Handpose model to use the wasm backend in Handsfree.js, and maybe that improves performance. Started an issue on that here
    3. Throw the handpose model out entirely. Try to make object interactions work with the 2D hands model, or a model made in another frontend deep learning library entirely. We would start with client-side frameworks for the web, like perhaps ConvNetJS, and if need be maybe even go outside the browser and see if it's worth doing this on mobile, desktop, etc.
  4. Next Steps: Going to try one more thing here, no reason to give up just yet:
    1. Going to see if we can activate the CPU backend a different way using environment variables, as described in this Stack Overflow answer.
    2. If that fails, we'll see if we can make two separate buttons to activate only face or hands tracking for now, and see if we can work on another feature in the meantime.
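
For reference, this is what switching backends looks like in plain Tensorflow.js (using the public tf.setBackend / tf.ready API); whether Handsfree.js will actually let the handpose model run on anything other than webgl is exactly the open question above:

    import * as tf from '@tensorflow/tfjs';
    import '@tensorflow/tfjs-backend-wasm'; // registers the wasm backend

    async function useWasmBackend() {
      await tf.setBackend('wasm');
      await tf.ready();
      console.log('Active backend:', tf.getBackend());
    }
    useWasmBackend();
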
UPstartDeveloper commented 3 years ago

Engineering Standup: April 3, 2021

  1. Previous: read the Stack Overflow answer above, and learned how to import modules via a CDN and how to use jsDelivr.
  2. Today: I subclassed a version of the handpose model that should have better performance on the CPU.
  3. Blockers: However, I have not been able to use it in the app yet, due to some errors in importing the fingerpose library, which the handpose model needs in order to work. So far my approach has been to import it via jsDelivr; however, it isn't always able to find files if they aren't imported with their file extension (an example of this is here).
  4. Next Steps: going to try to just save the entire fingerpose library locally, see if that resolves the dependency issues, and if that in turn fixes the performance issues.
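
For reference, the jsDelivr pattern being used is to import a pinned package version by its full file path, extension included. Three.js is shown below because its module build path is well known; the fingerpose import needs the same treatment once the right file inside that package is found:

    // Import a specific versioned file (with its extension) straight from the CDN
    import * as THREE from 'https://cdn.jsdelivr.net/npm/three@0.128.0/build/three.module.js';
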
UPstartDeveloper commented 3 years ago

Engineering Standup: April 5, 2021

  1. Previous: read the documentation for eye-tracking models such as WebGazer.js, and for the FaceMesh model from Tensorflow.js.
  2. Today: I added another basic demo app for just the FaceMesh model, to see how it tracks "landmarks", which are the coordinates of different parts of a face.
  3. Blockers: still not sure if this model will be able to run smoothly when also running inside of a Three.js environment.
  4. Next Steps: going to focus on emotion detection this week, and will try to reduce the load on the CPU by using FaceMesh for both face tracking and emotion display, as in issue #12
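
A minimal sketch of running the FaceMesh model directly from its Tensorflow.js package and reading the landmarks, based on the @tensorflow-models/facemesh API (camera setup and error handling omitted; the landmark index is an assumption for illustration):

    import * as facemesh from '@tensorflow-models/facemesh';

    async function trackFace(video) {
      const model = await facemesh.load();
      const predictions = await model.estimateFaces(video);
      if (predictions.length > 0) {
        // scaledMesh is an array of ~468 [x, y, z] landmark coordinates
        const noseTip = predictions[0].scaledMesh[1]; // index 1 ~ nose tip (assumed)
        console.log('Nose tip at', noseTip);
      }
    }
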
UPstartDeveloper commented 3 years ago

Engineering Standup: April 7, 2021

  1. Previous: N/A
  2. Today: Re-implemented face tracking using the FaceMesh model, and it is now available inside of the faceAndHands app.
  3. Blockers:
    1. for some reason the change of model from weboji to facemesh seems to have moved the debug window from the top center of the DOM down to the bottom-left corner.
    2. Also not really a blocker, but while the facemesh model is much more responsive than weboji, it also seems incapable of recovering if the user moves their head too quickly out of view (such as really far to the left/right).
    3. Lastly, the facemesh model does not return a Z-coordinate, at least not in the landmark module. So for now I have set the z-coordinate permanently at a value of 8 - however, this value is hard-coded and will have to be changed for every different environment we use.
  4. Next Steps: going to clarify how emotion detection and eye tracking will work, work on those, and then eventually circle back to hand tracking.
UPstartDeveloper commented 3 years ago

Engineering Standup: April 20, 2021

  1. Previous: worked on #12
  2. Today: initialized functions for dragging the box across the screen.
  3. Blockers: not sure how to ensure that one and only one box is selected at a given time.
  4. Next Steps:
    1. Implement plugin for hand tracking - fire MouseEvents for when the box is selected (mousedown), when it is being moved (mousemove), and when it is released (mouseup). (A sketch of this follows the list.)
    2. Add functions in cubes.js so the boxes respond to those events
    3. Work on the performance of the handpose and facemesh model working together
    4. highlight wherever the hand is on the screen at any given moment
    5. move the debuggers to the top of the screen, and as a stretch overlay them
    6. Update the GIF on the home page, to match the UI updates
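
A hedged sketch of next-step 1 above: a Handsfree.js plugin that turns the tracked index fingertip into synthetic MouseEvents for cubes.js to consume. Here handsfree is the already-created instance, renderer is the Three.js renderer assumed to be in scope, and the helpers toScreenX/toScreenY and isPinching are hypothetical placeholders:

    handsfree.use('handPointer', ({ handpose }) => {
      if (!handpose || !handpose.annotations) return;
      // indexFinger[3] is the fingertip landmark; its first two values are x, y
      const [x, y] = handpose.annotations.indexFinger[3];
      const type = isPinching(handpose) ? 'mousedown' : 'mousemove'; // placeholder gesture check
      renderer.domElement.dispatchEvent(
        new MouseEvent(type, {
          clientX: toScreenX(x), // placeholder: map model coords to pixels
          clientY: toScreenY(y),
          bubbles: true,
        })
      );
    });
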
UPstartDeveloper commented 3 years ago

Engineering Standup: May 1, 2021

  1. Previous: worked on #12
  2. Today: re-initialized functions for dragging the box across the screen.
  3. Blockers: the example of the Jenga game seems to use a deprecated API. In particular, I will need to find the property that has replaced hand.pointer in the version of Handsfree.js we are using.
  4. Next Steps:
    1. Refactor the trackHand function to use the newer API. UPDATE: for now we can just initialize the pointer ourselves using a JS Map object.
    2. Add functions in cubes.js so the boxes respond to those events
    3. Work on the performance of the handpose and facemesh model working together
    4. highlight wherever the hand is on the screen at any given moment
    5. move the debuggers to the top of the screen, and as a stretch overlay them
    6. Update the GIF on the home page, to match the UI updates
UPstartDeveloper commented 3 years ago

Engineering Standup: May 2, 2021

  1. Previous: fixed the issue in initializing hand.pointer
  2. Today: refreshed memory on how to do object picking, so we can highlight the pointer
  3. Blockers: we'll need to look more at the Three.js docs, to see how we can color the point on the 2D screen where the index finger is located. Then we can show that the user's hand is being detected by the handpose model the way we think it is.
  4. Next Steps:
    1. highlight wherever the hand is on the screen at any given moment - for this we can use the first two values in the handpose.annotations.indexFinger[3] array
    2. Add functions in cubes.js so the boxes respond to those events
    3. Work on the performance of the handpose and facemesh model working together
      1. one idea might be reducing the number of unused Three.js objects, such as those in handpose.model.three
    4. move the debuggers to the top of the screen, and as a stretch overlay them
    5. Update the GIF on the home page, to match the UI updates
UPstartDeveloper commented 3 years ago

Quick update: for now I'll switch to highlighting a single pointer rather than the whole handpose model - it might keep things simpler for the user since they'll know that's what the app primarily cares about when detecting their gestures.

We'll need to UI/UX test this to confirm of course, date TBD.

UPstartDeveloper commented 3 years ago

Engineering Standup: May 6, 2021

  1. Previous: read through docs and tutorials on how picking works in Three.js
  2. Today: shelved making the pointer appear for now, added boilerplate code to handle object picking based on the handpose model, using one of the Three.js tutorials (a minimal sketch of the pattern is included below).
  3. Blockers: need to make sure we come up with a unique event for calling the object picker, so it doesn't confuse a regular mouse movement or finger touch on the screen for the handpose model.
  4. Next Steps:
    1. highlight wherever the hand is on the screen at any given moment - for this we can use the first two values in the handpose.annotations.indexFinger[3] array
    2. Add functions in cubes.js so the boxes respond to those events - TEST this tomorrow
    3. Work on the performance of the handpose and facemesh model working together
      1. one idea might be reducing the number of unused Three.js objects, such as those in handpose.model.three
    4. move the debuggers to the top of the screen, and as a stretch overlay them
    5. Update the GIF on the home page, to match the UI updates
    6. see if we need to add additional EventListeners for mobile users?

Resource: another good read for managing multiple canvases in Three.js - might help with optimizing performance in the future.
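
For reference, the object-picking boilerplate added today presumably follows the Raycaster pattern from the Three.js picking tutorial; a minimal sketch of that pattern (pointer coordinates are expected in normalized device coordinates, -1..1):

    import * as THREE from 'three';

    class PickHelper {
      constructor() {
        this.raycaster = new THREE.Raycaster();
        this.pickedObject = null;
      }

      pick(normalizedPosition, scene, camera) {
        // Cast a ray from the camera through the pointer position
        this.raycaster.setFromCamera(normalizedPosition, camera);
        const intersects = this.raycaster.intersectObjects(scene.children);
        this.pickedObject = intersects.length ? intersects[0].object : null;
        return this.pickedObject;
      }
    }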

UPstartDeveloper commented 3 years ago

Engineering Standup: May 7, 2021

  1. Previous: read through docs and tutorials on how mouse events work in JS
  2. Today: initialized an event handler to drag the cubes via the mouse - causes the cubes to flash red/yellow while selected.
  3. Blockers: for some reason the event handler is not actually updating the coordinates of the cube object itself, and the mouse coordinates also need to be normalized (see the sketch after this list)
  4. Next Steps:
    1. Fix the mousemove event handler in cubes.js
    2. highlight wherever the hand is on the screen at any given moment - for this we can use the first two values in the handpose.annotations.indexFinger[3] array
    3. use the handpose model in place of the mouse, to be able to select, drag, and de-select the cubes.
    4. Work on the performance of the handpose and facemesh model working together
      1. one idea might be reducing the number of unused Three.js objects, such as those in handpose.model.three
    5. move the debuggers to the top of the screen, and as a stretch overlay them
    6. Update the GIF on the home page, to match the UI updates
    7. see if we need to add additional EventListeners for mobile users?
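
A sketch of the normalization mentioned in the blocker: convert a mouse event's pixel position into the -1..1 normalized device coordinates that the Three.js raycaster expects (canvas here is the renderer's DOM element):

    function getNormalizedPointer(event, canvas) {
      const rect = canvas.getBoundingClientRect();
      return {
        x: ((event.clientX - rect.left) / rect.width) * 2 - 1,
        y: -(((event.clientY - rect.top) / rect.height) * 2 - 1), // y axis is flipped
      };
    }

    canvas.addEventListener('mousemove', (event) => {
      const pointer = getNormalizedPointer(event, canvas);
      // hand `pointer` to the picking / dragging code here
    });
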
UPstartDeveloper commented 3 years ago

Quick update: no progress as of today, except I discovered that the DragControls class in Three.js could be a useful tool for implementing mouse-based object drag-and-drop.
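
For reference, the usage pattern looks roughly like this (a sketch, assuming cubes, camera, and renderer already exist, and that the cube materials have an emissive channel):

    import { DragControls } from 'three/examples/jsm/controls/DragControls.js';

    const controls = new DragControls(cubes, camera, renderer.domElement);

    // Tint a cube while it is being dragged
    controls.addEventListener('dragstart', (event) => {
      event.object.material.emissive.set(0x333333);
    });
    controls.addEventListener('dragend', (event) => {
      event.object.material.emissive.set(0x000000);
    });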

UPstartDeveloper commented 3 years ago

Engineering Standup: May 9, 2021

  1. Previous: read more on the DragControls class provided by Three.js
  2. Today: found two alternatives to the above tool, so we have more options: ThreeDragger and three-dragcontrols. Both of these were created by Qingrong Ke.
  3. Blockers: currently looking into using ThreeDragger as well, and I will need to look more into how and where NPM installs Node dependencies. Even though I installed the package, we won't be able to use it unless the code references the module by a path the browser can resolve. The specific error arises at the top of cubes.js, on the import line (a possible workaround is sketched after this list):
    import ThreeDragger from 'three-dragger';

    And the error message on the Inspect tool says:

    Uncaught TypeError: Failed to resolve module specifier "three-dragger". Relative references must start with either "/", "./", or "../".
  4. Next Steps:
    1. Fix the drag-n-drop and mousemove event handlers in cubes.js
    2. highlight wherever the hand is on the screen at any given moment - for this we can use the first two values in the handpose.annotations.indexFinger[3] array
    3. use the handpose model in place of the mouse, to be able to select, drag, and de-select the cubes.
    4. Work on the performance of the handpose and facemesh model working together
      1. one idea might be reducing the number of unused Three.js objects, such as those in handpose.model.three
    5. move the debuggers to the top of the screen, and as a stretch overlay them
    6. Update the GIF on the home page, to match the UI updates
    7. see if we need to add additional EventListeners for mobile users?
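
A possible workaround for the bare-specifier error above, sketched under the assumption that the package's built file sits somewhere under node_modules (the exact filename below is a guess and would need to be checked against what NPM actually installed); an import map that aliases 'three-dragger' would be another option:

    // Reference the installed module by a path the browser can resolve,
    // instead of the bare specifier that triggered the TypeError.
    import ThreeDragger from './node_modules/three-dragger/dist/three-dragger.js';
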
UPstartDeveloper commented 3 years ago

Engineering Standup: May 11, 2021

  1. Previous: read more on tools to use for drag-n-drop beyond DragControls.
  2. Today: experimented with implementing this using vanilla JavaScript, as discussed in this blog post.
  3. Blockers: in the spirit of Thomas Edison, I have found more ways to not implement this feature:
    1. three-dragcontrols seems to just be a clone of the DragControls that come with Three.js.
    2. ThreeDragger doesn't look like a good solution anymore: because its dependencies aren't solely relative, it would be unrealistic to use it without our own server and without storing all the source files we'd ever need on our own machine.
    3. turning off the cube rotation and/or the facemesh model didn't seem to help either. Neither did calling the activate function on the controls object.
    4. Also didn't work: removing use of the PickHelper and trying to make all of the cubes' responses to the mouse happen with the DragControls.
  4. Next Steps: (same as above).
UPstartDeveloper commented 3 years ago

Engineering Standup: May 12, 2021

  1. Previous: read more on tools to use for drag-n-drop beyond DragControls.
  2. Today: experimented with going back to object picking.
  3. Blockers: object picking is working as a way to drag the cube; however, it only moves by an inch or so.
    1. I believe this is happening because the function is not being called continually, so the way forward might be to loop this function somehow (without interrupting the main render loop, of course) - see the sketch after this list.
    2. And then of course we also need to make sure we edit the XYZ coordinates that the cube object is actually storing, not just temporarily animate it to a specific point in 3D space (which seems to be what the tweening is doing).
  4. Next Steps: (same as above).
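
A hedged sketch of the "call it every frame" idea from blocker 1, reusing a PickHelper-style picker from earlier: run the pick inside the existing render loop and write the result into the cube's own position so the change persists, rather than only tweening it there temporarily (pickHelper, pointer, scene, camera, and renderer are assumed from the surrounding app code, and the fixed distance is illustrative):

    function render() {
      const picked = pickHelper.pick(pointer, scene, camera);
      if (picked) {
        // Persist the new coordinates on the object itself: keep the picked cube
        // at a fixed distance along the pick ray (illustrative choice)
        const ray = pickHelper.raycaster.ray;
        picked.position.copy(ray.origin).addScaledVector(ray.direction, 10);
      }
      renderer.render(scene, camera);
      requestAnimationFrame(render);
    }
    requestAnimationFrame(render);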