UPstartDeveloper / headsetsGoodbye

Develop WebVR apps with nothing but a camera. Made w/ love and Google MediaPipe.

Feature 3: Emotion Detection and Animations #12

Open · UPstartDeveloper opened this issue 3 years ago

UPstartDeveloper commented 3 years ago

"One other idea to consider that could be quite powerful. Is it possible to extract facial expressions to drive an avatar? Look at the face mesh in the Mediapipe Google example.

Extract a fixed list of facial expressions: happy, sad, curious, surprise, etc. ---> map to predefined face animations. Here is a threejs example (starting point) https://threejs.org/examples/#webgl_animation_skinning_morph
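
As a rough illustration of that mapping idea (not code from this repo), the lookup below assumes hypothetical expression labels and animation clip names; the real names would depend on the detection model and on the avatar's GLTF file:

    import * as THREE from 'three';

    // Hypothetical lookup from detected expression labels to predefined
    // animation clip names; every key and value here is a placeholder.
    const EXPRESSION_TO_ANIMATION = {
      happy: 'Jump',
      sad: 'Sad',
      curious: 'HeadTilt',
      surprised: 'Surprised',
    };

    // mixer is a THREE.AnimationMixer for the avatar; clips is the array of
    // THREE.AnimationClip objects that came with the loaded model.
    function onExpressionDetected(label, mixer, clips) {
      const clipName = EXPRESSION_TO_ANIMATION[label];
      if (!clipName) return;
      const clip = THREE.AnimationClip.findByName(clips, clipName);
      if (clip) mixer.clipAction(clip).reset().fadeIn(0.3).play();
    }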

UPstartDeveloper commented 3 years ago

Engineering Standup: April 10, 2021

  1. Previous: N/A
  2. Today: Added pseudocode for the different approaches to this feature on this Google Doc.
  3. Blockers: not sure if we are better off using the Jeeliz Weboji model or TensorFlow.js' Facemesh model. One is more specific to our use case, while the other has more reliable performance.
  4. Next Steps: going to clarify how objects are loaded in Three.js, since that could also influence our choice of model.
UPstartDeveloper commented 3 years ago

Engineering Standup: April 11, 2021

  1. Previous: developed pseudocode for the emotion tracking feature.
  2. Today: started a new branch that has the robot with predefined animations, and incorporated the face-api.js NPM package to detect emotions in real time via the webcam (rough sketch of the detection loop after this list).
  3. Blockers: not sure which parts of the Three.js example I based the robot scene on actually control specific emotions.
  4. Next Steps: (listed by priority)
    1. use the emotions detected by the TensorFlow.js model as triggers to animate the robot
    2. add a loading screen to prevent apparent freezes: show the loading screen while the model loads, then load the robot scene
    3. add arms to the robot model
    4. extend the project to include more emotions
    5. add a debugger?
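
A rough sketch of that detection loop, assuming face-api.js is used as described above; the '/models' path, polling interval, and video element ID are placeholders, not this repo's actual setup:

    import * as faceapi from 'face-api.js';

    const video = document.getElementById('webcam'); // placeholder element ID

    async function startExpressionTracking() {
      // load the lightweight detector plus the expression classifier
      await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
      await faceapi.nets.faceExpressionNet.loadFromUri('/models');

      video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
      await video.play();

      // poll the webcam roughly ten times per second
      setInterval(async () => {
        const detections = await faceapi
          .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
          .withFaceExpressions();
        if (detections.length > 0) {
          // per-expression scores: neutral, happy, sad, angry, fearful,
          // disgusted, surprised -- these would drive the robot's animations
          console.log(detections[0].expressions);
        }
      }, 100);
    }

    startExpressionTracking();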
UPstartDeveloper commented 3 years ago

Engineering Standup: April 12, 2021

  1. Previous: initialized the Three.js scene and the expression tracking model.
  2. Today: learned that the face object is what adjusts the robot's expressions.
  3. Blockers: having trouble getting the face variable out of the loader.load callback and returning it, so that it can be used in the expression tracking script.
  4. Next Steps: will need to look more into how callback functions and scopes work in JS.
UPstartDeveloper commented 3 years ago

Engineering Standup: April 14, 2021

  1. Previous: found a way to edit the global face variable from within the loader.load function (sketched after this list).
  2. Today: Deployed a basic version of the expression tracking app to the repo. Will need to add it to README later.
  3. Blockers: need to improve latency.
  4. Next Steps: after that, will need to look into making it more usable, e.g. adding a loading animation that plays while the Three.js and TensorFlow.js dependencies load.
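
A minimal sketch of that loader pattern, assuming a RobotExpressive-style model as in the Three.js example; the file path and mesh name are assumptions, not this repo's actual values:

    import * as THREE from 'three';
    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

    const scene = new THREE.Scene();
    let face; // module-scoped; stays undefined until the model finishes loading

    // loader.load() is asynchronous, so the face mesh cannot simply be
    // `return`ed from inside the onLoad callback; assigning to the outer
    // (module-scoped) variable makes it visible to the tracking code.
    new GLTFLoader().load('models/RobotExpressive.glb', (gltf) => {
      scene.add(gltf.scene);
      face = gltf.scene.getObjectByName('Head_4'); // mesh name from the Three.js example; may differ here
    });

    // called from the expression tracking loop
    function applyExpression(morphIndex, value) {
      if (!face) return; // the model may still be loading
      face.morphTargetInfluences[morphIndex] = value;
    }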
UPstartDeveloper commented 3 years ago

Engineering Standup: April 21, 2021

  1. Previous: got feedback on how to improve the robot demo.
  2. Today: improved the presentation of the robot demo: added a title page, and hid the freezes by having the robot start out standing still. Also added a button to let the user stop the demo without refreshing.
  3. Blockers: still trying to find the balance between animations that are too slow and ones that are too responsive.
  4. Next Steps: (listed by priority)
    1. improve the balance between latency and stability in the animations (see the smoothing sketch after this list)
    2. extend the project to include more emotions (e.g. disgusted, fearful, happy)
    3. improve the loading animation
    4. add arms to the robot model
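
One way to work on that balance (a sketch of a possible approach, not the current implementation) is to smooth the raw expression scores with an exponential moving average and only switch animations when the dominant expression changes and clears a threshold:

    // Hypothetical smoothing layer between detection and animation.
    // ALPHA trades responsiveness (higher) against stability (lower);
    // THRESHOLD keeps weak, flickering detections from triggering animations.
    const ALPHA = 0.3;
    const THRESHOLD = 0.6;
    const smoothed = {};          // running averages keyed by expression name
    let currentExpression = null; // the expression the robot is currently showing

    // expressions is a plain map of scores, e.g. detections[0].expressions
    function updateExpression(expressions) {
      for (const [name, score] of Object.entries(expressions)) {
        smoothed[name] = ALPHA * score + (1 - ALPHA) * (smoothed[name] ?? score);
      }

      // pick the highest smoothed score and only react when it changes
      const [topName, topScore] = Object.entries(smoothed)
        .sort((a, b) => b[1] - a[1])[0];
      if (topName !== currentExpression && topScore > THRESHOLD) {
        currentExpression = topName;
        // trigger the matching robot animation here
      }
    }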
UPstartDeveloper commented 3 years ago

Engineering Standup: May 29, 2021

  1. Previous: wrote a summary of the progress thus far.
  2. Today: Found that perhaps we can create an even more seamless UX if we use the Weboji model - we could follow this tutorial with the original Robot model here.
  3. Blockers: This model requires that we cut Roberto down to just a head, and we will also need to make more shape keys before we can generate the meshes the Weboji model needs to animate properly.
  4. Next Steps: going to check in w/ other project leads on this.
    1. We can look at this video to decide if the new Weboji model has the stability or responsiveness for our needs.
    2. Also, we can follow this spec to guide us in making the new shape keys for Weboji.
    3. For now, I will continue w/ adding expanded emotion animations. This will stick to the approach we have w/ the face-api.js model right now, predicting user expressions and activating animations on the robot accordingly.
UPstartDeveloper commented 3 years ago

Engineering Standup: May 30, 2021

  1. Previous: opened up PR #24 to keep a record of how the new Roberto model is working.

  2. Today:

    1. Loaded in the new model (with additional shape keys for the "happy", "disgust", and "afraid" expressions).
    2. Then I inserted the URL to load this GLTF file in robot.js on line 61.
    3. Finally, to update the animation loop, I added three lines of code to step C of the alterExpressions function to account for the new expressions, so it looked like this:
      // C: only change if new detections are different from existing values
      const newEmotionValues = [
          detections[0].expressions.angry,
          detections[0].expressions.surprised,
          detections[0].expressions.sad,
          detections[0].expressions.happy,
          // note: face-api.js exposes these last two scores as "fearful" and
          // "disgusted" (it has no "afraid" or "disgust" properties)
          detections[0].expressions.fearful,
          detections[0].expressions.disgusted,
      ];
  3. Blockers: For some reason, this new model is still missing the robot arms and is silver in color (the image will be linked below). Here is the link to the .blend file that the GLTF file was created from, as I am still trying to learn more about exporting .blend files to .glb. Also, this error appeared in the console, indicating there may be something wrong with the new animation loop:

      TypeError: Cannot read property 'morphTargetDictionary' of undefined
          at createGUI (robot.js:167)
  4. Next Steps: I think we'll need to better understand how to export .blend files to GLTF. One thing that may help is looking at the RobotV2_report.json file I uploaded (in the same expressionTrackingDemos/models/ folder), which could help us validate whether the process I used (clicking the export button in Blender) is working OK or not. Then, for the animation loop, I could try and see what properties are actually accessible in the morphTargetDictionary object (see the sketch below).
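
As a debugging aid for that TypeError (a sketch under the assumption that the mesh lookup returned undefined because the new export uses different names), traversing the loaded scene prints which meshes actually carry morph targets and what their keys are:

    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

    // List every mesh in the loaded GLTF that has morph targets, so we can see
    // which names and shape keys the new export actually produced.
    new GLTFLoader().load('models/RobotV2.glb', (gltf) => { // placeholder file name
      gltf.scene.traverse((node) => {
        if (node.isMesh && node.morphTargetDictionary) {
          console.log(node.name, Object.keys(node.morphTargetDictionary));
        }
      });
    });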

Silver Robot image

How do we make sure the colors of a .blend model stay with it when we export to GLTF?