vladmandic / human

Human: AI-powered 3D Face Detection & Rotation Tracking, Face Description & Recognition, Body Pose Tracking, 3D Hand & Finger Tracking, Iris Analysis, Age & Gender & Emotion Prediction, Gaze Tracking, Gesture Recognition
https://vladmandic.github.io/human/demo/index.html
MIT License

You may need an additional loader to handle the result of these loaders. #204

Closed ryansaam closed 3 years ago

ryansaam commented 3 years ago

**Issue Description** Unable to run the project

**Steps to Reproduce** I followed the "2.2 With Bundler" install steps here (a minimal sketch of that setup is below)

**Expected Behavior** The project runs

**Environment** React.js, Node v16.13.0
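For reference, the bundler-style setup boils down to installing the package and importing the class. A minimal sketch, assuming npm and a webpack-based React app; the exact steps are on the linked wiki page:

```js
// npm install @vladmandic/human
import Human from '@vladmandic/human'

const human = new Human()
// later, e.g.: const result = await human.detect(videoElement)
```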

import { useState, useRef, useEffect, useCallback } from 'react'
import Webcam from 'react-webcam'
import '@tensorflow/tfjs'
import * as facemesh from '@tensorflow-models/face-landmarks-detection'
import * as poseDetection from '@tensorflow-models/pose-detection'
import Human from '@vladmandic/human'

import { drawMesh, drawPose } from "./utilities.js"

function App() {
  const [updateCanvas, setUpdateCanvas] = useState(0)
  const camera = useRef(null)
  const canvas = useRef(null)
  const times = useRef([])
  const fps = useRef(0)
  const fpsDisplay = useRef(null)

  const human = new Human()

  useEffect(() => {
    let timeoutId
    if (camera.current !== undefined && camera.current !== null) {
      // size the overlay canvas to match the webcam video
      const width = camera.current.video.clientWidth
      const height = camera.current.video.clientHeight

      canvas.current.width = width
      canvas.current.height = height

      console.log(camera.current.video.clientWidth)
      console.log(camera.current.video.clientHeight)

      // re-run this effect a couple of times so the canvas picks up the final video size
      if (updateCanvas < 2) {
        timeoutId = setTimeout(() => { setUpdateCanvas(updateCanvas + 1) }, 1000)
      } else {
        clearTimeout(timeoutId)
      }

      if (updateCanvas === 1) {
        detectCallBack()
      }
    }
    return () => { clearTimeout(timeoutId) }
  }, [camera.current, updateCanvas])

  const detectCallBack = useCallback(() => { runMLModels() }, [])

  async function runMLModels() {
    // load face mesh
    const model = await facemesh.load(
      facemesh.SupportedPackages.mediapipeFacemesh,
      { maxFaces: 3 }
    )
    console.log("facemesh loaded")

    // load pose detection
    const detectorConfig = { modelType: poseDetection.movenet.modelType.MULTIPOSE_LIGHTNING }
    const detector = await poseDetection.createDetector(poseDetection.SupportedModels.MoveNet, detectorConfig)
    console.log("posedetection loaded")

    async function detect() {
      // estimate faces
      const predictions = await model.estimateFaces({
        input: camera.current.video,
        // flipHorizontal: true,
        predictIrises: false
      })

      // get poses
      const poses = await detector.estimatePoses(camera.current.video)

      // set canvas
      const ctx = canvas.current.getContext("2d")
      ctx.clearRect(0, 0, canvas.current.clientWidth, canvas.current.clientHeight)

      // draw
      drawMesh(predictions, ctx)
      drawPose(poses, ctx)

      // age and emotion detection with Human (currently disabled):
      // select the input HTMLVideoElement and output HTMLCanvasElement from the page
      // and perform processing using the default configuration
      // const result = await human.detect(camera.current.video)
      // human.draw.canvas(result.canvas, canvas.current)
      // human.draw.face(outputCanvas, result.face)
      // the result object contains the detected details as well as the processed canvas itself,
      // so first draw the processed frame on the canvas, then draw the results on the same canvas,
      // and loop immediately to the next frame

      // FPS calculation: count how many frames were rendered in the last second
      const now = performance.now()
      while (times.current.length > 0 && times.current[0] <= now - 1000) {
        times.current.shift()
      }
      times.current.push(now)
      fps.current = times.current.length

      fpsDisplay.current.textContent = `FPS: ${fps.current}`

      window.requestAnimationFrame(detect)
    }

    window.requestAnimationFrame(detect)
  }

  return (
    <div>
      <Webcam ref={camera} />
      <canvas ref={canvas} />
      <p ref={fpsDisplay}>FPS: {fps.current}</p>
    </div>
  );
}

export default App;


- Type of module used: `esm-nobundle`
- TensorFlow/JS version: ^3.11.0
- Browser: Chrome 95.0.4638.69
- OS and Hardware platform: macOS
- Packager: webpack (default with npx create-react-app)
- Framework: React

**Diagnostics**

- Check out any applicable [diagnostic steps](https://github.com/vladmandic/human/wiki/Diag)

**Additional**

- For installation or startup issues include your `package.json`

{ "name": "cv-debug", "version": "0.1.0", "private": true, "dependencies": { "@mediapipe/pose": "^0.4.1633558788", "@tensorflow-models/face-landmarks-detection": "^0.0.1", "@tensorflow-models/pose-detection": "^0.0.6", "@tensorflow/tfjs": "^3.11.0", "@tensorflow/tfjs-converter": "^3.11.0", "@testing-library/jest-dom": "^5.11.4", "@testing-library/react": "^11.1.0", "@testing-library/user-event": "^12.1.10", "@vladmandic/human": "^2.4.3", "react": "^17.0.2", "react-dom": "^17.0.2", "react-scripts": "4.0.3", "react-webcam": "^6.0.0", "web-vitals": "^1.0.1" }, "scripts": { "start": "react-scripts start", "build": "react-scripts build", "test": "react-scripts test", "eject": "react-scripts eject" }, "eslintConfig": { "extends": [ "react-app", "react-app/jest" ] }, "browserslist": { "production": [ ">0.2%", "not dead", "not op_mini all" ], "development": [ "last 1 chrome version", "last 1 firefox version", "last 1 safari version" ] } }


Error:

version.ts:4 Uncaught Error: Module parse failed: Unexpected token (512:106)
File was processed with these loaders:
 * ./node_modules/react-scripts/node_modules/babel-loader/lib/index.js
You may need an additional loader to handle the result of these loaders.
|     let target = null;
|     let flipY = false;
>     if (drawCount === 0) source = sourceTexture;else source = getTempFramebuffer(currentFramebufferIndex)?.texture || null;
|     drawCount++;

    at Object../node_modules/@vladmandic/human/dist/human.esm-nobundle.js (version.ts:4)
    at __webpack_require__ (bootstrap:856)
    at fn (bootstrap:150)
    at Module.<anonymous> (App.css?dde5:82)
    at Module../src/App.js (App.js:137)
    at __webpack_require__ (bootstrap:856)
    at fn (bootstrap:150)
    at Module.<anonymous> (index.css?bb0a:82)
    at Module../src/index.js (index.js:18)
    at __webpack_require__ (bootstrap:856)
    at fn (bootstrap:150)
    at Object.1 (utilities.js:71)
    at __webpack_require__ (bootstrap:856)
    at checkDeferredModules (bootstrap:45)
    at Array.webpackJsonpCallback [as push] (bootstrap:32)
    at main.chunk.js:1

Error:

index.js:1 ./node_modules/@vladmandic/human/dist/human.esm-nobundle.js 512:106
Module parse failed: Unexpected token (512:106)
File was processed with these loaders:
 * ./node_modules/react-scripts/node_modules/babel-loader/lib/index.js
You may need an additional loader to handle the result of these loaders.
|     let target = null;
|     let flipY = false;
>     if (drawCount === 0) source = sourceTexture;else source = getTempFramebuffer(currentFramebufferIndex)?.texture || null;
|     drawCount++;
|
vladmandic commented 3 years ago

Human is delivered as an ES2020 module, but amazingly even the latest create-react-app sets the app up to use a very old Babel 7.0, which is not compatible with ES2020
(ES2020 support was introduced in Babel 7.8, released in January 2020; I don't know why FB uses such old versions in create-react-app)
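Concretely, the token the parser rejects at 512:106 is ES2020 optional chaining (`?.`). To illustrate what trips it up, here is that statement rewritten without ES2020 syntax (illustration only, with stand-in stubs so the snippet runs on its own; not the actual Human source):

```js
// stand-ins; names mirror the snippet shown in the error output
let source = null;
const currentFramebufferIndex = 0;
const getTempFramebuffer = (index) => ({ texture: `framebuffer-${index}` });

// ES2020 form that the old toolchain cannot parse:
// source = getTempFramebuffer(currentFramebufferIndex)?.texture || null;

// pre-ES2020 equivalent:
const fb = getTempFramebuffer(currentFramebufferIndex);
source = (fb && fb.texture) || null;
```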

You can either update your environment (a rough sketch of that route is at the end of this comment) or update Human to the latest one from GitHub (2.5), as I've just posted an update that includes polyfills for ES2018 compatibility

Note that Human 2.5 is not yet released on NPM (likely next week), but you can install it with:
`npm install git+https://github.com/vladmandic/human`
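If you go the "update your environment" route instead, the usual approach is to have Babel transpile the Human bundle down from ES2020 before webpack parses it. A rough sketch of such a rule, assuming an ejected create-react-app or an otherwise editable webpack config, with `babel-loader`, `@babel/plugin-proposal-optional-chaining`, and `@babel/plugin-proposal-nullish-coalescing-operator` installed (a sketch, not an official recipe):

```js
// webpack.config.js (excerpt, sketch only)
module.exports = {
  // ...rest of the existing configuration...
  module: {
    rules: [
      {
        // run the Human ESM bundle through Babel so ES2020 syntax such as `?.`
        // is transpiled before webpack's own parser sees it
        test: /\.m?js$/,
        include: /node_modules[\\/]@vladmandic[\\/]human/,
        use: {
          loader: 'babel-loader',
          options: {
            plugins: [
              '@babel/plugin-proposal-optional-chaining',
              '@babel/plugin-proposal-nullish-coalescing-operator',
            ],
          },
        },
      },
    ],
  },
};
```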