CodeLabClub / scratch3_knn

Scratch3 extension: knn classifier

Question about the canvas #7

Closed mickwubs97 closed 4 years ago

mickwubs97 commented 4 years ago

Sorry, but I feel I still need to raise one more question. Using the approach @summerscar suggested, face-api can now produce results (see the console screenshot). However, the statement const canvas = faceapi.createCanvasFromMedia(this.video) no longer seems to execute correctly; the console shows:

    createCanvas.js:16 Uncaught Error: createCanvasFromMedia - media has not finished loading yet
        at Module.createCanvasFromMedia (createCanvas.js:16)
        at Scratch3faceapi.faceDetection (index.js:733)
        at extension-manager.js:428
        at blockInfo.func (extension-manager.js:435)
        at execute (execute.js:523)
        at Sequencer.stepThread (sequencer.js:212)
        at Sequencer.stepThreads (sequencer.js:128)
        at Runtime._step (runtime.js:2031)
        at runtime.js:2552

Is there a difference between this.video and video? Does the const canvas = faceapi.createCanvasFromMedia(this.video) statement need to be placed somewhere else? Or could something else be wrong?

I need this canvas because I want to use several of the faceapi.draw functions. Sorry for bothering you repeatedly; I feel I am very close to a working intermediate result, so I am hoping for some direct advice. Many thanks.

The relevant part of faceDetection() is as follows:

    faceDetection() {
        if (this.globalVideoState === VideoState.OFF) {
            console.log('请先打开摄像头')
            return
        } else {
            this.runtime.ioDevices.video.enableVideo().then(() => {
                // get the video data
                this.video = this.runtime.ioDevices.video.provider.video
                //this.video.width = this.runtime.ioDevices.video.provider.video.width
                //this.video.height = this.runtime.ioDevices.video.provider.video.height
            });
        }

        //const video = document.getElementsByTagName('video')
        //video.addEventListener('play', () => {
            const canvas = faceapi.createCanvasFromMedia(this.video)
            //console.log("fffff")
            document.body.append(canvas)
            const displaySize = {width: 408, height: 306}
            faceapi.matchDimensions(canvas, displaySize)
            setInterval(async () => {
                //const detections1 = await faceapi.detectSingleFace(this.video, new faceapi.TinyFaceDetectorOptions({ inputSize: 224, scoreThreshold: 0.5 }))
                const detections = await faceapi.detectAllFaces(this.video, new faceapi.TinyFaceDetectorOptions({ inputSize: 224, scoreThreshold: 0.5 })).withFaceLandmarks().withFaceExpressions()
                //console.log(detections1)
                console.log(detections)
                const resizedDetections = faceapi.resizeResults(detections, displaySize)
                canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height)
                faceapi.draw.drawDetections(canvas, resizedDetections)
                faceapi.draw.drawFaceLandmarks(canvas, resizedDetections)
                faceapi.draw.drawFaceExpressions(canvas, resizedDetections)
            }, 100)
        //})
    }
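
For reference, face-api.js throws this error when the media element passed to createCanvasFromMedia has not finished loading. A minimal guard, sketched here for illustration only (waitForVideo is not a name from this thread), could wait until the video element reports frame data before creating the canvas:

    // Sketch: poll the video element until it has current frame data
    // (readyState >= HAVE_CURRENT_DATA) before handing it to face-api.js.
    waitForVideo () {
        return new Promise(resolve => {
            const check = () => {
                if (this.video && this.video.readyState >= HTMLMediaElement.HAVE_CURRENT_DATA) {
                    resolve(this.video);
                } else {
                    setTimeout(check, 100);
                }
            };
            check();
        });
    }

    // usage inside faceDetection():
    // this.waitForVideo().then(video => {
    //     const canvas = faceapi.createCanvasFromMedia(video);
    //     ...
    // });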
summerscar commented 4 years ago

Official documentation on writing extensions

    if (this.globalVideoState === VideoState.OFF) {
        console.log('请先打开摄像头')
        return
    } else {
        this.runtime.ioDevices.video.enableVideo().then(() => {
            // get the video data
            this.video = this.runtime.ioDevices.video.provider.video
            //this.video.width = this.runtime.ioDevices.video.provider.video.width
            //this.video.height = this.runtime.ioDevices.video.provider.video.height
        });

I suggest moving this part of the code into a separate block method, such as a "turn on camera" block, or into the class constructor (so the camera is turned on as soon as the extension loads). Enabling the camera is an asynchronous operation, so by the time execution reaches const canvas = faceapi.createCanvasFromMedia(this.video), the video data may not have been obtained yet, which is why the error is thrown.

Also, judging from your code, the variable video is never declared (with let/var/const), so it has no value. this.video is assigned in the code above, so use this.video rather than the bare video variable.
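
A rough sketch of that suggestion, with the camera enabled in the constructor so that this.video is assigned before any detection block runs (the structure below is illustrative and only reuses names that appear in this thread):

    class Scratch3faceapi {
        constructor (runtime) {
            this.runtime = runtime;
            this.video = null;

            // Enable the camera as soon as the extension loads and keep a
            // reference to the underlying video element once it is ready.
            this.runtime.ioDevices.video.enableVideo().then(() => {
                this.video = this.runtime.ioDevices.video.provider.video;
            });
        }

        faceDetection () {
            // A block method can then check that the video is ready before use.
            if (!this.video) {
                console.log('video not ready yet');
                return;
            }
            // ... create the canvas and start detection here ...
        }
    }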

=========================================================

I just tried it myself: with the camera confirmed to be on, using the code below as the method called by a block such as "start detection" runs correctly.

Note: mirroring needs to be enabled; by default the video data is flipped relative to the canvas in the upper right.
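
If enabling mirroring on the Scratch side is inconvenient, one possible workaround (a sketch, not from this thread) is to flip the overlay canvas itself with a CSS transform so the drawn detections line up with the mirrored stage video:

    // Assumption: flipping the overlay horizontally is acceptable here.
    canvas.style.transform = 'scaleX(-1)';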

Make sure the camera is on before calling the method below. You can open the camera in the class constructor, or open it manually with a dedicated block; otherwise you will get the "media has not finished loading yet" error shown above.

    faceDetection() {
        return new Promise((resolve, reject) => {

            const originCanvas = this.runtime.renderer._gl.canvas  // the Scratch stage canvas in the upper right
            const canvas = faceapi.createCanvasFromMedia(this.video)  // create a canvas to draw on

            canvas.width = 480
            canvas.height = 360

            // overlay the drawing canvas on top of the original canvas
            originCanvas.parentElement.style.position = 'relative'
            canvas.style.position = 'absolute'
            canvas.style.top = '0'
            canvas.style.left = '0'
            originCanvas.parentElement.append(canvas)

            // detect in a loop and draw the results
            this.timer = setInterval(async () => {
                const results = await faceapi
                    .detectSingleFace(this.video, new faceapi.TinyFaceDetectorOptions({ inputSize: 224, scoreThreshold: 0.5 }))
                    .withFaceLandmarks()
                    .withFaceExpressions()

                // draw only once detection results have been obtained
                if (results) {
                    const displaySize = {width: 480 , height: 360}
                    const resizedDetections = faceapi.resizeResults(results, displaySize)
                    canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height)
                    faceapi.draw.drawDetections(canvas, resizedDetections)
                    faceapi.draw.drawFaceLandmarks(canvas, resizedDetections)
                    faceapi.draw.drawFaceExpressions(canvas, resizedDetections) 
                }
                resolve('success')
            }, 1000);
        })
    }
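
Since the interval id is stored in this.timer, a matching "stop detection" block could clear it when detection is no longer needed; a minimal sketch (the method name is illustrative):

    stopDetection () {
        // Stop the detection loop started by faceDetection().
        if (this.timer) {
            clearInterval(this.timer);
            this.timer = null;
        }
    }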
