ericblade / quagga2

An advanced barcode-scanner written in Javascript and TypeScript - Continuation from https://github.com/serratus/quaggajs
MIT License

fixed scanning canvas #376

Open M-Barari opened 3 years ago

M-Barari commented 3 years ago

Hi, first of all thank you, and second, this is more a question than an issue. I'm trying to build something like a cellphone's native scanner: a full-screen camera output with a rectangle in the center that shows a red line through it every time a scan succeeds. I set locate: false and used the area property to restrict scanning to the center ({top: "35%", bottom: "35%", left: "10%", right: "10%"}), but it causes two issues: 1) I think it messes up the actual scanning zone; 2) it definitely messes up the success overlay (the rectangle and line-through get shifted toward the bottom-right). So how can I achieve that? And how can I keep the red success line-through visible for a defined time (right now it scans again immediately, scan after scan) so I can decide whether to scan again or do something else?

Oh, btw, I'm on Nuxt 2.15.4 and here's my code:

<template>
    <div>
      <div id="interactive" class="viewport"></div>
      {{code}}
    </div>
</template>

<style lang="scss">
#interactive.viewport {
  width: 640px;
  height: 480px;
}
#interactive.viewport canvas, video {
  float: left;
  width: 640px;
  height: 480px;
}
#interactive.viewport canvas.drawingBuffer, video.drawingBuffer {
  margin-left: -640px;
}
@media (max-width: 603px) {
  #interactive.viewport {
    width: 100vw;
    height: auto;
    overflow: hidden;
  }
  #interactive.viewport canvas, video {
    width: 100vw;
    height: auto;
  }
  #interactive.viewport canvas.drawingBuffer, video.drawingBuffer {
    margin-left: -100vw;
  }
}
</style>

<script>
import Quagga from '@ericblade/quagga2';
export default {
  data() {
    return {
      code:[]
    };
  },
  methods:{
    async checkDevice() {
      let md = navigator.mediaDevices;
      if (!md || !md.enumerateDevices) return false;
      const devices = await navigator.mediaDevices.enumerateDevices();
      return devices.some((device) => "videoinput" === device.kind);
    },
    initQuagga(){
      Quagga.init({
        inputStream : {
          name : "Live",
          type : "LiveStream",
          area: {
            top: "35%",
            right: "10%",
            left: "10%",
            bottom: "35%",
          },
        },
        locate: false,
        decoder : {
          readers : [
            "ean_reader",
          ],
        }
      }, (err) => {
        if (err) {
          console.error(err);
          return;
        }
        // this.checkCapabilities();
        this.startQuagga()
      });
    },
    // checkCapabilities(){
    //   var track = Quagga.CameraAccess.getActiveTrack();
    //   var capabilities = {};
    //   if (typeof track.getCapabilities === 'function') {
    //       capabilities = track.getCapabilities();
    //   }
    //   this.applySettingsVisibility('zoom', capabilities.zoom);
    //   this.applySettingsVisibility('torch', capabilities.torch);
    // },
    // applySetting: function(setting, value) {
    //     var track = Quagga.CameraAccess.getActiveTrack();
    //     if (track && typeof track.getCapabilities === 'function') {
    //         switch (setting) {
    //         case 'zoom':
    //             return track.applyConstraints({advanced: [{zoom: parseFloat(value)}]});
    //         case 'torch':
    //             return track.applyConstraints({advanced: [{torch: !!value}]});
    //         }
    //     }
    // },
    startQuagga(){
      Quagga.start()
      Quagga.onProcessed(function (result) {
        let drawingCtx = Quagga.canvas.ctx.overlay,
          drawingCanvas = Quagga.canvas.dom.overlay;
        if (result) {
          if (result.boxes) {
            drawingCtx.clearRect(0, 0, parseInt(drawingCanvas.getAttribute("width")), parseInt(drawingCanvas.getAttribute("height")));
            result.boxes.filter(function (box) {
              return box !== result.box;
            }).forEach(function (box) {
              Quagga.ImageDebug.drawPath(box, {x: 0, y: 1}, drawingCtx, {color: "#00F", lineWidth: 2});
            });
          }
          if (result.box) {
            Quagga.ImageDebug.drawPath(result.box, {x: 0, y: 1}, drawingCtx, {color: "#008", lineWidth: 2});
          }

          if (result.codeResult && result.codeResult.code) {
            Quagga.ImageDebug.drawPath(result.line, {x: 'x', y: 'y'}, drawingCtx, {color: 'red', lineWidth: 3});
          }
        }
      })
      Quagga.onDetected(this.onDetected);
    },
    onDetected(data) {
      let barCodeData = data.codeResult.code;
      console.log(barCodeData);
    },
  },
  async mounted(){
    let data = await this.checkDevice();
    if (data) {
      this.initQuagga();
    }
  },
  beforeDestroy(){
    Quagga.stop()
  }
}
</script>

github-actions[bot] commented 3 years ago

Thank you for filing an issue! Please be patient. :-)

ericblade commented 3 years ago

I wish I could say that I have an exact answer, but I don't -- different mobile devices sometimes seem to handle this very differently, or perhaps it's just the widely varying versions of Android/iOS out there in the field.

I can't say I have a solution for the canvas part that I'm particularly happy with. It generally works for my purposes, but it completely freaks out if you do something like rotate your device to landscape mode.

What I did was start out with just an HTML page containing a video and a canvas element, used CSS to size and position them correctly on top of each other, and verified that I could draw to the canvas and it would appear where I wanted it to.

Then I took those two elements and their CSS, put them into my app, and passed the video to Quagga init as the viewport (roughly as sketched below).
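As a rough sketch only: the #scanner-container id and the constraints here are placeholder choices for illustration, not anything the library requires, and the CSS is summarized in comments rather than a full stylesheet.

import Quagga from '@ericblade/quagga2';

// Pass your own container element as inputStream.target; Quagga appends its
// <video> and the drawingBuffer <canvas> into it, and the CSS keeps them
// stacked on top of each other.
Quagga.init({
  inputStream: {
    type: 'LiveStream',
    target: document.querySelector('#scanner-container'), // or a selector string
    constraints: { facingMode: 'environment' },
  },
  decoder: { readers: ['ean_reader'] },
}, (err) => {
  if (err) {
    console.error(err);
    return;
  }
  Quagga.start();
});

// CSS idea for the container:
// #scanner-container { position: relative; }
// #scanner-container video,
// #scanner-container canvas.drawingBuffer {
//   position: absolute; top: 0; left: 0; width: 100%; height: 100%;
// }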

This got me reasonably close, although it worked pretty poorly on my previous cell phone: Android 7 or 8 (or whatever version it had) would only give me a video view at the exact dimensions of the camera output, without resizing it to fit the display, so my canvas never stayed the same size as the video, even though the canvas was contained within the video at the time.

I agree that there should be a better sample that puts something together for this quite common case, and I hope I'll get some time to actually work on that, especially since I'd love to support area-based scanning in my own application. In the meantime, though, I suggest getting it to display exactly the way you want in a completely bare HTML/CSS environment, and then bringing it into your app.

As far as stopping scanning briefly to display some kind of indicator, that's a lot more specific to the intent of your application. My application, for example, draws the code status with boxes and such every frame, but when it detects a code, it runs that result through an error check, and if it finds something that looks likely to be an actual code, it plays a beep and shuts off the scanner (a rough sketch of that kind of check is below).
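One common way to do that kind of error check (not necessarily exactly what my app does) uses the per-digit error values in result.codeResult.decodedCodes; a sketch, with 0.1 as an illustrative threshold rather than a library constant:

// Reject detections whose median per-digit error is high; only entries that
// actually carry an error value are considered.
function getMedian(numbers) {
  const sorted = [...numbers].sort((a, b) => a - b);
  const half = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[half] : (sorted[half - 1] + sorted[half]) / 2;
}

function isLikelyValid(result) {
  const errors = result.codeResult.decodedCodes
    .filter((d) => d.error !== undefined)
    .map((d) => d.error);
  return errors.length > 0 && getMedian(errors) < 0.1;
}

Quagga.onDetected((result) => {
  if (!isLikelyValid(result)) return; // probably a misread; keep scanning
  console.log('accepted', result.codeResult.code);
  // play a beep / stop the scanner here
});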

If you just want to display a status message, icon, red line, or whatever for a few seconds, you should be able to call Quagga.pause() and then Quagga.start() again when you're ready to resume processing (although I'm just now realizing that's a part of the API I probably haven't ever tested, so if you try it, let me know whether it works).
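In other words, something along these lines in the onDetected handler (untested, per the above, and the 2000 ms delay is just an example value):

Quagga.onDetected((result) => {
  console.log('detected', result.codeResult.code);
  Quagga.pause();               // stop processing frames; the last drawn overlay stays on screen
  setTimeout(() => {
    Quagga.start();             // resume scanning when ready
  }, 2000);
});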