Pattern-mining module for detecting key-points in images 🎈 Maintained @ https://github.com/publiclab/matcher-core

matcher-core: ORB-focused pattern-miner for PublicLab


Installation

Simply run npm i matcher-core.

Also, when using ARM-based devices, it is highly recommended to additionally install the following packages: libxss1, libx11-xcb1, libxcomposite1, libxcursor1, libxdamage1, libxfixes3, libxi6, libxtst6, libnss3, libgconf-2-4, libxrandr2, libasound2, libatk1.0-0, libgtk2.0-0, libgtk-3-0, libxinerama1, and libcairo-gobject2. These can be installed easily using npm run fetch.

Description

matcher-core employs the ORB (Oriented FAST and Rotated BRIEF) algorithm to mine patterns, combining the well-known FAST (Features from Accelerated Segment Test) keypoint detector with the BRIEF (Binary Robust Independent Elementary Features) descriptor, which together provide an appreciable performance boost at low computational cost. Without going too deep into the details, this combination of speed and low computational cost was the main reason for building this module around ORB.

Overview

The process of generating matches has two phases: finding and matching. Finding, i.e. identifying interest points in an image, is done using the findPoints method. It passes a cornersArray to the points property of the global utils object, which can be stored for later use. Finding takes a few hundred milliseconds for images of standard size (~720p).

Please note that the "global utils object" mentioned above is passed as a parameter to the callback function, from where it can be accessed. See this example:

  new Matcher('path/to/image1.png', 'path/to/image2.png',
    async function (r) { // r here is the passed utils object
      const res = await r;
      console.log(res.points);
      console.log(res.matched_points);
    });

The output (res.points) is in the following format:

[{"x":37,"y":261},
 {"x":482,"y":402},
 {"x":84,"y":331}, ...]
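
Since these corners are plain coordinate objects, they can be stored for later use, as noted above. Below is a minimal Node sketch under the assumption that require('matcher-core') resolves to the Matcher constructor used in these examples; the output file name is illustrative:

  const fs = require('fs');
  const Matcher = require('matcher-core'); // assumption: the module resolves to the Matcher constructor

  new Matcher('path/to/image1.png', 'path/to/image2.png',
    async function (r) {
      const res = await r;
      // Persist the detected corners so they can be reused without re-running detection.
      fs.writeFileSync('corners.json', JSON.stringify(res.points, null, 2));
    });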

Matching is done with the findMatchedPoints function, which passes a matchesArray to the matched_points property of the global utils object, in the following format (res.matched_points):

[{"confidence":{"c1":63,"c2":187},"x1":359,"y1":48,"x2":65,"y2":309,"population":9},
 {"confidence":{"c1":124,"c2":169},"x1":260,"y1":333,"x2":546,"y2":295,"population":9}, ...]

It runs slower than the point-finding step due to the added computational overhead of comparing both images for matches.

The findMatchedPoints function depends upon the values served back into its lexical scope by the findPoints function, which in turn depends upon the user-supplied params argument (see below) and is solely responsible for generating the cornersArray used to instantiate the matchesArray. findMatchedPoints is then called and the appropriate values are set in the cache.
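
As a rough illustration of consuming this output, the matched coordinate pairs can be iterated over directly; the sketch below simply logs each correspondence, using the format shown above:

  new Matcher('path/to/image1.png', 'path/to/image2.png',
    async function (r) {
      const res = await r;
      // Each entry pairs a point (x1, y1) in the first image with (x2, y2) in the second.
      res.matched_points.forEach(function (m) {
        console.log(`(${m.x1}, ${m.y1}) -> (${m.x2}, ${m.y2})`, 'confidence:', m.confidence);
      });
    });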

Arguments

This library takes a set of different options whose expanded map is provided below. For more information about these options, check out the Codeflow section of this documentation below.

  new Matcher(<Image>, <Image>,
    <Object(callback)>, {
      query: <String>,
      caching: <Bool>,
      leniency: <Integer>,
      dimensions: <Object(array)>,
      params: {
        blur_size: <Integer>,
        matchThreshold: <Integer>,
        lap_thres: <Integer>,
        eigen_thres: <Integer>
      }
    });

All arguments other than the two images and the callback function are required to be initialized.
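
For illustration, a fully specified call might look like the sketch below; the concrete option values are arbitrary placeholders rather than recommended defaults, and the exact shape of dimensions is an assumption:

  new Matcher('path/to/image1.png', 'path/to/image2.png',
    async function (r) {
      const res = await r;
      console.log(res.matched_points);
    }, {
      query: 'path/to/image1.png',              // placeholder value
      caching: true,                            // placeholder value
      leniency: 30,                             // placeholder value
      dimensions: { width: 1280, height: 720 }, // assumed shape; see <Object(array)> above
      params: {
        blur_size: 5,                           // placeholder value
        matchThreshold: 48,                     // placeholder value
        lap_thres: 30,                          // placeholder value
        eigen_thres: 25                         // placeholder value
      }
    });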

Setup

Node

Promise.resolve(require('matcher-core')).then(fetchPoints);
function fetchPoints(results) { /* ... */ }

Browser
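
A minimal browser sketch, assuming the library's bundled script has already been loaded on the page (the bundle path below is hypothetical) and exposes the Matcher constructor globally:

  // Assumes the matcher-core bundle has already been loaded on the page, e.g. via
  // <script src="path/to/matcher-core-bundle.js"></script> (hypothetical path),
  // exposing the Matcher constructor globally.
  new Matcher('images/left.png', 'images/right.png',
    async function (r) {
      const res = await r;
      console.log(res.points, res.matched_points);
    });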

Extra

const instance = new orbify(<Image>, <Image>, <Object>, <Object>);
                        /*  ImageX^  ImageY^  callback^   args^ */
/*
* returns a set of detected corners
* and the set of 'rich' matches that
* are evaluated from it
*/
> {points: Array(9), matched_points: Array(500)}
/* which are formatted as depicted below */
{
  "matches": [
    {
      "confidence": {
        "c1": 63,
        "c2": 187
      },
      "x1": 359,
      "y1": 48,
      "x2": 65,
      "y2": 309,
      "population": 9
    },
    ...
  ],
  "corners": [
    {
      "x": 37,
      "y": 261
    },
    ...
  ]
}

Note: The coordinates returned above are in image-pixel space and are therefore independent of any surrounding canvas space. In simpler terms, they are the pixel positions (in each image's own x-y space) on both axes at which a point of interest was found.
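
For example, when drawing the results onto a canvas whose dimensions differ from the source image, the coordinates need to be scaled into canvas space first; a minimal sketch (the element ids and drawing style are illustrative, not part of the library):

  new Matcher('path/to/image1.png', 'path/to/image2.png',
    async function (r) {
      const res = await r;
      const img = document.getElementById('source-image'); // hypothetical element id
      const canvas = document.getElementById('overlay');   // hypothetical element id
      const ctx = canvas.getContext('2d');
      // Scale from image-pixel space into the canvas's own coordinate space.
      const sx = canvas.width / img.naturalWidth;
      const sy = canvas.height / img.naturalHeight;
      res.points.forEach(function (p) {
        ctx.fillRect(p.x * sx - 1, p.y * sy - 1, 3, 3); // mark each detected corner
      });
    });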

Demonstration

A live demonstration of the library is deployed from the gh-pages branch.

Building from source

Codeflow

The orbify function is at the core of this library and returns a promise; this should be kept in mind while extending the repository into one's own.
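
In other words, results arrive asynchronously. One way to consume them when building on top of the library is to wrap the callback-based entry point in a promise yourself; a minimal sketch, assuming the Matcher constructor is available as in the examples above (matchAsync is a hypothetical helper, not part of the library):

  function matchAsync(imageA, imageB, options) {
    return new Promise(function (resolve, reject) {
      new Matcher(imageA, imageB, async function (r) {
        try {
          resolve(await r); // resolves to { points, matched_points }
        } catch (err) {
          reject(err);
        }
      }, options);
    });
  }

  // Usage (inside an async function):
  // const res = await matchAsync('path/to/image1.png', 'path/to/image2.png', { /* options */ });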

All this being said, if you still have any questions regarding matcher-core's implementation, feel free to open an issue clearly specifying your doubts and pinging me (@rexagod) in the issue description.

LICENSE

GNU General Public License v3.0