EnoxSoftware / OpenCVForUnity

OpenCV for Unity (Unity Asset Plugin)
https://assetstore.unity.com/packages/tools/integration/opencv-for-unity-21088
550 stars 172 forks

FLANNBASED DescriptorMatcher always returns null on knnMatch #171

Closed tv-gc closed 8 months ago

tv-gc commented 8 months ago

Hi!

I am trying to convert the markerless example to work with multiple images. Correct me if I am wrong, but I believe I am required to use the FLANNBASED matcher in order for knnMatch to use all the descriptors I have put in the list (if I use BRUTEFORCE, it will only compare 1 to 1 and stop there).

The problem is that after changing it to FLANNBASED, I always get this error:

    CvException: CvType.CV_32SC2 != m.type() || m.cols()!=1
    Mat [ 0*0*CV_8UC1, isCont=False, isSubmat=False, nativeObj=0x2637091694368, dataAddr=0x0 ]
    OpenCVForUnity.UtilsModule.Converters.Mat_to_vector_Mat (OpenCVForUnity.CoreModule.Mat m, System.Collections.Generic.List`1[T] mats) (at Assets/OpenCVForUnity/org/opencv/utils/Converters.cs:336)
    OpenCVForUnity.UtilsModule.Converters.Mat_to_vector_vector_DMatch (OpenCVForUnity.CoreModule.Mat m, System.Collections.Generic.List`1[T] lvdm) (at Assets/OpenCVForUnity/org/opencv/utils/Converters.cs:900)
    OpenCVForUnity.Features2dModule.DescriptorMatcher.knnMatch (OpenCVForUnity.CoreModule.Mat queryDescriptors, OpenCVForUnity.CoreModule.Mat trainDescriptors, System.Collections.Generic.List`1[T] matches, System.Int32 k) (at Assets/OpenCVForUnity/org/opencv/features2d/DescriptorMatcher.cs:264)

The queryDescriptors inside getMatches() are the same as before: the ones from the current webcam frame. Supposedly the FLANNBASED matcher grabs the List<Mat> of descriptors I passed before training it, so what am I doing wrong?

Thank you !

EnoxSoftware commented 8 months ago

Hi,

We have too little specific information about the problem to start verifying it here. From the error message, it appears that the type and shape of the Mat passed as an argument are incorrect. Could you please show us a simple sample scene and code that reproduces the problem?

tv-gc commented 8 months ago

I am using the Markerless example as the base code and trying to change it to work with multiple images.

I am loading all the images and putting them in a List<Mat>:

    for (int i = 0; i < 8; ++i)
    {
        Mat p = Imgcodecs.imread(Application.persistentDataPath + "/photo_" + (i + 1) + ".jpg");

        // Imgcodecs.imread loads color images in BGR order, so convert with COLOR_BGR2GRAY.
        Imgproc.cvtColor(p, p, Imgproc.COLOR_BGR2GRAY);

        Texture2D patternTexture = new Texture2D(p.width(), p.height(), TextureFormat.RGBA32, false);

        // To reuse the mat, set the flip flag to true.
        Utils.matToTexture2D(p, patternTexture, true, 0, true);

        patternDetector.InitializePattern(p, pattern);

        patternMats.Add(p);
    }

The buildPatternFromImages() method was tweaked to work with the list:

    public bool buildPatternFromImages(List<Mat> images, Pattern pattern)
    {
        List<MatOfKeyPoint> keypoints = new List<MatOfKeyPoint>();
        List<Mat> descriptors = new List<Mat>();

        bool result = extractFeatures(images, keypoints, descriptors);

        pattern.keypointsList = keypoints;
        pattern.descriptorsList = descriptors;

        return result;
    }

extractFeatures() now returns a list of descriptors instead of a single descriptor:

    private bool extractFeatures(List<Mat> images, List<MatOfKeyPoint> keypoints, List<Mat> descriptors)
    {
        m_detector.detect(images, keypoints);
        m_extractor.compute(images, keypoints, descriptors);

        return true;
    }

Everything else is the same. If I keep the DescriptorMatcher as BRUTEFORCE_HAMMING, only the first image is detected. If I change it to FLANNBASED, it gives me the error. In fact, in the base code of the markerless example, simply changing the matcher type produces the same error, so it is probably unrelated to my changes.

EnoxSoftware commented 8 months ago

It is difficult to pinpoint the cause of a bug in the entire program from a piece of code alone. But at least I found the cause of the error that occurs when the DescriptorMatcher type is changed from BRUTEFORCE_HAMMING to FLANNBASED: the FLANNBASED DescriptorMatcher needs to be combined with SIFT descriptors instead of ORB descriptors. I have created an example of SIFT FLANN matching working correctly, so please try it.

SIFTFLANNMatchingExample.ZIP

SIFT FLANN Matching Example:

            Mat img1Mat = Imgcodecs.imread(Utils.getFilePath("OpenCVForUnity/features2d/box.png"), Imgcodecs.IMREAD_GRAYSCALE);
            Mat img2Mat = Imgcodecs.imread(Utils.getFilePath("OpenCVForUnity/features2d/box_in_scene.png"), Imgcodecs.IMREAD_GRAYSCALE);
            Mat img3Mat = img2Mat.clone();

            //-- Step 1: Detect the keypoints using SIFT Detector, compute the descriptors
            List<Mat> images = new List<Mat>();
            List<MatOfKeyPoint> keypoints = new List<MatOfKeyPoint>();
            List<Mat> descriptors = new List<Mat>();

            // Test the input processing of multiple images.
            images.Add(img1Mat);
            images.Add(img2Mat);
            images.Add(img3Mat);

            SIFT detector = SIFT.create();
            SIFT extractor = SIFT.create();
            detector.detect(images, keypoints);
            extractor.compute(images, keypoints, descriptors);

            // Select image, keypoints, and descriptor for matching process.
            Mat img1 = images[0];
            Mat img2 = images[2];
            MatOfKeyPoint keypoints1 = keypoints[0];
            MatOfKeyPoint keypoints2 = keypoints[2];
            Mat descriptors1 = descriptors[0];
            Mat descriptors2 = descriptors[2];

            //-- Step 2: Matching descriptor vectors with a FLANN based matcher
            // Since SIFT is a floating-point descriptor NORM_L2 is used
            DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.FLANNBASED);
            List<MatOfDMatch> knnMatches = new List<MatOfDMatch>();
            matcher.knnMatch(descriptors1, descriptors2, knnMatches, 2);

            //-- Filter matches using the Lowe's ratio test
            float ratioThresh = 0.7f;
            List<DMatch> listOfGoodMatches = new List<DMatch>();
            for (int i = 0; i < knnMatches.Count; i++)
            {
                if (knnMatches[i].rows() > 1)
                {
                    DMatch[] matches = knnMatches[i].toArray();
                    if (matches[0].distance < ratioThresh * matches[1].distance)
                    {
                        listOfGoodMatches.Add(matches[0]);
                    }
                }
            }
            MatOfDMatch goodMatches = new MatOfDMatch();
            goodMatches.fromList(listOfGoodMatches);

            //-- Draw matches
            Mat resultImg = new Mat();
            Features2d.drawMatches(img1, keypoints1, img2, keypoints2, goodMatches, resultImg);

            Texture2D texture = new Texture2D(resultImg.cols(), resultImg.rows(), TextureFormat.RGB24, false);
            Utils.matToTexture2D(resultImg, texture);
            gameObject.GetComponent<Renderer>().material.mainTexture = texture;  
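The ratio-test filtering in the loop above can be sketched minimally in plain Python (the helper name `ratio_test` and the toy distance pairs are illustrative, not part of the asset's API):

```python
# Minimal, dependency-free sketch of Lowe's ratio test used in the example above.
# Each entry of knn_matches holds the distances of a query keypoint's
# two nearest matches: (distance_best, distance_second_best).
def ratio_test(knn_matches, ratio_thresh=0.7):
    """Keep a match only if the best distance is clearly smaller than the
    second-best one; ambiguous matches are dropped."""
    good = []
    for pair in knn_matches:
        if len(pair) < 2:
            continue  # FLANN can return fewer than k neighbours for some queries
        best, second = pair
        if best < ratio_thresh * second:
            good.append(best)
    return good

# A distinctive match (10 vs 100) survives; an ambiguous one (50 vs 55) does not.
print(ratio_test([(10.0, 100.0), (50.0, 55.0), (3.0,)]))  # -> [10.0]
```

The same threshold of 0.7 as in the C# example is used; raising it admits more (and noisier) matches.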
tv-gc commented 8 months ago

"It appears that the FLANNBASED DescriptorMatcher needs to be combined with SIFT descriptors instead of ORB descriptors."

Oh, I see. Unfortunately SIFT is incredibly slow, and I must use ORB instead if I want to achieve anything useful in real time.

Would it be possible to fix this so ORB descriptors are compatible with the FLANNBASED DescriptorMatcher?

Cheers!

EnoxSoftware commented 8 months ago

The matching algorithms seem to have the following limitations.

    // Not all matching algorithms can be applied to all features. Some can be used and some cannot, as follows:
    //
    // BruteForce (BruteForce, BruteForce-SL2, BruteForce-L1): can be used for anything
    // BruteForce-Hamming: can be used when the features are represented in binary code (ORB, AKAZE, etc.)
    // FLANN: can be used when features are represented as real vectors (SIFT, SURF, etc.)

tv-gc commented 8 months ago

I see, very useful information, thanks!

While I am at it, what would you recommend if I wanted to compare one image against several at runtime to check for pose, the images being of an object rotated 360°?

cheers and thanks for the quick replies and extra info!

tv-gc commented 8 months ago

Hi!

I do believe there is another bug somewhere.

With ORB and the DescriptorMatcher set to BRUTEFORCE, matching only works with one image.

Regardless of how many descriptor Mats I add to knnMatch, it will only process the first one.

Would it be possible to fix this, or can you suggest a workaround?

Thanks in advance!

EnoxSoftware commented 8 months ago

Sorry, I cannot tell the details of the bug from your description. Could you please share script code that reproduces the problem?

I am not an expert on feature matching. To determine that there is a bug in the behavior of the descriptor matcher, I would need a comparison with the correct results output by an OpenCV program written in C++ or Python. Do you have the C++ or Python code you refer to?

tv-gc commented 8 months ago

I don't have the C++/Python code, but the one I am working on is the same as posted above.

The base code for the markerless example uses ORB, so you just need to add multiple images (like in the FLANN example you made). knnMatch will not find matches for anything after the first descriptor, whether I call it with the list of descriptors or loop and pass a single Mat per call:

    for (int i = 0; i < descriptorsList.Count; ++i)
    {
        Mat t = descriptorsList[i];
        List<DMatch> found = new List<DMatch>();
        List<MatOfDMatch> knn = new List<MatOfDMatch>();

        descriptorMatcher.knnMatch(queryDescriptors, t, knn, 2);
    }

or, via the matcher's trained collection:

    descriptorMatcher.knnMatch(queryDescriptors, knn, 2);

Neither one works against the current camera frame Mat after the first entry. PS: I know the images themselves are fine, because if I change the order of the set of images, each of them works in the first position.

Cheers!
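For context, the trained-collection overload knnMatch(queryDescriptors, matches, k) is documented in OpenCV to search every descriptor Mat previously passed to add(), with each returned DMatch carrying an imgIdx identifying which train image it came from. A minimal dependency-free Python sketch of that semantics (the TinyMatcher class and the integer "descriptors" are purely illustrative):

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

class TinyMatcher:
    """Toy stand-in for a descriptor matcher trained on several images."""
    def __init__(self):
        self.train_sets = []  # one list of descriptors per added image

    def add(self, descriptors):
        self.train_sets.append(descriptors)

    def knn_match(self, query_descriptors, k=2):
        matches = []
        for q in query_descriptors:
            # Search ALL train images, not just the first one.
            candidates = [(hamming(q, d), img_idx)
                          for img_idx, dset in enumerate(self.train_sets)
                          for d in dset]
            candidates.sort()
            matches.append(candidates[:k])  # (distance, imgIdx) pairs
        return matches

m = TinyMatcher()
m.add([0b0000, 0b0001])  # "image 0" descriptors
m.add([0b1111])          # "image 1" descriptors
# Query 0b1110 is closest (distance 1) to 0b1111, which lives in image 1.
best = m.knn_match([0b1110], k=2)[0][0]
print(best)  # -> (1, 1): distance 1, found in train image 1
```

If the real matcher only ever reports matches with imgIdx 0, that would point at the training collection not being populated (or the matcher not being trained) rather than at knnMatch itself.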

tv-gc commented 8 months ago

I do believe I found the problem. The photos I took were not being converted correctly to Mat. Instead, I used the same method as the markerless example to take the necessary photos, and now the descriptor matcher works with multiple images.