MorganGrundy / MosaicMagnifique

Application for generating Photomosaics. Turn your images into beautiful Photomosaics.
https://morgangrundy.github.io
GNU General Public License v3.0

[REQUEST] Feature Matching for cell best fit #8

Closed MorganGrundy closed 4 years ago

MorganGrundy commented 4 years ago

Is your feature request related to a problem? Please describe. With larger cells, while the colour is mostly preserved, the shape is lost. As cell size increases, the original image quickly becomes lost in the Photomosaic.

Describe the solution you'd like Currently a cell's best fit is chosen purely by colour difference; it should additionally consider the features of the images. Some form of feature detection + matching could be used to determine how similar the cells are.

MorganGrundy commented 4 years ago

I have been experimenting with multiple possible methods, but so far without much success.

  1. First tried using the ORB feature detector. Problems:

    • Lots of cells ended up with no features detected. While this was expected for some cells, many others were not.
    • The features detected by ORB were rather limited for this purpose; lots of valuable spatial data was ignored.
  2. To try to remedy the previous problems I decided to extract shape from the images and compare that instead. I used a Canny edge detector and then extracted contours from the edge images using cv::findContours(). The cell contours were sorted by decreasing length, and then a greedy algorithm was used to match each with the most similar library contour (using cv::matchShapes() as the similarity metric). Problems:

    • Similar to 1, lots of cells had no contours detected.
    • cv::matchShapes() returns DBL_MAX in some cases; in my tests this seemed to happen the large majority of the time, which is not very useful.
    • Contours that are not matched need some penalty metric. It is difficult to choose a value for this considering the return value of cv::matchShapes() is somewhat unclear.
    • cv::matchShapes() is rotation and scale invariant, which I didn't realise until after I tried it. I am also not sure it cares about position.
  3. Sticking to the idea of comparing contours, but avoiding cv::matchShapes(). Possible comparison methods:

    • Fréchet Distance
    • Dynamic Time Warping

    I went with Dynamic Time Warping with a Euclidean distance formula. Problems:

    • Obviously still the problem of cells with no contours detected.
    • Still need some penalty metric for unmatched contours, although it is much easier to choose here.
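The DTW comparison in 3 can be sketched in plain C++. This is a minimal sketch, not the project's code: contours are represented as simple point sequences rather than OpenCV types, and `dtwDistance` is my own name.

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <utility>
#include <vector>

using Point = std::pair<double, double>;
using Contour = std::vector<Point>;

// Euclidean distance between two contour points.
static double pointDist(const Point &a, const Point &b)
{
    const double dx = a.first - b.first, dy = a.second - b.second;
    return std::sqrt(dx * dx + dy * dy);
}

// Classic O(n*m) Dynamic Time Warping cost between two contours,
// using Euclidean distance as the per-point cost.
double dtwDistance(const Contour &a, const Contour &b)
{
    const std::size_t n = a.size(), m = b.size();
    const double inf = std::numeric_limits<double>::infinity();
    // cost[i][j] = minimal warped cost of matching a[0..i) with b[0..j).
    std::vector<std::vector<double>> cost(n + 1, std::vector<double>(m + 1, inf));
    cost[0][0] = 0.0;
    for (std::size_t i = 1; i <= n; ++i)
        for (std::size_t j = 1; j <= m; ++j)
        {
            const double d = pointDist(a[i - 1], b[j - 1]);
            cost[i][j] = d + std::min({cost[i - 1][j],      // skip point in a
                                       cost[i][j - 1],      // skip point in b
                                       cost[i - 1][j - 1]}); // match both
        }
    return cost[n][m];
}
```

Unlike cv::matchShapes(), this is not rotation, scale, or position invariant, which is closer to what is wanted here.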

Still, the results were not very good. In hindsight, the greedy algorithm for contour matching was probably a poor choice. I want to revisit this and instead allow multiple matches per contour.
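For reference, the greedy matching scheme (and the "multiple matches per contour" variant) might look something like the following, assuming a precomputed rectangular similarity matrix where a lower score means more similar. `greedyMatch` and `allowReuse` are hypothetical names of my own, not project code.

```cpp
#include <limits>
#include <vector>

// Greedily match each cell contour (rows of `score`, assumed to be
// already sorted by decreasing length) to a library contour (columns).
// With allowReuse == false each library contour is used at most once,
// as in the original greedy scheme; with allowReuse == true multiple
// cell contours may share the same library contour.
// Returns, per cell contour, the matched library index or -1 if none.
std::vector<int> greedyMatch(const std::vector<std::vector<double>> &score,
                             bool allowReuse)
{
    const std::size_t cells = score.size();
    std::vector<int> match(cells, -1);
    std::vector<bool> used(score.empty() ? 0 : score[0].size(), false);
    for (std::size_t i = 0; i < cells; ++i)
    {
        double best = std::numeric_limits<double>::infinity();
        for (std::size_t j = 0; j < score[i].size(); ++j)
        {
            if (!allowReuse && used[j])
                continue;
            if (score[i][j] < best)
            {
                best = score[i][j];
                match[i] = static_cast<int>(j);
            }
        }
        if (match[i] >= 0)
            used[static_cast<std::size_t>(match[i])] = true;
    }
    return match;
}
```

With one-to-one matching a cell contour can be forced onto a poor partner once the good ones are taken; allowing reuse avoids that at the cost of some library contours dominating.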

MorganGrundy commented 4 years ago
  1. While doing research I learnt about frequency space and the DFT (Discrete Fourier Transform). I decided to try converting images to frequency space and then comparing their magnitude spectra. I didn't expect much from this but tried it anyway. The results were not very good.
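A minimal sketch of the magnitude-spectrum comparison, using a naive O(N⁴) DFT for the sake of being self-contained (in practice cv::dft would be used); all names here are my own and it is only practical for tiny cells:

```cpp
#include <cmath>
#include <complex>
#include <vector>

using Image = std::vector<std::vector<double>>;

// Naive 2D DFT magnitude spectrum of a greyscale image.
Image magnitudeSpectrum(const Image &img)
{
    const std::size_t rows = img.size(), cols = img[0].size();
    const double pi = std::acos(-1.0);
    Image mag(rows, std::vector<double>(cols, 0.0));
    for (std::size_t u = 0; u < rows; ++u)
        for (std::size_t v = 0; v < cols; ++v)
        {
            std::complex<double> sum(0.0, 0.0);
            for (std::size_t x = 0; x < rows; ++x)
                for (std::size_t y = 0; y < cols; ++y)
                {
                    const double angle = -2.0 * pi * (double(u * x) / rows +
                                                      double(v * y) / cols);
                    sum += img[x][y] *
                           std::complex<double>(std::cos(angle), std::sin(angle));
                }
            mag[u][v] = std::abs(sum);
        }
    return mag;
}

// Compare two images by the sum of absolute differences of their spectra.
double spectrumDiff(const Image &a, const Image &b)
{
    const Image ma = magnitudeSpectrum(a), mb = magnitudeSpectrum(b);
    double diff = 0.0;
    for (std::size_t i = 0; i < ma.size(); ++i)
        for (std::size_t j = 0; j < ma[i].size(); ++j)
            diff += std::abs(ma[i][j] - mb[i][j]);
    return diff;
}
```

One likely reason the results were poor: the magnitude spectrum discards phase, so translated (and some rearranged) images produce identical spectra.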
MorganGrundy commented 4 years ago

More research; I tried to find some information on Robert Silvers' method. I found code in the patent description, although I had a hard time reading it as all the newline characters seem to be missing. From his code and the patent description it seems his method is somewhat similar to mine. Honestly, I was expecting more, as I had seen his Photomosaics referred to as preserving colour, shape, texture, and other qualities. While my Photomosaics do preserve shape to some extent, I find that it is quickly lost with larger cell sizes.

I will continue researching and experimenting to find some method for this. Though I now expect the impact of the method to be less noticeable than I was originally hoping for.

MorganGrundy commented 4 years ago
  1. I used a Canny edge detector to get the image edge maps, then inverted them so the edges are black instead of white. I then used a distance transform to get an image where each pixel value is the distance to the closest edge. The sum of absolute differences between the cell and library distance transforms was used as the metric. Analysis:
    • With large cells, you can see that some of the images have a similar shape to the original image. However there is not enough shape preserved to see the image as a whole.
    • With smaller cells, there does not appear to be any significant shape preservation.
    • I tried to integrate this method with the colour difference; the result was that the good qualities of both methods were lost.
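A minimal sketch of the distance-transform metric in 1, using a brute-force transform in place of cv::distanceTransform so it is self-contained (types and names are my own, not project code):

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

using EdgeMap = std::vector<std::vector<bool>>;  // true = edge pixel
using DistMap = std::vector<std::vector<double>>;

// Brute-force distance transform: each pixel gets the Euclidean
// distance to the nearest edge pixel. O(N^2) over pixels; OpenCV's
// cv::distanceTransform does this far more efficiently. If the map
// contains no edges, every distance stays at infinity.
DistMap distanceTransform(const EdgeMap &edges)
{
    const std::size_t rows = edges.size(), cols = edges[0].size();
    DistMap dist(rows, std::vector<double>(cols,
                 std::numeric_limits<double>::infinity()));
    for (std::size_t y = 0; y < rows; ++y)
        for (std::size_t x = 0; x < cols; ++x)
            for (std::size_t ey = 0; ey < rows; ++ey)
                for (std::size_t ex = 0; ex < cols; ++ex)
                    if (edges[ey][ex])
                    {
                        const double dy = double(y) - double(ey);
                        const double dx = double(x) - double(ex);
                        dist[y][x] = std::min(dist[y][x],
                                              std::sqrt(dx * dx + dy * dy));
                    }
    return dist;
}

// Metric: sum of absolute differences between two distance maps
// of the same size (lower = more similar shape).
double sadMetric(const DistMap &a, const DistMap &b)
{
    double sum = 0.0;
    for (std::size_t y = 0; y < a.size(); ++y)
        for (std::size_t x = 0; x < a[y].size(); ++x)
            sum += std::abs(a[y][x] - b[y][x]);
    return sum;
}
```

Because every pixel carries a distance rather than just an on/off edge value, slightly misaligned edges are penalised smoothly instead of being treated as a total mismatch.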

This method is far from perfect but has so far provided the best results. Considering that, I do not think that feature/shape matching will be very viable, at least under current conditions. To be viable I believe some of the following would be needed:

All of which come with their own problems. So I will close this issue for now, possibly revisiting it in the future.