sloisel / numeric

Numerical analysis in Javascript
http://www.numericjs.com/

More example applications #11

Open sloisel opened 11 years ago

sloisel commented 11 years ago

Example applications: I have small toy demos in the workshop but if there are larger applications outside the workshop, it would be nice to link to them.

darcyparker commented 11 years ago

One idea to consider: an app for pairwise comparison using the Analytic Hierarchy Method. You could have a list of candidates (images, ideas, a todo list), have people compare each pair as better/worse or important/unimportant, and finally rank them using AHM (it's an eigenvalue problem) and test for consistency in the responses... (I think hot-or-not did this... but you could do something else too.) Let me know what you think of an idea like this to demo the math capabilities.

sloisel commented 11 years ago

I am not familiar with this but it would make a nice demo!

gregglind commented 11 years ago

@darcyparker, do you have a link for AHM? All I can find for references is http://en.wikipedia.org/wiki/Analytic_hierarchy_process , which seems to be a multistage model.

I don't see any work there on how to use rank-comparisons. What's the algorithm?

darcyparker commented 11 years ago

The AHM === AHP. Some notes I have:

Consider a vector of metrics. For a small vector of metrics, it is probably not hard to come up with some relative weights. For a larger set of metrics, it is helpful to use AHM/AHP to calculate the appropriate weights.

First create a comparison matrix M that holds the relative importance for each pairwise comparison of the metrics. The rows and columns represent the metrics, so position (i,j) compares metric i with metric j.

If the comparisons are consistent, then:

1. M[i][i] = 1 (the diagonal of the matrix is filled with ones)
2. M[i][j] = 1/M[j][i]
3. M[i][j] = M[i][k]*M[k][j] for any i, j, and k.
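For instance, a consistent 3-metric matrix might look like this (the numbers are purely illustrative):

```js
// Illustrative comparison matrix for metrics [A, B, C]:
// A is 2x as important as B, B is 3x as important as C, so A is 6x as
// important as C. All three consistency conditions above hold.
var M = [
  [1,     2,     6],
  [1 / 2, 1,     3],
  [1 / 6, 1 / 3, 1]
];
```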

The first 2 criteria are easy to impose when asking people to do the pairwise comparisons: have people fill out a cell in the upper or lower triangle of the matrix, and then compute the opposite side using criterion 2 (M[i][j] = 1/M[j][i]).
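A minimal sketch of how that construction could look in JavaScript (the input format for the upper-triangle judgments is just an assumption, not part of numeric.js):

```js
// Build the full reciprocal comparison matrix from upper-triangle judgments,
// so only constraint 3 (transitivity) is left to the respondent.
// `upper` is assumed to be an object like {"0,1": 2, "0,2": 6, "1,2": 3},
// meaning e.g. metric 0 is 2x as important as metric 1.
function buildComparisonMatrix(n, upper) {
  var M = numeric.identity(n);   // constraint 1: M[i][i] = 1
  for (var i = 0; i < n; i++) {
    for (var j = i + 1; j < n; j++) {
      var a = upper[i + "," + j];
      M[i][j] = a;
      M[j][i] = 1 / a;           // constraint 2: M[j][i] = 1/M[i][j]
    }
  }
  return M;
}
```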

The third condition is a harder constraint to impose on the people giving the pairwise comparisons. Usually people will give close-to-consistent pairwise comparisons, but they can be off. For example, with 3 metrics someone may say metric A is better than B, B is better than C, and A is better than C; that is a consistent set of comparisons. But they may also say A is better than B, B is better than C, and C is better than A. This is inconsistent (an extreme case, for illustration): how can C be better than A if A is better than B and B is better than C? But inconsistencies happen at times. You may remember the website Hot-or-Not? I believe they were doing a similar numerical analysis. When comparing two people side by side, it is not hard to imagine how people can sometimes be inconsistent in how they rate/compare people 2 at a time.

After asking people to do the pairwise comparisons while imposing the first 2 constraints, you can do a quick test to measure how consistent the responses are. (I will describe this at the end.)

The relative weights for each metric can be calculated from the eigenvector of the maximum eigenvalue of the pairwise comparison matrix. If the matrix is consistent, then the following is true of its eigenvalues:

1. They are all real numbers.
2. All are zero except for one eigenvalue, and that eigenvalue equals n, the number of metrics.
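A quick way to see this with numeric.js (assuming numeric.eig(M) returns the eigenvalues in lambda.x, with imaginary parts in lambda.y):

```js
// Sanity check: a perfectly consistent matrix M[i][j] = w[i]/w[j] is rank one
// with trace n, so its eigenvalues are n, 0, ..., 0.
var w = [0.5, 0.3, 0.2];              // illustrative "true" weights
var n = w.length;
var M = [];
for (var i = 0; i < n; i++) {
  M.push([]);
  for (var j = 0; j < n; j++) { M[i].push(w[i] / w[j]); }
}
console.log(numeric.eig(M).lambda.x); // expect approximately [3, 0, 0] (in some order)
```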

You then want the eigenvector of the non-zero eigenvalue. Identifying this eigenvalue is trickier if the matrix is not perfectly consistent: instead of exact zeros you will get some complex eigenvalues. But if the inconsistency is small, the absolute values of all but one of the eigenvalues will still be close to zero. Here's what I do: find the eigenvalue of maximum magnitude, then calculate its eigenvector. That eigenvector should be normalized so its values sum to 1.0 (divide the eigenvector by the sum of its components, not by its magnitude). The values of this normalized eigenvector are the relative weights of each metric. You can then weigh each metric against this weighting vector to get a single 'score' or rating.
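Here is a sketch of that weight extraction with numeric.js. It assumes numeric.eig(M) returns {lambda, E} with eigenvalues in lambda.x/lambda.y and eigenvectors as the columns of E; the name ahpWeights is just mine:

```js
// Extract AHP weights: largest-magnitude eigenvalue, its eigenvector,
// normalized so the components sum to 1.
function ahpWeights(M) {
  var n = M.length;
  var eig = numeric.eig(M);
  var re = eig.lambda.x;
  var im = eig.lambda.y || numeric.rep([n], 0);

  // Pick the eigenvalue of largest magnitude (the principal eigenvalue).
  var kMax = 0, best = -Infinity;
  for (var k = 0; k < n; k++) {
    var mag = Math.sqrt(re[k] * re[k] + im[k] * im[k]);
    if (mag > best) { best = mag; kMax = k; }
  }

  // Take the real part of the corresponding eigenvector (column kMax of E)
  // and normalize it so its components sum to 1. Dividing by the sum also
  // fixes an overall sign flip if the solver returned -v instead of v.
  var v = [], sum = 0;
  for (var i = 0; i < n; i++) { v.push(eig.E.x[i][kMax]); sum += v[i]; }
  var weights = v.map(function (vi) { return vi / sum; });

  return { weights: weights, lambdaMax: re[kMax] };
}

// e.g. ahpWeights([[1, 2, 6], [1/2, 1, 3], [1/6, 1/3, 1]]).weights
//      -> approximately [0.6, 0.3, 0.1]
```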

To see how consistent the responses were, you can calculate a consistency index. A popular formula is CI = (lambdaMax - n) / (n - 1), where lambdaMax is the largest eigenvalue and n is the number of metrics in the initial vector (the comparison matrix M is n x n).
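In code, reusing lambdaMax from the ahpWeights sketch above:

```js
// Consistency index CI = (lambdaMax - n) / (n - 1).
function consistencyIndex(lambdaMax, n) {
  return (lambdaMax - n) / (n - 1);
}

// e.g. var r = ahpWeights(M);
//      var CI = consistencyIndex(r.lambdaMax, M.length);
```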

Then you can compare this consistency index to the expected consistency index of a randomly generated comparison matrix. Your index, relative to the expected value over a large set of random matrices, should be small; i.e. ConsistencyIndex(tested) / ExpectedConsistencyIndexForRandomComparisonMatrix should be less than about 0.1, or some other small number.
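One way to estimate that expected value is a small Monte Carlo loop over random reciprocal matrices. This sketch reuses the ahpWeights and consistencyIndex helpers above; drawing the random entries from a 1–9 scale is just an assumption (in practice, tabulated "random index" values are often used instead):

```js
// Estimate the expected consistency index of a random reciprocal matrix,
// then form the ratio CI(tested) / E[CI(random)] described above.
function randomIndex(n, samples) {
  var scale = [1, 2, 3, 4, 5, 6, 7, 8, 9];
  var total = 0;
  for (var s = 0; s < samples; s++) {
    var M = numeric.identity(n);                // diagonal of ones
    for (var i = 0; i < n; i++) {
      for (var j = i + 1; j < n; j++) {
        var a = scale[Math.floor(Math.random() * scale.length)];
        if (Math.random() < 0.5) { a = 1 / a; } // randomly pick which metric "wins"
        M[i][j] = a;
        M[j][i] = 1 / a;                        // reciprocal constraint
      }
    }
    total += consistencyIndex(ahpWeights(M).lambdaMax, n);
  }
  return total / samples;
}

// var CR = consistencyIndex(r.lambdaMax, n) / randomIndex(n, 500);
// A ratio below roughly 0.1 is usually treated as acceptably consistent.
```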

Perfect consistency would yield an index of 0, or equivalently all eigenvalues equal to 0 except one. Or of course you could test directly whether the 3rd constraint is satisfied (M[i][j] = M[i][k]*M[k][j] for every i, j, and k combination).
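A direct check of that 3rd constraint, within a small numerical tolerance (a sketch, not part of the library):

```js
// Returns true if M satisfies M[i][j] ~= M[i][k]*M[k][j] for all i, j, k.
function isTransitive(M, tol) {
  var n = M.length;
  tol = tol || 1e-9;
  for (var i = 0; i < n; i++) {
    for (var j = 0; j < n; j++) {
      for (var k = 0; k < n; k++) {
        if (Math.abs(M[i][j] - M[i][k] * M[k][j]) > tol) { return false; }
      }
    }
  }
  return true;
}
```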