benhamner / Metrics

Machine learning evaluation metrics, implemented in Python, R, Haskell, and MATLAB / Octave

Note: the current releases of this toolbox are beta releases, intended for testing the Haskell, Python, and R code repositories.


Metrics provides implementations of various supervised machine learning evaluation metrics in the following languages:

- Python
- R
- Haskell
- MATLAB / Octave

For more detailed installation instructions, see the README for each implementation.

EVALUATION METRICS

- Absolute Error (AE)
- Average Precision at K (APK, AP@K)
- Area Under the ROC (AUC)
- Classification Error (CE)
- F1 Score (F1)
- Gini
- Levenshtein
- Log Loss (LL)
- Mean Log Loss (LogLoss)
- Mean Absolute Error (MAE)
- Mean Average Precision at K (MAPK, MAP@K)
- Mean Quadratic Weighted Kappa
- Mean Squared Error (MSE)
- Mean Squared Log Error (MSLE)
- Normalized Gini
- Quadratic Weighted Kappa
- Relative Absolute Error (RAE)
- Root Mean Squared Error (RMSE)
- Relative Squared Error (RSE)
- Root Relative Squared Error (RRSE)
- Root Mean Squared Log Error (RMSLE)
- Squared Error (SE)
- Squared Log Error (SLE)
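As a sketch of how a few of the metrics above are typically computed, the Python below implements SE, MSE, RMSE, and AP@K from their standard definitions. The function names and signatures are illustrative assumptions, not necessarily the toolbox's actual API; see each implementation's README for the real interface.

```python
import math

def squared_error(actual, predicted):
    # Squared Error (SE) for a single prediction
    return (actual - predicted) ** 2

def mse(actual, predicted):
    # Mean Squared Error (MSE): average of per-element squared errors
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    # Root Mean Squared Error (RMSE): square root of the MSE
    return math.sqrt(mse(actual, predicted))

def apk(actual, predicted, k=10):
    # Average Precision at K (AP@K): average of the precision values at
    # each rank (within the top k) where a relevant item is retrieved.
    predicted = predicted[:k]
    score, hits = 0.0, 0
    for i, p in enumerate(predicted):
        if p in actual and p not in predicted[:i]:
            hits += 1
            score += hits / (i + 1)
    if not actual:
        return 0.0
    return score / min(len(actual), k)
```

MAP@K is then simply the mean of `apk` taken over a collection of queries, which is how the list-wise ranking metrics above build on their per-query counterparts.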

TO IMPLEMENT

HIGHER LEVEL TRANSFORMATIONS TO HANDLE

PROPERTIES METRICS CAN HAVE

(Nonexhaustive and to be added in the future)