jzbontar / mc-cnn

Stereo Matching by Training a Convolutional Neural Network to Compare Image Patches
BSD 2-Clause "Simplified" License

Different metrics to evaluate the dataset #62

Open NamburiSrinath opened 5 years ago

NamburiSrinath commented 5 years ago

Hi jzbontar,

We have a set of stereo images that we ran through the provided .lua code, and we obtained the disparity maps. From visualization we can tell whether the results look good or not, but are there any metrics that can quantitatively evaluate the results on the dataset?

For example:

"For object detection/classification, we have metrics such as accuracy, precision, recall, F1-score, IoU, mAP, confusion matrix, etc. What are the metrics for stereo? Are they implemented in your code, and if so, how do we use them?"

Thank you,
Srinath