This pull request fixes the table-counting metric in three cases:
False Negatives: when a table exists in the ground truth but none of the predicted tables matches it, the table should count as 0 and the file should not be skipped entirely (previously the result was np.NaN).
False Positives: when a predicted table does not match any ground-truth table, it should be counted as 0; currently it is skipped during processing (matched_indices == -1).
The file should be skipped entirely only if there are no tables in either the ground truth or the prediction.
In short, the previous metric calculation did not account for OD (object detection) mistakes.
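The corrected counting described above can be sketched as follows. This is a minimal illustration, not the PR's actual implementation; the function name `table_scores` and its inputs (per-prediction `matched_indices` pointing at ground-truth tables, with -1 meaning unmatched, and the corresponding `match_scores`) are assumptions for the example.

```python
import numpy as np

def table_scores(gt_tables, pred_tables, matched_indices, match_scores):
    """Hypothetical sketch of the corrected per-file table metric.

    matched_indices[i] is the ground-truth index matched to prediction i
    (-1 if the prediction matched nothing); match_scores[i] is the score
    of that matched pair.
    """
    # Skip the file only when BOTH ground truth and prediction are empty.
    if not gt_tables and not pred_tables:
        return np.nan

    scores = []
    matched_gt = set()
    for i, gt_idx in enumerate(matched_indices):
        if gt_idx == -1:
            scores.append(0.0)  # false positive: count as 0 instead of skipping
        else:
            scores.append(match_scores[i])
            matched_gt.add(gt_idx)

    # False negatives: ground-truth tables no prediction matched count as 0
    # (previously such files were dropped as np.NaN).
    scores.extend(0.0 for j in range(len(gt_tables)) if j not in matched_gt)

    return float(np.mean(scores))
```

For example, one ground-truth table matched perfectly plus one missed ground-truth table yields 0.5 rather than a skipped file.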