microsoft / table-transformer

Table Transformer (TATR) is a deep learning model for extracting tables from unstructured documents (PDFs and images). This is also the official repository for the PubTables-1M dataset and GriTS evaluation metric.
MIT License

Question regarding postprocessing #159

Open NielsRogge opened 6 months ago

NielsRogge commented 6 months ago

Hi,

I have a question regarding postprocessing the logits of TATR models. In the inference script, one takes a softmax over the last dimension to get probabilities over all classes, then takes the max indices/values as seen here. Finally, one keeps only the predictions whose class is not "no object" as seen here.

However, in the PostProcess method (which I assume is used during evaluation), one applies a softmax over the last dimension, followed by removing the probabilities of the "no object" class before taking the max indices/values as seen here.
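To make the two code paths concrete, here is a minimal pure-Python sketch (function names and shapes are illustrative, not the actual TATR API; the last logit is assumed to be the "no object" class, as in DETR-style models):

```python
import math

def softmax(logits):
    # Numerically stable softmax over one query's class logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def postprocess_inference(all_logits):
    # Convention used in the inference script: argmax over ALL classes
    # (including "no object"), then drop queries whose argmax is "no object".
    kept = []
    for logits in all_logits:
        probs = softmax(logits)
        score = max(probs)
        label = probs.index(score)
        if label != len(probs) - 1:  # last class is "no object"
            kept.append((label, score))
    return kept

def postprocess_eval(all_logits):
    # Convention used in PostProcess: strip the "no object" probability
    # first, then take the max; every query keeps a prediction.
    kept = []
    for logits in all_logits:
        probs = softmax(logits)[:-1]  # drop "no object"
        score = max(probs)
        label = probs.index(score)
        kept.append((label, score))
    return kept
```

For example, with a single query whose logits are `[2.0, 0.5, 3.0]` (last entry "no object"), the first convention discards the query entirely, while the second keeps it as class 0 with a score below 0.5.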

Is this intended? Any clarification would be greatly appreciated.

bsmock commented 6 months ago

Great observation; I see the discrepancy you are referring to.

The key question is: what do we do when "no-object" is the most probable class prediction? Do we suppress/filter out such outputs, or do we treat them as predictions for the second most probable class, which is the most probable class that isn't "no-object"?

I'll start by saying that, to me, there is no obvious answer as to which convention is the "right" one. So the choice of convention may end up being an empirical/practical question.

The COCO evaluation convention appears to follow the second approach: keep all predictions and assign each one the most probable object class label that was produced. Note that the assigned score will always be less than or equal to 0.5 whenever "no-object" is actually the most probable class predicted. We use the code created by the original DETR authors to follow this convention for computing COCO metrics.
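The ≤ 0.5 bound follows from the probabilities summing to 1: if the "no-object" probability beats every object-class probability p, then p < p_no_object and p + p_no_object ≤ 1, hence p < 0.5. A small brute-force sanity check (illustrative, not from the repo):

```python
import math
import random

def softmax(logits):
    # Numerically stable softmax.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Whenever the last ("no object") class wins the argmax, the best
# remaining object-class probability is beaten by p_no_object, and
# since all probabilities sum to 1, it must be below 0.5.
random.seed(0)
violations = 0
for _ in range(10_000):
    probs = softmax([random.uniform(-5.0, 5.0) for _ in range(4)])
    if probs[-1] == max(probs):
        if max(probs[:-1]) > 0.5:
            violations += 1

assert violations == 0
```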

In our inference code, we follow the other convention: always assign the most probable class to the prediction, even if it is "no-object". As a result, we can filter out predictions labeled "no-object" rather than treat them as low-probability object class predictions. This is intentional, but we did not empirically study what difference, if any, it makes in practice relative to the other convention.

Do you have a preference or an argument for always favoring one convention over the other?

Best, Brandon