ShunChengWu / SceneGraphFusion

BSD 2-Clause "Simplified" License
160 stars · 26 forks

Question on Evaluation Metrics #28

Closed · stevenAce13 closed this issue 1 year ago

stevenAce13 commented 1 year ago

Thanks for your work and time.

I would just like a quick confirmation on the three R@k evaluation metrics (Object Class Prediction, Predicate Prediction, and Relationship Prediction) used in your work [1] and in 3DSSG [2]. Are they aligned with the popular evaluation metrics used in image-based visual relationship detection (such as PredCls, PhrDet, SGCls, SGGen, etc.)? Please see my thoughts below; your comments are warmly welcomed!

More specifically, given detected objects (obtained via a class-agnostic instance mask, I suppose), we compute:

1) Object Class Prediction R@k: Is that actually top-k accuracy (e.g., the Top-K Accuracy used in image classification)?
2) Predicate Prediction R@k: Is this PredCls? Or is this another top-k accuracy, computed by enumerating all possible 8 or 26 predicate classes between a specific object pair?
3) Relationship Prediction R@k: Is this SGCls? Or is this another top-k score, obtained by combining (1) and (2) via triplet multiplication?
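To make the triplet-multiplication idea in (3) concrete, here is a minimal sketch of what I mean. All scores below are made up, and this reflects my understanding of the procedure, not necessarily the paper's actual implementation:

```python
import numpy as np

# Hypothetical class/predicate probabilities for ONE (subject, object) pair.
subj_scores = np.array([0.7, 0.2, 0.1])   # subject class probabilities
obj_scores  = np.array([0.1, 0.8, 0.1])   # object class probabilities
pred_scores = np.array([0.6, 0.3, 0.1])   # predicate probabilities

# Triplet score = product of the three component scores,
# giving a (n_subj_classes, n_predicates, n_obj_classes) tensor.
triplet = (subj_scores[:, None, None]
           * pred_scores[None, :, None]
           * obj_scores[None, None, :])

# Rank all (subject_class, predicate, object_class) triplets by score;
# R@k would then check whether the ground-truth triplet is in the top k.
flat_order = np.argsort(-triplet, axis=None)
ranked = [np.unravel_index(i, triplet.shape) for i in flat_order]
print(ranked[0])  # highest-scoring triplet: (0, 0, 1)
```

The top-ranked triplet here is (subject class 0, predicate 0, object class 1), since 0.7 * 0.6 * 0.8 is the largest product.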

[1] SceneGraphFusion: Incremental 3D Scene Graph Prediction from RGB-D Sequences
[2] Learning 3D Semantic Scene Graphs from 3D Indoor Reconstructions

ShunChengWu commented 1 year ago

We followed the evaluation method in 3DSSG. We calculate the number of correct top-k predictions over the total number of predictions; the output value is the top-k precision.
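A minimal sketch of that computation, assuming per-item class scores and ground-truth labels (variable names and the toy data are illustrative, not taken from the repository):

```python
import numpy as np

def topk_metric(scores, gt_labels, k):
    """Fraction of items whose ground-truth label appears among the
    k highest-scoring classes, i.e. correct top-k predictions over
    the total number of predictions."""
    scores = np.asarray(scores)
    gt_labels = np.asarray(gt_labels)
    # Indices of the k highest-scoring classes for each item.
    topk = np.argsort(-scores, axis=1)[:, :k]
    # An item counts as a hit if its true label is in its top-k set.
    hits = (topk == gt_labels[:, None]).any(axis=1)
    return hits.mean()

# Toy example: 3 items, 4 classes.
scores = [[0.1, 0.6, 0.2, 0.1],
          [0.5, 0.1, 0.3, 0.1],
          [0.2, 0.2, 0.5, 0.1]]
gt = [1, 2, 0]
print(topk_metric(scores, gt, 1))  # 1/3: only item 0 is correct at k=1
print(topk_metric(scores, gt, 2))  # 1.0: all true labels are in the top 2
```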