In the Jupyter example you provided (namely https://github.com/microsoft/LayoutGeneration/blob/main/LayoutPrompter/notebooks/content_aware.ipynb), you wrote:
raw_path = os.path.join(RAW_DATA_PATH(dataset), split, "saliencymaps_pfpn")
, which means that you only consider the map processed by PFPN when transforming the saliency map into a bounding box. But in the evaluation code provided by PosterLayout (namely https://github.com/PKU-ICST-MIPL/PosterLayout-CVPR2023/blob/main/eval.py), it's like:
, which means that they consider both the saliency map processed by PFPN and BasNet for evaluation.
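For concreteness, here is a toy sketch of what "considering both maps" could amount to. This is my own reading, not the actual eval.py code: the max-combination, the 0.5 threshold, and the occlusion formula are all assumptions, and the maps are assumed to be grayscale numpy arrays in [0, 1].

```python
import numpy as np

def combined_saliency(pfpn_map: np.ndarray, basnet_map: np.ndarray) -> np.ndarray:
    # Hypothetical combination: per-pixel maximum, so a pixel counts as
    # salient if either PFPN or BasNet marks it.
    return np.maximum(pfpn_map, basnet_map)

def occlusion_score(saliency: np.ndarray, boxes, thresh: float = 0.5) -> float:
    # Fraction of salient pixels covered by any layout box (lower is better).
    # boxes are (x0, y0, x1, y1) in pixel coordinates.
    salient = saliency >= thresh
    covered = np.zeros_like(salient)
    for x0, y0, x1, y1 in boxes:
        covered[y0:y1, x0:x1] = True
    total = salient.sum()
    return float((salient & covered).sum() / total) if total else 0.0
```

Under this reading, a layout that dodges the PFPN region but lands on BasNet-only saliency would still be penalized, since the combined mask includes both.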
In the results you provided, you achieved quite an impressive trade-off between the Utility score and the Occlusion score. My question is: since there is some difference between these two groups of masks, is it possible for your model to avoid overlapping the BasNet saliency region when it is only given the bounding box extracted from PFPN?
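To make the question concrete, here is a toy sketch of the quantity I'm asking about: how much BasNet saliency falls outside the PFPN-derived box, i.e. the region the model never sees if it is conditioned on the PFPN box alone. The function names, the tight-bbox extraction, and the 0.5 threshold are my own simplifications, not code from either repository.

```python
import numpy as np

def mask_to_bbox(mask: np.ndarray, thresh: float = 0.5):
    # Tightest axis-aligned box (x0, y0, x1, y1) around the thresholded
    # region -- a simplified stand-in for the notebook's saliency-to-bbox step.
    ys, xs = np.where(mask >= thresh)
    if len(xs) == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1

def basnet_outside_pfpn_bbox(pfpn_map: np.ndarray, basnet_map: np.ndarray,
                             thresh: float = 0.5) -> float:
    # Fraction of BasNet-salient pixels lying outside the PFPN bounding box.
    # 0.0 means PFPN's box fully covers BasNet saliency; 1.0 means none of it.
    box = mask_to_bbox(pfpn_map, thresh)
    salient = basnet_map >= thresh
    total = salient.sum()
    if total == 0:
        return 0.0
    inside = np.zeros_like(salient)
    if box is not None:
        x0, y0, x1, y1 = box
        inside[y0:y1, x0:x1] = True
    return float((salient & ~inside).sum() / total)
```

When this fraction is large, the model has no signal about that BasNet region, so avoiding it would seem to require luck rather than conditioning, which is the heart of my question.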