Nora-Zhang98 / VTSCN

The official code for "Bridging Visual and Textual Semantics: Towards Consistency for Unbiased Scene Graph Generation" (TPAMI 2024).
https://ieeexplore.ieee.org/abstract/document/10502321
MIT License

Bridging Visual and Textual Semantics: Towards Consistency for Unbiased Scene Graph Generation

This is the official implementation of the paper "Bridging Visual and Textual Semantics: Towards Consistency for Unbiased Scene Graph Generation".

Our code is built on top of SHA-GCL; we sincerely thank the authors for their well-designed codebase. You can refer to this link to build the basic environment and prepare the dataset.

The Pocket package is also required; please refer to this link for the necessary packages.

For the weighted predicate embeddings, the VG version can be downloaded from this link, and the GQA version can be downloaded from this link.
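After downloading, the embedding file can be loaded and sanity-checked before training. The sketch below is a minimal, hedged example: it assumes the file is a PyTorch tensor of shape `[num_predicates, embedding_dim]` saved with `torch.save` (the function name and file path here are illustrative, not part of the released code).

```python
import torch


def load_predicate_embedding(path: str) -> torch.Tensor:
    """Load a weighted predicate embedding file and check its shape.

    Assumption: the file stores a single 2-D tensor of shape
    [num_predicates, embedding_dim] (e.g. 51 predicate classes
    including background for the VG split).
    """
    emb = torch.load(path, map_location="cpu")
    if emb.dim() != 2:
        raise ValueError(f"expected a 2-D tensor, got shape {tuple(emb.shape)}")
    return emb
```

The loaded tensor can then be plugged into a model, e.g. via `torch.nn.Embedding.from_pretrained(emb)`, optionally with `freeze=True` if the embedding should stay fixed during training.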

Important

Our code implements VTSCN on top of the MOTIFS, VCTree, and VTransE baselines.