
[ICLR 2023] Multimodal Analogical Reasoning over Knowledge Graphs
https://zjunlp.github.io/project/MKG_Analogy/
MIT License

MKG_Analogy

Code and datasets for the ICLR2023 paper "Multimodal Analogical Reasoning over Knowledge Graphs"

Overview

In this work, we propose a new task of multimodal analogical reasoning over knowledge graphs. An overview of the Multimodal Analogical Reasoning task is shown below:

We provide a knowledge graph to support the task and further divide it into single and blended patterns. Note that the relations marked by dashed arrows ($\dashrightarrow$) and the text in parentheses under the images are only for annotation and are not provided in the input.

Requirements

pip install -r requirements.txt

Data Collection and Preprocessing

To support the multimodal analogical reasoning task, we collect a multimodal knowledge graph dataset, MarKG, and a Multimodal Analogical ReaSoning dataset, MARS. A visual outline of the data collection process is shown in the following figure:

We collect the datasets in the following steps:

  1. Collect Analogy Entities and Relations
  2. Link to Wikidata and Retrieve Neighbors
  3. Acquire and Validate Images
  4. Sample Analogical Reasoning Data
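Step 4 can be sketched as pairing triples that share a relation, where the shared relation is hidden at inference time. This is a minimal illustration with toy triples, not the actual MARS sampling code or schema:

```python
from collections import defaultdict
from itertools import permutations

# Toy (head, relation, tail) triples; in MarKG these come from Wikidata.
triples = [
    ("bee", "produces", "honey"),
    ("cow", "produces", "milk"),
    ("hen", "produces", "egg"),
    ("paris", "capital_of", "france"),
]

def sample_analogies(triples):
    """Pair triples sharing a relation into (example, question, answer) instances."""
    by_rel = defaultdict(list)
    for h, r, t in triples:
        by_rel[r].append((h, t))
    instances = []
    for r, pairs in by_rel.items():
        # Each ordered pair of triples with the same relation yields one instance;
        # the relation itself is dropped, so the model must infer it implicitly.
        for (h1, t1), (h2, t2) in permutations(pairs, 2):
            instances.append({"example": (h1, t1), "question": h2, "answer": t2})
    return instances

print(len(sample_analogies(triples)))  # -> 6 (all from the "produces" relation)
```

Relations with only one linked pair (like `capital_of` above) contribute no instances, which is why neighbor retrieval in step 2 matters for coverage.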

The statistics of the two datasets are shown in the following figures:

We put the text data under MarT/dataset/. The image data can be downloaded from Google Drive or Baidu Pan (TeraBox) (code: 7hoc) and should be placed under MarT/dataset/MARS/images. Please refer to MarT for details.

The expected structure of files is:

MKG_Analogy
 |-- M-KGE  # multimodal knowledge representation methods
 |    |-- IKRL_TransAE   
 |    |-- RSME
 |-- MarT
 |    |-- data          # data process functions
 |    |-- dataset
 |    |    |-- MarKG    # knowledge graph data
 |    |    |-- MARS     # analogical reasoning data
 |    |-- lit_models    # pytorch_lightning models
 |    |-- models        # source code of models
 |    |-- scripts       # running scripts
 |    |-- tools         # tool functions
 |    |-- main.py       # main function
 |-- resources   # image resources
 |-- requirements.txt
 |-- README.md

Evaluate on Benchmark Methods

We select some baseline methods to establish the initial benchmark results on MARS, including multimodal knowledge representation methods (IKRL, TransAE, RSME), pre-trained vision-language models (VisualBERT, ViLBERT, ViLT, FLAVA) and a multimodal knowledge graph completion method (MKGformer).

In addition, we follow structure-mapping theory and treat Abduction-Mapping-Induction as explicit pipeline steps for the multimodal knowledge representation methods. For transformer-based methods, we further propose MarT, a novel framework that implicitly combines these three steps to accomplish the multimodal analogical reasoning task end-to-end, avoiding error propagation during analogical reasoning. An overview of the baseline methods can be seen in the figure above.
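The explicit three-step pipeline can be illustrated with a TransE-style vector offset over toy embeddings. This is only a conceptual sketch: the entity names and vectors are made up, and the real baselines use learned multimodal (text + image) representations:

```python
import numpy as np

# Toy entity embeddings standing in for learned multimodal representations.
emb = {
    "bee": np.array([1.0, 0.0]),
    "honey": np.array([1.0, 1.0]),
    "cow": np.array([0.0, 0.0]),
    "milk": np.array([0.0, 1.0]),
}

def abduction(e_h, e_t):
    """Infer the latent relation from the example pair (TransE-style offset)."""
    return emb[e_t] - emb[e_h]

def mapping(e_q, rel):
    """Transfer the inferred relation to the question entity."""
    return emb[e_q] + rel

def induction(target, exclude):
    """Predict the answer: the non-input entity closest to the mapped vector."""
    cands = {k: v for k, v in emb.items() if k not in exclude}
    return min(cands, key=lambda k: np.linalg.norm(cands[k] - target))

rel = abduction("bee", "honey")
answer = induction(mapping("cow", rel), exclude={"bee", "honey", "cow"})
print(answer)  # -> milk
```

Because each step consumes the previous step's output, an error in abduction propagates to mapping and induction; MarT's end-to-end formulation is designed to avoid exactly this cascade.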

Multimodal Knowledge Representation Methods

1. IKRL

We reproduce the IKRL model within the TransAE framework. To evaluate IKRL, run the following commands:

cd M-KGE/IKRL_TransAE
python IKRL.py

You can choose pre-train/fine-tune and TransE/ANALOGY by modifying the finetune and analogy parameters in IKRL.py, respectively.

2. TransAE

To evaluate TransAE, run the following commands:

cd M-KGE/IKRL_TransAE
python TransAE.py

You can choose pre-train/fine-tune and TransE/ANALOGY by modifying the finetune and analogy parameters in TransAE.py, respectively.

3. RSME

We only provide part of the data for RSME. To evaluate RSME, first generate the full data with the following scripts:

cd M-KGE/RSME
python image_encoder.py  # -> analogy_vit_best_img_vec.pickle
python utils.py          # -> img_vec_id_analogy_vit.pickle

Next, pre-train the models on MarKG:

bash run.sh

Finally, set the --checkpoint parameter to the pre-trained weights and fine-tune the models on MARS:

bash run_finetune.sh

More training details for the above models can be found in their official repositories.

Transformer-based Methods

We leverage the MarT framework for transformer-based models. MarT contains two steps: pre-train and fine-tune.

To speed up training, we encode the image data in advance with the following script (note that the encoded data is about 7 GB):

cd MarT
python tools/encode_images_data.py
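The idea behind pre-encoding is to run the vision encoder once per image and cache the features, so training never touches raw pixels. A minimal sketch of this pattern follows; the encoder, file names, and feature size here are illustrative stand-ins, not what encode_images_data.py actually uses:

```python
import pickle
import numpy as np

def fake_encode(image_path):
    """Stand-in for a real vision encoder (e.g. a ViT) run on one image.

    Produces deterministic pseudo-features per path so the sketch is runnable
    without any model weights.
    """
    rng = np.random.default_rng(abs(hash(image_path)) % (2**32))
    return rng.standard_normal(768).astype(np.float32)

# Illustrative entity-image paths; MARS stores one folder of images per entity.
image_paths = ["images/Q1.jpg", "images/Q2.jpg"]

# Encode every image once and cache the features in a single pickle file,
# which training code can memory-load instead of re-running the encoder.
features = {p: fake_encode(p) for p in image_paths}
with open("encoded_images.pkl", "wb") as f:
    pickle.dump(features, f)
```

Caching trades disk for compute, which is why the resulting file is large (about 7 GB for the real dataset).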

Taking MKGformer as an example, first pre-train the model with the following script:

bash scripts/run_pretrain_mkgformer.sh

After pre-training, fine-tune the model with the following script:

bash scripts/run_finetune_mkgformer.sh

🍓 We provide the best checkpoints of the transformer-based models from the pre-training and fine-tuning phases at this Google Drive. Download them and add --only_test to scripts/run_finetune_xxx.sh to run test experiments.

Citation

If you use or extend our work, please cite the paper as follows:

@inproceedings{zhang2023multimodal,
  title={Multimodal Analogical Reasoning over Knowledge Graphs},
  author={Ningyu Zhang and Lei Li and Xiang Chen and Xiaozhuan Liang and Shumin Deng and Huajun Chen},
  booktitle={The Eleventh International Conference on Learning Representations},
  year={2023},
  url={https://openreview.net/forum?id=NRHajbzg8y0P}
}