YaooXu / GoG

Generate-on-Graph: Treat LLM as both Agent and KG for Incomplete Knowledge Graph Question Answering. EMNLP 2024 Main

Question about examples in prompts #7

Open rainbow18 opened 3 weeks ago

rainbow18 commented 3 weeks ago

Hi, I have some questions:

  1. How can I check whether the name_to_id service (corresponding to src/bm25_name2ids.py) started correctly? Is this step important for the experiments (and if it fails, what happens)?
  2. I ran some examples on the WebQSP dataset; when the action is a Search, triples are returned as follows:

     ```json
     "records": [
       {
         "i": 1,
         "thought": "I need to find who influenced Samuel Taylor Coleridge. ",
         "action": "Search[Samuel Taylor Coleridge]",
         "triples": [
           "m.015h69, influence_node.influenced_by, m.078w2",
           "m.02dtcr, influence_node.influenced_by, m.078w2",
           "m.04_by, influence_node.influenced_by, m.078w2",
           "m.072dv, influence_node.influenced_by, m.078w2",
           "m.078w2, author.school_or_movement, m.06hsk",
           "m.078w2, author.school_or_movement, m.09g39h",
           "m.078w2, influence_node.influenced_by, m.015n8",
           "m.078w2, influence_node.peers, m.02nc24x",
           "m.078w2, influence_node.peers, m.02nc2qd",
           "m.078w2, influence_node.peers, m.03q0vk9"
         ],
     ```

     Is this correct (the head and tail entities start with m.)?
  3. How did you create your examples, i.e., where do the questions and the output of each step come from? Can you offer any advice? For example, how did you obtain the generated triples in the directory '/prompts_v2/examples'?

Thanks a lot!

YaooXu commented 3 weeks ago

Sorry for the late reply; I've been busy lately.

  1. You can use `lsof -i:18891` to see whether a service is using this port. The reason we need this service is that GoG may generate new entities that do not appear in the subgraphs, so we do not know their mids. We therefore use this service to look up an entity's mid in the KG.
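If you prefer to check from Python instead of `lsof`, a minimal sketch is below. The helper name is illustrative (not part of the repo); it only tests whether something is listening on the port, not whether the name_to_id service itself is behaving correctly.

```python
import socket

def is_port_open(port: int, host: str = "127.0.0.1", timeout: float = 1.0) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    try:
        # create_connection raises OSError if nothing is listening.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the port from the reply above.
# is_port_open(18891) -> True only if the bm25_name2ids service is running.
```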

  2. Yes, they are correct. You can also refer to prob_results/cwq/gpt-3.5-turbo-0613/10_3_0.7_data_with_ct_0.2_predictions.json.
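For reference, each triple string in those records is a comma-separated "head, relation, tail", so they can be split into tuples for inspection. A minimal sketch (the helper name is illustrative, not from the repo):

```python
def parse_triples(triples):
    """Split 'head, relation, tail' strings into (head, relation, tail) tuples."""
    return [tuple(part.strip() for part in t.split(",")) for t in triples]

# Triples taken from the question above; head/tail entities are Freebase mids ("m." prefix).
records_triples = [
    "m.015h69, influence_node.influenced_by, m.078w2",
    "m.078w2, author.school_or_movement, m.06hsk",
]
head, relation, tail = parse_triples(records_triples)[0]
# head == "m.015h69", relation == "influence_node.influenced_by", tail == "m.078w2"
```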

  3. The examples in /prompts_v2/examples are manually designed based on the model's output and content from the knowledge graph (KG). The goal is to enable the model to mimic this question-answering style. You can also refer to the file mentioned above.