HKUDS / GraphGPT

[SIGIR'2024] "GraphGPT: Graph Instruction Tuning for Large Language Models"
https://arxiv.org/abs/2310.13023
Apache License 2.0

Some questions about input embedding of graph tokens #53

Closed brothermaster closed 3 months ago

brothermaster commented 4 months ago

All of the input embeddings were trained in the "initialize_graph_tokenizer" method during the Self-Supervised Instruction Tuning stage, but only the last "num_new_token" rows (the "-num_new_token" slice, i.e. the newly added graph-token embeddings) are loaded at the Task-Specific Instruction Tuning stage, while the embeddings of the other tokens are not used. Question 1: Why don't you use all of the embeddings that were trained?
Question 2: Actually, the embeddings of the "num_new_token" graph tokens are not fed into the LLM; the text embeddings and the aligned graph representations are concatenated and fed into the LLM instead. So why train the graph tokens at all?
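For context on what Question 1 refers to, here is a minimal sketch of an initialize_graph_tokenizer-style setup, assuming a HuggingFace-style tokenizer and model; the token strings, helper names, and file path below are illustrative assumptions, not GraphGPT's actual code. The point is that only the last num_new_tokens rows of the embedding matrix belong to the newly added graph tokens, which is what a "-num_new_token" slice selects.

```python
import torch

def initialize_graph_tokenizer_sketch(tokenizer, model,
                                      graph_tokens=("<graph>", "<g_start>", "<g_end>")):
    """Add graph placeholder tokens and resize the LLM embedding matrix.
    Sketch only: GraphGPT's initialize_graph_tokenizer may differ in details."""
    num_new_tokens = tokenizer.add_tokens(list(graph_tokens), special_tokens=True)
    model.resize_token_embeddings(len(tokenizer))
    return num_new_tokens

def save_new_token_embeddings(model, num_new_tokens, path="graph_token_embeds.pt"):
    """Keep only the last num_new_tokens rows -- the newly added graph tokens.
    This is the kind of '[-num_new_token:]' slice the question refers to."""
    embeds = model.get_input_embeddings().weight.detach()
    torch.save(embeds[-num_new_tokens:].clone(), path)

def load_new_token_embeddings(model, num_new_tokens, path="graph_token_embeds.pt"):
    """At the next tuning stage, restore only those rows; every other row of the
    embedding matrix comes straight from the base LLM checkpoint."""
    new_rows = torch.load(path)
    with torch.no_grad():
        model.get_input_embeddings().weight[-num_new_tokens:] = new_rows
```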

zhuiyue233 commented 3 months ago

Hello, may I ask: do you know how the graph tokens are obtained?

brothermaster commented 3 months ago

At this location: https://github.com/HKUDS/GraphGPT/blob/3001b031c084845ac9a255a98ae6e680beaed41a/graphgpt/model/GraphLlama.py#L390
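As a rough, hedged sketch of what happens with those graph tokens downstream (GRAPH_TOKEN_ID, graph_projector, and the shapes here are illustrative placeholders, not the identifiers actually used in GraphLlama.py): the graph encoder's node representations are projected into the LLM hidden size and overwrite the embeddings at the <graph> placeholder positions, so the LLM attends to the aligned graph representations rather than the raw placeholder-token embeddings.

```python
import torch
import torch.nn as nn

# Illustrative names and dimensions; not GraphLlama.py's actual identifiers.
LLM_HIDDEN, GRAPH_HIDDEN = 4096, 128
GRAPH_TOKEN_ID = 32003  # id of the <graph> placeholder after extending the tokenizer

graph_projector = nn.Linear(GRAPH_HIDDEN, LLM_HIDDEN)

def splice_graph_tokens(input_ids, inputs_embeds, node_reprs):
    """Overwrite the embeddings at <graph> placeholder positions with projected
    graph-node representations; text positions keep their ordinary embeddings.
    Assumes the prompt contains one placeholder per graph token to insert."""
    graph_embeds = graph_projector(node_reprs)        # (num_nodes, LLM_HIDDEN)
    mask = input_ids == GRAPH_TOKEN_ID                # (seq_len,)
    inputs_embeds = inputs_embeds.clone()
    inputs_embeds[mask] = graph_embeds[: int(mask.sum())]
    return inputs_embeds
```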

zhuiyue233 commented 3 months ago

Great, thanks!