HKUDS / GraphGPT

[SIGIR'2024] "GraphGPT: Graph Instruction Tuning for Large Language Models"
https://arxiv.org/abs/2310.13023
Apache License 2.0

Error when running graphgpt_eval.sh: config.json missing #84

Open CigarOVO opened 2 days ago

CigarOVO commented 2 days ago

Error message:

Traceback (most recent call last):
  File "./graphgpt/eval/run_graphgpt.py", line 244, in <module>
    run_eval(args, args.num_gpus)
  File "./graphgpt/eval/run_graphgpt.py", line 98, in run_eval
    ans_jsons.extend(ray.get(ans_handle))
  File "/root/miniconda3/envs/graphgpt/lib/python3.8/site-packages/ray/_private/auto_init_hook.py", line 24, in auto_init_wrapper
    return fn(*args, **kwargs)
  File "/root/miniconda3/envs/graphgpt/lib/python3.8/site-packages/ray/_private/client_mode_hook.py", line 103, in wrapper
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/graphgpt/lib/python3.8/site-packages/ray/_private/worker.py", line 2493, in get
    raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(AssertionError): ray::eval_model() (pid=2040388, ip=172.16.10.18)
  File "/root/miniconda3/envs/graphgpt/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "./graphgpt/eval/run_graphgpt.py", line 120, in eval_model
    model = GraphLlamaForCausalLM.from_pretrained(args.model_name, torch_dtype=torch.float16, use_cache=True, low_cpu_mem_usage=True).cuda()
  File "/root/miniconda3/envs/graphgpt/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2700, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/root/GraphGPT/graphgpt/model/GraphLlama.py", line 283, in __init__
    self.model = GraphLlamaModel(config)
  File "/root/GraphGPT/graphgpt/model/GraphLlama.py", line 99, in __init__
    clip_graph, args= load_model_pretrained(CLIP, config.pretrain_graph_model_path)
  File "/root/GraphGPT/graphgpt/model/GraphLlama.py", line 54, in load_model_pretrained
    assert osp.exists(osp.join(pretrain_model_path, 'config.json')), 'config.json missing'
AssertionError: config.json missing
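
The assertion fires in `load_model_pretrained` (GraphLlama.py, line 54), which requires a `config.json` directly inside `config.pretrain_graph_model_path`. A minimal sketch of that check, run against a throwaway directory instead of the real GNN path (the helper name `check_pretrained_dir` is ours, not the repo's):

```python
import os.path as osp
import tempfile

def check_pretrained_dir(pretrain_model_path):
    """Mimic the assertion in load_model_pretrained: it only passes
    when a config.json sits directly inside pretrain_model_path."""
    return osp.exists(osp.join(pretrain_model_path, "config.json"))

# Demonstrate with a throwaway directory instead of the real GNN path.
with tempfile.TemporaryDirectory() as d:
    print(check_pretrained_dir(d))  # False: no config.json yet
    open(osp.join(d, "config.json"), "w").close()
    print(check_pretrained_dir(d))  # True once config.json exists
```

In other words, the error is about the graph-tower directory named in `pretrain_graph_model_path`, not about the checkpoint's own config.json.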

The parameters in graphgpt_eval.sh are set to:

output_model=/root/GraphGPT/output/stage_2/checkpoint-50000
datapath=/root/GraphGPT/graph_data/arxiv_test_instruct_cot.json
graph_data_path=/root/GraphGPT/graph_data/graph_data_all.pt
res_path=/root/GraphGPT/output/output_stage_2_arxiv_nc
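
Before launching the eval script, it can help to verify that each of these paths actually exists. A POSIX-shell sketch of that check; the temp directory it builds is only a stand-in, so point the variables at your real checkpoint and data locations instead:

```shell
#!/bin/sh
# Sanity-check the graphgpt_eval.sh paths before running the script.
# A temp directory stands in for the real files here; substitute your
# actual output_model / datapath / graph_data_path values.
tmp=$(mktemp -d)
mkdir -p "$tmp/output/stage_2/checkpoint-50000" "$tmp/graph_data"
touch "$tmp/output/stage_2/checkpoint-50000/config.json" \
      "$tmp/graph_data/arxiv_test_instruct_cot.json" \
      "$tmp/graph_data/graph_data_all.pt"

output_model="$tmp/output/stage_2/checkpoint-50000"
datapath="$tmp/graph_data/arxiv_test_instruct_cot.json"
graph_data_path="$tmp/graph_data/graph_data_all.pt"

for f in "$output_model/config.json" "$datapath" "$graph_data_path"; do
  if [ -e "$f" ]; then echo "ok: $f"; else echo "MISSING: $f"; fi
done
rm -rf "$tmp"
```

Note that even when all four of these paths exist, the assertion above can still fail, because it checks `pretrain_graph_model_path` from inside the checkpoint's config.json.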

Do I need to edit the contents of /root/GraphGPT/output/stage_2/checkpoint-50000/config.json? I changed it to:

{
  "_name_or_path": "/root/vicuna-7b-v1.5-16k",
  "architectures": [
    "GraphLlamaForCausalLM"
  ],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "freeze_graph_mlp_adapter": false,
  "graph_hidden_size": 128,
  "graph_select_layer": -2,
  "graph_tower": "clip_gt_arxiv",
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 4096,
  "max_sequence_length": 16384,
  "model_type": "GraphLlama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 32,
  "pad_token_id": 0,
  "pretrain_graph_model_path": "/root/GraphGPT/pretrained_gnn/",
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": {
    "factor": 4.0,
    "type": "linear"
  },
  "sep_graph_conv_front": false,
  "tie_word_embeddings": false,
  "torch_dtype": "float32",
  "transformers_version": "4.31.0",
  "tune_graph_mlp_adapter": true,
  "use_cache": false,
  "use_graph_proj": true,
  "use_graph_start_end": true,
  "vocab_size": 32003
}

I changed "pretrain_graph_model_path" in GraphGPT/output/stage_2/checkpoint-50000/config.json, but it still reports config.json missing. What else do I need to modify?

CigarOVO commented 1 day ago

Solved it following #14: I moved the /root/GraphGPT/pretrained_gnn/clip_gt_arxiv folder directly under the GraphGPT directory. My GraphGPT/output/stage_2/config.json now reads:

{
  "_name_or_path": "/root/vicuna-7b-v1.5-16k",
  "architectures": [
    "GraphLlamaForCausalLM"
  ],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "freeze_graph_mlp_adapter": false,
  "graph_hidden_size": 128,
  "graph_select_layer": -2,
  "graph_tower": "clip_gt_arxiv",
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 4096,
  "max_sequence_length": 16384,
  "model_type": "GraphLlama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 32,
  "pad_token_id": 0,
  "pretrain_graph_model_path": "/root/GraphGPT/clip_gt_arxiv",
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": {
    "factor": 4.0,
    "type": "linear"
  },
  "sep_graph_conv_front": false,
  "tie_word_embeddings": false,
  "torch_dtype": "float32",
  "transformers_version": "4.31.0",
  "tune_graph_mlp_adapter": true,
  "use_cache": false,
  "use_graph_proj": true,
  "use_graph_start_end": true,
  "vocab_size": 32003
}

The file now lives at /root/GraphGPT/clip_gt_arxiv/clip_gt_arxiv_pub.pkl. Stage-1 training only works with the files nested (under pretrained_gnn/), so for eval they have to be moved back out.
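
To double-check the final layout, a short sketch that verifies the expected files sit in the graph-tower directory; the file list is inferred from this thread (config.json for the assertion, the .pkl for the pretrained CLIP-GNN weights), not from the repo's docs:

```python
import os
import os.path as osp
import tempfile

# Files the loader appears to need in pretrain_graph_model_path,
# as inferred from this thread.
REQUIRED = ["config.json", "clip_gt_arxiv_pub.pkl"]

def missing_files(graph_tower_dir):
    """Return the expected files that are absent from the directory."""
    return [f for f in REQUIRED if not osp.exists(osp.join(graph_tower_dir, f))]

# Simulate the working layout /root/GraphGPT/clip_gt_arxiv/ in a temp dir.
with tempfile.TemporaryDirectory() as root:
    tower = osp.join(root, "clip_gt_arxiv")
    os.makedirs(tower)
    print(missing_files(tower))  # both files still missing
    for name in REQUIRED:
        open(osp.join(tower, name), "w").close()
    print(missing_files(tower))  # [] -> layout complete
```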

xvrrr commented 1 day ago

See #14 ("config.json file not found") for reference.