HKUDS / GraphGPT

[SIGIR'2024] "GraphGPT: Graph Instruction Tuning for Large Language Models"
https://arxiv.org/abs/2310.13023
Apache License 2.0

AttributeError: 'GraphLlamaConfig' object has no attribute 'pretrain_graph_model_path' #63

Closed AstroCIEL closed 3 months ago

AstroCIEL commented 3 months ago

I hit the error in the title; the full output is below. I wonder whether the newest version of the "transformers" package (4.39.3) is appropriate?

```
W&B offline. Running your script from this directory will only write metadata locally. Use wandb disabled to completely turn off W&B.
/media/8T3/rh_xu/GraphGPT/graphgpt/train /media/8T3/rh_xu/GraphGPT
You are using a model of type llama to instantiate a model of type GraphLlama. This is not supported for all configurations of models and can yield errors.
Loading checkpoint shards: 100%|██████████████████████████████████████████████████| 2/2 [00:42<00:00, 21.04s/it]
/media/8T3/rh_xu/.conda/envs/graphgpt/lib/python3.8/site-packages/transformers/generation/configuration_utils.py:492: UserWarning: do_sample is set to False. However, temperature is set to 0.9 -- this flag is only used in sample-based generation modes. You should set do_sample=True or unset temperature. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
  warnings.warn(
/media/8T3/rh_xu/.conda/envs/graphgpt/lib/python3.8/site-packages/transformers/generation/configuration_utils.py:497: UserWarning: do_sample is set to False. However, top_p is set to 0.6 -- this flag is only used in sample-based generation modes. You should set do_sample=True or unset top_p. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
  warnings.warn(
/media/8T3/rh_xu/.conda/envs/graphgpt/lib/python3.8/site-packages/transformers/generation/configuration_utils.py:492: UserWarning: do_sample is set to False. However, temperature is set to 0.9 -- this flag is only used in sample-based generation modes. You should set do_sample=True or unset temperature.
  warnings.warn(
/media/8T3/rh_xu/.conda/envs/graphgpt/lib/python3.8/site-packages/transformers/generation/configuration_utils.py:497: UserWarning: do_sample is set to False. However, top_p is set to 0.6 -- this flag is only used in sample-based generation modes. You should set do_sample=True or unset top_p.
  warnings.warn(
Traceback (most recent call last):
  File "graphgpt/train/train_mem.py", line 18, in <module>
    train()
  File "/media/8T3/rh_xu/GraphGPT/graphgpt/train/train_graph.py", line 801, in train
    model.config.pretrain_graph_model_path = model.config.pretrain_graph_model_path + model_args.graph_tower
  File "/media/8T3/rh_xu/.conda/envs/graphgpt/lib/python3.8/site-packages/transformers/configuration_utils.py", line 263, in __getattribute__
    return super().__getattribute__(key)
AttributeError: 'GraphLlamaConfig' object has no attribute 'pretrain_graph_model_path'
[2024-04-10 10:52:23,673] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 325970) of binary: /media/8T3/rh_xu/.conda/envs/graphgpt/bin/python
Traceback (most recent call last):
  File "/media/8T3/rh_xu/.conda/envs/graphgpt/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/media/8T3/rh_xu/.conda/envs/graphgpt/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/media/8T3/rh_xu/.conda/envs/graphgpt/lib/python3.8/site-packages/torch/distributed/run.py", line 816, in <module>
    main()
  File "/media/8T3/rh_xu/.conda/envs/graphgpt/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/media/8T3/rh_xu/.conda/envs/graphgpt/lib/python3.8/site-packages/torch/distributed/run.py", line 812, in main
    run(args)
  File "/media/8T3/rh_xu/.conda/envs/graphgpt/lib/python3.8/site-packages/torch/distributed/run.py", line 803, in run
    elastic_launch(
  File "/media/8T3/rh_xu/.conda/envs/graphgpt/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/media/8T3/rh_xu/.conda/envs/graphgpt/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
graphgpt/train/train_mem.py FAILED
------------------------------------------------------------
Failures:

------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-04-10_10:52:23
  host      : ubunut-System-Product-Name
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 325970)
  error_file:
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
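For context on the failure mode: a transformers config object only exposes attributes for the keys that were present in the loaded config.json, so the concatenation in `train_graph.py` line 801 raises as soon as the key is absent. A stdlib-only sketch of the behavior, using a hypothetical `MinimalConfig` stand-in rather than the real `GraphLlamaConfig`:

```python
class MinimalConfig:
    """Hypothetical stand-in for GraphLlamaConfig: its attributes come
    solely from the keys loaded out of config.json."""
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

# A config.json without the graph keys -> no pretrain_graph_model_path attribute.
cfg = MinimalConfig(model_type="llama")

try:
    # Same shape as the failing line in train_graph.py.
    path = cfg.pretrain_graph_model_path + "clip_gt_arxiv"
except AttributeError as e:
    print(e)  # mirrors the AttributeError in the traceback above
```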
Zengdewei1 commented 3 months ago

This problem was resolved before; see https://github.com/HKUDS/GraphGPT/issues/7. You need to modify the config.json of vicuna.
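If you prefer to patch the file programmatically rather than by hand, here is a minimal sketch. The two key names follow the fix from issue #7; the default `graph_hidden_size` of 128 and the path value you pass in are illustrative and must match your local setup (the helper name `patch_vicuna_config` is mine, not part of the repo):

```python
import json

def patch_vicuna_config(config_path, graph_model_path, graph_hidden_size=128):
    """Add the graph-related keys GraphGPT expects to a vicuna config.json."""
    with open(config_path) as f:
        cfg = json.load(f)
    # GraphLlamaConfig is built from this file, so these keys must exist
    # before training starts or the AttributeError above fires.
    cfg["graph_hidden_size"] = graph_hidden_size
    cfg["pretrain_graph_model_path"] = graph_model_path
    with open(config_path, "w") as f:
        json.dump(cfg, f, indent=2)
    return cfg
```

Run it once against the vicuna checkpoint directory, e.g. `patch_vicuna_config("../vicuna-7b-v1.5-16k/config.json", "/path/to/GraphGPT/")`.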

AstroCIEL commented 3 months ago

Thank you, it worked. Now my `graphgpt_stage1.sh` looks like:

```shell
model_path=../vicuna-7b-v1.5-16k
instruct_ds=./data/stage1/train_instruct_graphmatch.json
graph_data_path=./graph_data/graph_data_all.pt
pretra_gnn=clip_gt_arxiv
output_model=./checkpoints/stage_1
```

and my config.json of vicuna looks like:

```json
{
  "_name_or_path": "vicuna-7b-v1.5-16k",
  "architectures": ["LlamaForCausalLM"],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_sequence_length": 16384,
  "max_position_embeddings": 4096,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 32,
  "pad_token_id": 0,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": {
    "factor": 4.0,
    "type": "linear"
  },
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.39.3",
  "use_cache": true,
  "vocab_size": 32000,
  "graph_hidden_size": 128,
  "pretrain_graph_model_path": "/media/8T3/rh_xu/GraphGPT/"
}
```

Note that my file structure is like /media/8T3/rh_xu/GraphGPT/clip_gt_arxiv/clip_gt_arxiv_pub.pkl.
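One detail worth noting: the failing line in `train_graph.py` joins the two values with plain string concatenation (`pretrain_graph_model_path + graph_tower`), so the trailing slash in the config value matters. A quick sanity check using the values from this thread (adjust the paths to your own machine):

```python
# Values as set in the config.json and graphgpt_stage1.sh above.
pretrain_graph_model_path = "/media/8T3/rh_xu/GraphGPT/"
graph_tower = "clip_gt_arxiv"

# train_graph.py uses plain +, so the trailing "/" is required;
# without it the result would be ".../GraphGPTclip_gt_arxiv".
graph_model_dir = pretrain_graph_model_path + graph_tower
print(graph_model_dir)  # → /media/8T3/rh_xu/GraphGPT/clip_gt_arxiv
```

The pretrained GNN checkpoint (e.g. clip_gt_arxiv_pub.pkl in the file structure above) should then resolve under that directory.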