HKUDS / GraphGPT

[SIGIR'2024] "GraphGPT: Graph Instruction Tuning for Large Language Models"
https://arxiv.org/abs/2310.13023
Apache License 2.0

Issue in train_graph.py: fails to load the model #64

Closed GoldyMoon closed 3 months ago

GoldyMoon commented 3 months ago

The following code seems to fail on my end when running on an HPC cluster.

```python
if model_args.graph_tower is not None:
    model = GraphLlamaForCausalLM.from_pretrained(
        model_args.model_name_or_path,
        cache_dir=training_args.cache_dir,
        **bnb_model_from_pretrained_args
    )  ## TODO: add real Graph Llama model
else:
    model = transformers.LlamaForCausalLM.from_pretrained(
        model_args.model_name_or_path,
        cache_dir=training_args.cache_dir,
        **bnb_model_from_pretrained_args
    )
```

I get the error:

```
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/xx/.local/lib/python3.9/site-packages/transformers/modeling_utils.py", line 532, in load_state_dict
    return torch.load(
  File "/home/xx/.local/lib/python3.9/site-packages/torch/serialization.py", line 1027, in load
    raise pickle.UnpicklingError(UNSAFE_MESSAGE + str(e)) from None
_pickle.UnpicklingError: Weights only load failed. Re-running torch.load with weights_only set to False will likely succeed, but it can result in arbitrary code execution. Do it only if you get the file from a trusted source. WeightsUnpickler error: Unsupported operand 118
```

Is it caused by placing the checkpoint folders in the wrong location? (e.g. Vicuna v1.5, data, graph_data, clip_gt_arxiv)
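For reference, one quick way to tell whether the downloaded shards are real weight files or git-lfs pointer stubs (a stub produces exactly this kind of unpickling failure) is a check like the sketch below. This is only a sketch; the model directory path is a placeholder for wherever the checkpoint was cloned.

```python
# Sketch only: detect git-lfs pointer stubs among downloaded checkpoint shards.
# "vicuna-7b-v1.5" is a placeholder path, not part of the GraphGPT repo.
from pathlib import Path

def looks_like_lfs_pointer(path: Path) -> bool:
    # git-lfs pointer files are tiny text files that start with this header,
    # while real .bin shards are multi-gigabyte binary pickles.
    with open(path, "rb") as f:
        return f.read(64).startswith(b"version https://git-lfs.github.com/spec/v1")

model_dir = Path("vicuna-7b-v1.5")
for shard in sorted(model_dir.glob("pytorch_model-*.bin")):
    size_gb = shard.stat().st_size / 1e9
    print(f"{shard.name}: {size_gb:.2f} GB, lfs pointer: {looks_like_lfs_pointer(shard)}")
```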

GoldyMoon commented 3 months ago

It seems to have something to do with git-lfs not being installed on the HPC. Is git-lfs required for this project? Thank you in advance.

GoldyMoon commented 3 months ago

Yes, I needed to install git-lfs before downloading the Vicuna model.
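As a side note (an assumption, not something from the GraphGPT docs): if installing git-lfs on the cluster is not an option, the base model can also be fetched with `huggingface_hub`, which downloads the resolved shard files directly rather than LFS pointers. The repo id below assumes Vicuna-7B-v1.5 is the intended base checkpoint; adjust as needed.

```python
# Sketch only: download the Vicuna base model without git or git-lfs.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="lmsys/vicuna-7b-v1.5",  # assumed base checkpoint
    local_dir="vicuna-7b-v1.5",      # hypothetical target directory
)
print("weights downloaded to:", local_dir)
```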