Line501:Observation: {"error": "", "response": "{'messages': 'The request to the API has timed out. Please try again later, or if the issue persists, please contact the API provider', 'info': 'Your Client (working) ---> Gateway (working) ---> API (took too long to respond)'}"}
def get_backbone_model(self):
    args = self.args
    if args.backbone_model == "toolllama":
        # ratio = 4 means the sequence length is expanded 4x; remember to change model_max_length to 8192 (2048 * ratio) for ratio = 4
        ratio = int(args.max_sequence_length / args.max_source_sequence_length)
        replace_llama_with_condense(ratio=ratio)
        if args.lora:
            backbone_model = ToolLLaMALoRA(base_name_or_path=args.model_path, model_name_or_path=args.lora_path, max_sequence_length=args.max_sequence_length)
        else:
            backbone_model = ToolLLaMA(model_name_or_path=args.model_path, max_sequence_length=args.max_sequence_length)
    else:
        backbone_model = args.backbone_model
    return backbone_model
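For context on the `ratio` computed in the snippet, here is a minimal standalone sketch of the same arithmetic. The concrete values 8192 and 2048 are illustrative assumptions (matching the comment in the code), not values read from this repo's config; `replace_llama_with_condense` is understood to interpolate LLaMA's rotary position ids by this factor.

```python
# Sketch of the condense-ratio arithmetic from get_backbone_model.
# 8192 / 2048 are assumed example values, per the comment in the snippet.
max_sequence_length = 8192          # desired context length after condensing
max_source_sequence_length = 2048   # base model's original context window

# Same computation as in get_backbone_model: integer expansion factor.
ratio = int(max_sequence_length / max_source_sequence_length)
print(ratio)  # -> 4

# Condensed RoPE effectively divides position ids by `ratio`, so positions
# up to 8192 map back into the range the base model was trained on (0..2048).
example_position = 6000
print(example_position / ratio)  # -> 1500.0, within the trained range
```

If `max_sequence_length` is not an exact multiple of `max_source_sequence_length`, the `int(...)` truncation silently rounds the ratio down, which is one thing worth checking when outputs look garbled.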
Hello, and thank you for open-sourcing this! I am using your released weights, but inference produces very strange outputs. For example:
Training log: toolllama_lora_open_domain_clean_0801.log
Training log: toolllama-2-7b-v2_dfs_pipeline.log
From Line67 onward, the output is completely garbled.
I saw that #169 mentions:
The inference_toolllama_lora_pipeline_open_domain run uses the 0801 version of the data, and the inference_toolllama_pipeline run uses the data in data. The model-loading code is quoted at the top of this post.
I can't figure out where the problem is, and I would really appreciate any help resolving it. Many thanks!!!