Hi, note that we release delta patches of our trained checkpoints rather than ready-to-use checkpoints, to comply with LLaMA's license. Please follow the instructions here to merge the deltas with the base weights before running inference.
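The idea behind delta patches is that only the element-wise difference (finetuned minus base) is published, so recovering the usable checkpoint is a key-by-key addition over the state dict. A minimal sketch of that arithmetic (the repo's actual merge script and real checkpoints use torch tensors; plain Python lists and a hypothetical `merge_delta` helper stand in for them here):

```python
def merge_delta(base, delta):
    """Recover finetuned weights: finetuned = base + delta, parameter by parameter."""
    if base.keys() != delta.keys():
        raise ValueError("base and delta checkpoints must share parameter names")
    return {
        name: [b + d for b, d in zip(base[name], delta[name])]
        for name in base
    }

# Toy stand-ins for two flattened weight tensors.
base = {"layers.0.wq": [0.5, -1.0], "layers.0.wk": [2.0, 0.0]}
delta = {"layers.0.wq": [0.25, 0.5], "layers.0.wk": [-0.5, 0.25]}

merged = merge_delta(base, delta)
```

Loading an unmerged delta directly as if it were a full checkpoint produces exactly the symptom below: the model runs but emits garbage, because the "weights" are small difference values, not trained parameters.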
After downloading the alpaca model from Hugging Face, I ran the command given in the official docs: `torchrun --nproc-per-node=1 demos/single_turn.py --llama_config /root/autodl-tmp/model/LLaMA2-Accessory/config/7B_params.json --tokenizer_path /root/autodl-tmp/model/LLaMA2-Accessory/config/tokenizer.model --pretrained_path /root/autodl-tmp/model/LLaMA2-Accessory/finetune/sg/alpaca`. Gradio starts normally, but the model outputs meaningless content.