InternLM / xtuner

An efficient, flexible and full-featured toolkit for fine-tuning LLM (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
https://xtuner.readthedocs.io/zh-cn/latest/
Apache License 2.0

torch compile error #956

Open tcxia opened 1 month ago

tcxia commented 1 month ago

10/24 18:04:14 - mmengine - WARNING - WARNING: command error: 'module 'torch.compiler' has no attribute 'is_compiling''!
10/24 18:04:14 - mmengine - WARNING - Arguments received: ['xtuner', 'train', '/mnt/pfs/jinfeng_team/LA/xiatianci/xtuner/xtuner/configs/llama/llama3_8b/llama3_8b_full_alpaca_e3.py']. xtuner commands use the following syntax:

    xtuner MODE MODE_ARGS ARGS

    Where   MODE (required) is one of ('list-cfg', 'copy-cfg', 'log-dataset', 'check-custom-dataset', 'train', 'test', 'chat', 'convert', 'preprocess', 'mmbench', 'eval_refcoco')
            MODE_ARG (optional) is the argument for specific mode
            ARGS (optional) are the arguments for specific command
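
This error usually points to a PyTorch version mismatch: `torch.compiler.is_compiling()` only exists in newer PyTorch releases, while older releases expose it as `torch._dynamo.is_compiling()`, so a dependency built against a newer torch fails on the installed one. Upgrading PyTorch (or pinning dependencies to match the installed torch) is the usual fix. Below is a minimal sketch of how such a call can be guarded, assuming the installed torch predates the `torch.compiler.is_compiling` API; the helper name `is_compiling_compat` is hypothetical, not part of xtuner.

```python
import torch


def is_compiling_compat() -> bool:
    # Prefer the public API when it exists (newer PyTorch releases).
    compiler = getattr(torch, "compiler", None)
    if compiler is not None and hasattr(compiler, "is_compiling"):
        return torch.compiler.is_compiling()
    # Fall back to the older private API on earlier releases.
    try:
        import torch._dynamo as dynamo
        return dynamo.is_compiling()
    except (ImportError, AttributeError):
        return False


print(torch.__version__)       # check which PyTorch is actually installed
print(is_compiling_compat())   # False during normal (eager) execution
```

Printing `torch.__version__` next to the versions of transformers/mmengine in the environment should confirm whether the mismatch is the cause.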