Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
`low_cpu_mem_usage` was None, now set to True since model is quantized.
Traceback (most recent call last):
File "/home/kissoul/WORKDIR/LLaMA-Factory/scripts/pissa_init.py", line 83, in <module>
fire.Fire(quantize_pissa)
File "/home/kissoul/miniconda3/envs/lf/lib/python3.11/site-packages/fire/core.py", line 143, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/kissoul/miniconda3/envs/lf/lib/python3.11/site-packages/fire/core.py", line 477, in _Fire
component, remaining_args = _CallAndUpdateTrace(
^^^^^^^^^^^^^^^^^^^^
File "/home/kissoul/miniconda3/envs/lf/lib/python3.11/site-packages/fire/core.py", line 693, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/kissoul/WORKDIR/LLaMA-Factory/scripts/pissa_init.py", line 53, in quantize_pissa
target_modules=[name.strip() for name in lora_target.split(",")],
^^^^^^^^^^^^^^^^^
AttributeError: 'tuple' object has no attribute 'split'
Printing the value confirms it is indeed passed in as a tuple:
lora_target value before function call: ('k_proj', 'o_proj', 'q_proj', 'v_proj', 'down_proj', 'gate_proj', 'up_proj'), type: <class 'tuple'>
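The error happens because `fire` parses a comma-separated CLI argument like `q_proj,v_proj` into a tuple, while the script at `pissa_init.py` line 53 assumes a string and calls `.split(",")` on it. A minimal workaround sketch (the helper name `normalize_target_modules` is hypothetical, not part of the script) that accepts both forms:

```python
def normalize_target_modules(lora_target):
    """Accept either a comma-separated string or an already-split sequence.

    fire turns "a,b,c" on the command line into a tuple, so the original
    `lora_target.split(",")` raises AttributeError for tuple input.
    """
    if isinstance(lora_target, str):
        return [name.strip() for name in lora_target.split(",")]
    # Already a tuple/list of module names: just strip each entry.
    return [str(name).strip() for name in lora_target]


# Both call styles now yield the same list of target module names.
normalize_target_modules("q_proj, v_proj")
normalize_target_modules(("k_proj", "o_proj", "q_proj", "v_proj"))
```

Alternatively, quoting the argument on the command line (e.g. `--lora_target "q_proj,v_proj"`) may keep it a string and avoid the tuple parsing entirely.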
Reminder
System Info
llamafactory version: 0.8.3.dev0

Reproduction
See the traceback and debug output above.
Expected behavior
No response
Others
No response