Closed — zhao-lun closed this issue 2 days ago
I suspect it will work if you use an LLM Foundry commit from before we removed prefix LM support (https://github.com/mosaicml/llm-foundry/pull/1065). Could you give that a try?

More generally, I recommend adding the `hf_checkpointer` callback to your `callbacks` config so that HF checkpoints are produced during training, instead of trying to convert after the fact.
@dakinggg Thanks a lot! Adding the following section allows adapter weight generation:
```yaml
callbacks:
  hf_checkpointer:
    save_folder: ./{run_name}/checkpoints
    save_interval: "1ep"
```
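For context, a sketch of where that callback sits in a full llm-foundry train YAML. The `model` section below is an assumption for illustration, not copied from this run: `hf_causal_lm` and `pretrained_model_name_or_path` are standard llm-foundry keys, and the `peft_config` fields mirror PEFT's `LoraConfig`; adjust names and hyperparameters to your setup.

```yaml
# Hedged sketch — model name and LoRA hyperparameters are placeholders.
model:
  name: hf_causal_lm
  pretrained_model_name_or_path: meta-llama/Llama-2-7b-hf  # assumption
  peft_config:
    peft_type: LORA
    task_type: CAUSAL_LM
    r: 16
    lora_alpha: 32
    lora_dropout: 0.05
    target_modules: [q_proj, v_proj]

callbacks:
  hf_checkpointer:
    save_folder: ./{run_name}/checkpoints
    save_interval: "1ep"
```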
```shell
$ ls
README.md  adapter_config.json  adapter_model.safetensors  special_tokens_map.json  tokenizer.json  tokenizer_config.json
```
**env**

**example config with lora**

**fine tuning**

**after fine-tuning**

**Inference**

**Attempt to convert the weights to an HF/PEFT-compatible format**

Appreciate any advice/pointers on how to convert it.

**Expectation**

Able to run inference with the adapters/merged weights.