brando90 opened 6 days ago
@danielhanchen ?
Sorry will have a look at this! Apologies on the delay!
@danielhanchen I tried the config file name change you suggested here:
https://github.com/unslothai/unsloth/issues/421
but it doesn't work. What do you suggest I do?
I thought the config file was the issue, so I followed up with most of the details here: https://github.com/unslothai/unsloth/issues/421 @danielhanchen, let me know if I can be of further help.
Wait @brando90 does model.save_pretrained_merged(model_save_name, tokenizer, save_method="merged_16bit")
not work for vLLM?
Not for me. It doesn't work.
If I recall correctly, the main issue is that the config.json file HF needs is not the same as the one Unsloth writes.
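One way to narrow this down is to check whether the saved checkpoint's config.json carries the fields loaders like transformers/vLLM use to pick a model class. This is a minimal sketch, not a complete validation; the two fields checked here are standard in HF configs, but the full requirements depend on the architecture:

```python
import json
from pathlib import Path

# Fields a standard Hugging Face config.json carries; loaders such as
# transformers and vLLM use them to resolve the model class.
REQUIRED_FIELDS = ("model_type", "architectures")

def check_config(model_dir: str) -> list:
    """Return the required fields missing from <model_dir>/config.json."""
    config = json.loads((Path(model_dir) / "config.json").read_text())
    return [field for field in REQUIRED_FIELDS if field not in config]
```

If the returned list is non-empty, one possible workaround is round-tripping the merged checkpoint through transformers' `from_pretrained`/`save_pretrained`, which regenerates a standard config.json.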
@brando90 Wait so you're saying all vLLM merges cannot be loaded up in vLLM?
Wait @brando90 does model.save_pretrained_merged(model_save_name, tokenizer, save_method="merged_16bit") not work for vLLM?

Obvious sanity check: have you tested it yourself, e.g. with Qwen/Qwen2-1.5B?
If you are open to just using the LoRA adapter without merging, do this:
https://github.com/unslothai/unsloth/issues/1039
Note: if you are allowed to push to a HF repo, that should work too.
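For the unmerged route, vLLM can serve LoRA adapters on top of a base model directly. A sketch of the server invocation, assuming a reasonably recent vLLM; the base model name, adapter name, and path below are placeholders:

```shell
# Serve the base model with LoRA support enabled; requests can then
# target the adapter by name ("my_adapter") via the OpenAI-compatible API.
vllm serve meta-llama/Meta-Llama-3-8B \
    --enable-lora \
    --lora-modules my_adapter=/path/to/lora_adapter
```

This sidesteps the merged-checkpoint config.json question entirely, since vLLM loads the base model from its original HF config.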
Current attempt:
(screenshot of the bug)
How do I fix this? Maybe by saving the model in an HF-compatible way?
Current save code: