antmikinka opened this issue 2 months ago
Are you trying to lower the model to CoreML by passing `--coreml`? We're still actively working on enabling llama2 7b with CoreML. The XNNPACK backend is ready for the llama2 7b model.
@cccclai ah ok sweet thank you for letting me know!! I would have still been trying haha
Is the XNNPACK output a .mlpackage? I still have to build the XNNPACK stuff; I only did MPS and CoreML.
Do you have more info on what models you have ready for CoreML?
Are there any .mlmodel/.mlpackage model configs (or any end products of conversion) in executorch?
XNNPACK (https://github.com/google/XNNPACK) is a software library with a set of highly optimized CPU operators. It works on iOS too.
Regarding CoreML questions, I'd defer to @cymbalrush and @YifanShenSZ to answer.
Will also cc: @shoumikhin for iOS/MacOS related inquiries.
Hey @antmikinka, would this simpler export work for you?
```
python -m examples.models.llama2.export_llama --checkpoint /Users/anthonymikinka/executorch/llama-2-7b-chat/consolidated.00.pth --params /Users/anthonymikinka/executorch/llama-2-7b-chat/params.json -kv --coreml
```
Concretely, this is a good starting point that we have tested and made sure works. For all the other arguments, could you please add them one by one until the issue pops up? (That way we'll have more clarity on what went wrong.)
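One way to follow that suggestion is a small wrapper that rebuilds the command with one extra flag set at a time. This is just a sketch, not part of the thread: the shortened checkpoint/params paths are placeholders, and by default it only prints each command (flip `dry_run=False` to actually run them and stop at the first failure).

```python
# Sketch: try the export with flag groups added one at a time, per the
# suggestion above. Paths are placeholders; flags are the ones from this
# thread. dry_run=True only prints the commands.
import subprocess

BASE = [
    "python", "-m", "examples.models.llama2.export_llama",
    "--checkpoint", "llama-2-7b-chat/consolidated.00.pth",
    "--params", "llama-2-7b-chat/params.json",
    "-kv", "--coreml",
]
EXTRAS = [
    ["--group_size", "128"],
    ["-qmode", "8da4w"],
    ["-d", "fp32"],
    ["--max_seq_length", "512"],
]

def commands(base=BASE, extras=EXTRAS):
    """Yield the command line for each incremental flag set."""
    cmd = list(base)
    for extra in extras:
        cmd = cmd + extra
        yield cmd

def bisect_flags(dry_run=True):
    # Run each incrementally larger command; the first nonzero exit
    # code points at the flag group that introduced the failure.
    for cmd in commands():
        print(" ".join(cmd))
        if not dry_run and subprocess.run(cmd).returncode != 0:
            print("first failing flag set:", " ".join(cmd))
            break

if __name__ == "__main__":
    bisect_flags()
```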
@YifanShenSZ I kept running into disk space issues on my MacBook Pro, even with up to 30 GB free.
I added `--group_size 128 -qmode 8da4w -d fp32 --verbose --max_seq_length 512 -o "/Volumes/NVME 3/ExecuTorch Models"` and got the error above once again.
I started working through the arguments:

- With `-kv --coreml --group_size 128 -d fp32 --verbose --max_seq_length 512 -o "/Volumes/NVME 3/ExecuTorch Models"` I ran out of space once again.
- With `-kv --coreml -qmode 8da4w -d fp32 --verbose -o "/Volumes/NVME 3/ExecuTorch Models"` I ran into the quantization issue. I'm thinking the `-qmode 8da4w` argument may be the problem.
To help narrow this down, I took the last couple hundred lines of my terminal output and created a gist: PyTorch-executorch-issue 3443 terminal.txt
I also ran with just `-kv --coreml -qmode 8da4w` and made a log file for this one; here is the gist: executorch.log. I got the quantization error there as well, so it looks like I was right about `-qmode 8da4w`.
Just tried `-kv --coreml --verbose --group_size 128 --max_seq_length 128` and ran out of storage; it used 29 GB trying to convert. I may try again later today after freeing up some more storage. Let me know if that log file has helped.
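Since several of these runs died on disk space, a quick pre-check of the output volume can save a failed conversion. This is a sketch, not from the thread; the 35 GB threshold is a guess based on the ~29-30 GB of intermediates reported above, not a documented requirement.

```python
# Sketch: check free space on the output volume before exporting.
# The 35 GB default is an assumption based on this thread.
import shutil

def enough_space(path, need_gb=35):
    """Return True if `path`'s volume has at least `need_gb` GB free."""
    free_gb = shutil.disk_usage(path).free / 1e9
    print(f"{free_gb:.1f} GB free at {path}")
    return free_gb >= need_gb
```

For example, call `enough_space("/Volumes/NVME 3/ExecuTorch Models")` before kicking off the export and skip the run if it returns False.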
The script I ran that causes this error:
```
python -m examples.models.llama2.export_llama --checkpoint /Users/anthonymikinka/executorch/llama-2-7b-chat/consolidated.00.pth --params /Users/anthonymikinka/executorch/llama-2-7b-chat/params.json -kv --use_sdpa_with_kv_cache --coreml --group_size 128 -qmode 8da4w -d fp32 --verbose --max_seq_length 512 -o "/Volumes/NVME 3/ExecuTorch Models"
```
Above this there is a lot of EdgeOpOverload output, but otherwise the MIL backend and default pipelines built. Lots of ops were removed earlier on, before the MIL building. Below is some terminal output and the traceback error.