Closed antmikinka closed 1 week ago
The log seems expected - is there any log that looks confusing?
@cccclai The only thing that was confusing was it stating the "Required memory for activation in bytes: [0, 19002368]" I wasn't sure if the ./coreml_llama2.pte file was complete or not.
Oh, it did complete. "Required memory for activation in bytes: [0, 19002368]" means that, in addition to the model's weights, we need 19002368 extra bytes for the activations when we run the model on device.
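To illustrate the point above (this is just a back-of-the-envelope sketch, not an ExecuTorch API — the function name and the fp32 weight assumption are my own), the memory needed at runtime is roughly the weight size plus the reported activation bytes:

```python
def total_runtime_bytes(weight_bytes: int, activation_bytes: int) -> int:
    """Rough upper bound on memory needed to run the model on device."""
    return weight_bytes + activation_bytes

# stories110M has ~110M parameters; assuming 4 bytes each (fp32),
# the weights are ~440 MB. The activation figure is from the export log.
weights = 110_000_000 * 4
activation = 19_002_368
print(total_runtime_bytes(weights, activation) / 1e6)  # roughly 459 MB
```

So the ~19 MB of activation memory is small relative to the weights, which is why an 8GB machine handles this model comfortably.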
@antmikinka Is the issue resolved? If not, can you please summarize what else is needed? Thanks.
I think @antmikinka was able to finish exporting, if not please file another issue. Closing.
I was following the llama pages for this repo. I do have an 8GB MacBook, so I don't know whether that is the issue. My RAM did not skyrocket and it never said "ran out of RAM", so I don't think it's a RAM issue.
Script to reproduce:

```shell
python -m examples.models.llama2.export_llama -kv --coreml -c stories110M.pt -p params.json
```
Yes, I ran and built the coreml frameworks and dependencies on 2.0 rc5.