antmikinka opened this issue 6 months ago
@Jack-Khuu is the on-device evaluation ready?
edit: Actually, Core ML should be able to run on Mac too. @antmikinka, are you looking for on-device evaluation, or just to evaluate the Core ML model on either Mac or iPhone?
@cccclai
Yes, I'm trying to see an evaluation for the model on the Mac. I would like to put the model on my iPhone (iPhone 13 Pro) as well.
I was trying to determine what hardware (CPU/GPU/ANE) was being utilized to compute the model.
Could not import fairseq2 modules.
Seems an issue with the executorch setup.
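One quick way to confirm whether this is a missing-dependency problem rather than an eval bug is to check that the optional modules named in the errors (fairseq2 here, and lm_eval later in the thread) are importable before running the eval script. A minimal sketch, assuming those are the relevant module names in your environment:

```python
import importlib.util


def missing_modules(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]


# fairseq2 and lm_eval are the modules mentioned in the errors in this thread
missing = missing_modules(["fairseq2", "lm_eval"])
if missing:
    print("Missing eval dependencies:", ", ".join(missing))
else:
    print("All eval dependencies importable")
```

If either shows up as missing, installing it (e.g. via pip) should rule out the setup issue before digging into the delegate path.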
> @Jack-Khuu is the on-device evaluation ready?
Eval is ready, but this error doesn't seem to be related to eval. It fails during load_llama_model, prior to eval. I'll try to narrow it down and loop in core.
I think it's related to how we expect eval to work with a delegated model, in this case Core ML.
Just as an update so this doesn't go stale: investigating Core ML eval is on our plate.
Will update as things flesh out.
I was following the Llama 2 7B guide; the consensus was not enough RAM, among other issues. I then tried the stories110M guide, which worked all the way until I went to test it. I recall lm_eval possibly not being installed (that's what my terminal said); not sure if that could be causing anything. I am trying to eval model accuracy, and that is where this error is stemming from.
File I am using to save the .pte:
Script and terminal info: