iseeyuan opened this issue 6 months ago
Exciting. Can we try this in torchchat?
I was also looking at how to run a model on watchOS -- we already have iOS and macOS covered with ExecuTorch, so I was looking for the next exciting platform. And then... max app size: 75 MB?!
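For context, a quick back-of-envelope check (a hypothetical helper, not something from the ExecuTorch codebase) shows why the 75 MB cap is so restrictive even for the smallest LLMs:

```python
def model_size_mb(num_params: float, bits_per_weight: int) -> float:
    """Approximate on-disk size of the weights alone, ignoring metadata."""
    return num_params * bits_per_weight / 8 / 1e6

# TinyLlama has ~1.1B parameters. Even at aggressive 4-bit quantization
# the weights alone come to ~550 MB, far above a 75 MB app-size cap,
# so weights would have to be fetched at runtime rather than bundled.
print(model_size_mb(1.1e9, 4))  # → 550.0
```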
This might be an interesting experiment. It may also be an exciting item for a community member to prototype!
@iseeyuan
in addition to Phi-3, here is a list of some of the most popular tiny LLMs used within the open-source community:
TinyLlama should work out of the box, since it shares the same model architecture and tokenizer. TinyLlama is of particular interest because almost every mobile phone can run it.
I am working on getting Phi-3 enabled via ExecuTorch. Will post here once I finish it. After that, we can try it on TorchChat as well.
@salykova Thank you for the list! We picked Phi-3 because it's relatively new and popular, but we are definitely considering enabling other models. The long-term goal is to improve our infrastructure while enabling the first couple of models, so that the community can then use that infrastructure to enable others. With that said, you are welcome to enable other models, and feel free to submit a PR!
Would love to see 27 tokens/s on mobile with ExecuTorch and Phi Silica https://learn.microsoft.com/en-us/windows/ai/apis/phi-silica
We are able to export and run phi-3-mini on ExecuTorch. Please follow the readme page here.
Given the limited memory on most phones, there have been community requests to support smaller models like Phi-3 mini. It may work out of the box, but that needs verification, evaluation, and profiling.