nazrinharris opened 1 week ago
Or I could employ both: an on-device LLM that tries to extract context and understanding (if that's even possible with a small model) and decides whether a larger LLM is needed, in which case it connects to a larger LLM API. Kind of like how Apple is supposedly doing it.
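A minimal sketch of that routing idea, assuming the on-device model can report some confidence score and that `run_local` / `call_remote` are hypothetical wrappers around the two models (neither name comes from any real API):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class LocalResult:
    text: str
    confidence: float  # 0.0-1.0, as reported by the on-device model

# Hypothetical cutoff; would need tuning against the actual small model.
CONFIDENCE_THRESHOLD = 0.7

def route(prompt: str,
          run_local: Callable[[str], LocalResult],
          call_remote: Callable[[str], str]) -> str:
    """Try the small on-device model first; escalate to the
    remote LLM API only when the local result looks unreliable."""
    result = run_local(prompt)
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return result.text
    return call_remote(prompt)
```

The nice property is that the remote call is only paid for when the local model admits it is unsure, which is roughly the hybrid setup described above.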
Kind of want to implement an actually useful feature. Right now, what I want to do is natural language processing into a transaction: the user can just use on-device dictation, speak their mind about the transaction, and the app will process it into an actual transaction.
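To make the "speech into a transaction" idea concrete, here is a rough rule-based sketch of the target shape. Everything here is hypothetical (the `Transaction` fields and the regex are illustrative); in practice the LLM would presumably emit this structure directly, with something like this as a cheap fallback:

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class Transaction:
    amount: float
    description: str

# Naive pattern: grab the first money-like number from the utterance.
AMOUNT_RE = re.compile(r"(\d+(?:\.\d{1,2})?)")

def parse_utterance(utterance: str) -> Optional[Transaction]:
    """Turn a dictated phrase into a structured transaction,
    or None if nothing money-like was found (hand off to the LLM)."""
    match = AMOUNT_RE.search(utterance)
    if not match:
        return None
    amount = float(match.group(1))
    # Remaining words become the description, with whitespace normalized.
    description = re.sub(r"\s+", " ", utterance.replace(match.group(0), "")).strip()
    return Transaction(amount=amount, description=description)
```

So "spent 4.50 on coffee" would yield an amount of 4.5 with the rest of the phrase as the description, which is the structure the on-device model (or the larger API) would be asked to fill in.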