This is an implementation of @andreban's use-case idea.
Input a product review and get an on-device, LLM-generated suggestion for making your review more helpful (or a thumbs-up if it was already helpful). This demo uses MediaPipe + Gemma 1. Like our other Gemma on-device demo, it offloads AI inference to a worker for better performance.
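For reviewers, here is a minimal sketch of the worker-offload pattern described above. The file name (`llm-worker.js`), message shape, and helper function are illustrative assumptions, not the demo's actual code; the worker-side MediaPipe wiring is shown as a commented-out outline since it needs a browser, WASM assets, and a downloaded Gemma model to run:

```javascript
// Pure helper (hypothetical): wrap a review in a request message for the
// worker. Keeping this as a plain function makes it easy to unit-test.
function buildReviewRequest(reviewText) {
  return { type: 'review', text: reviewText.trim() };
}

// Main thread: spin up the inference worker and wait for its suggestion.
// Guarded so this sketch is inert outside a browser environment.
if (typeof Worker !== 'undefined') {
  const worker = new Worker('llm-worker.js'); // hypothetical file name
  worker.onmessage = (e) => {
    // e.data.suggestion holds the model's advice (or a thumbs-up signal).
    console.log('Suggestion:', e.data.suggestion);
  };
  worker.postMessage(buildReviewRequest('Great product!!'));
}

// llm-worker.js (worker side, outline only): load Gemma through the
// MediaPipe LLM Inference API and answer review messages. The model path
// and prompt are assumptions.
//
//   import { FilesetResolver, LlmInference } from '@mediapipe/tasks-genai';
//
//   const genai = await FilesetResolver.forGenAiTasks(
//     'https://cdn.jsdelivr.net/npm/@mediapipe/tasks-genai/wasm');
//   const llm = await LlmInference.createFromOptions(genai, {
//     baseOptions: { modelAssetPath: '/models/gemma-2b-it-gpu-int4.bin' },
//   });
//
//   self.onmessage = async (e) => {
//     const suggestion = await llm.generateResponse(PROMPT + e.data.text);
//     self.postMessage({ suggestion });
//   };
```

The point of the split is that the heavy `generateResponse` call never touches the main thread, so the page stays responsive while the model is thinking.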
Some fixes are still TBD, and the output is not 100% reliable (it's generative AI), but this is a start. Refactoring/cleanup is still TODO in some places.
See the README and the demo screencast for the expected behavior and prerequisites.
@andreban Possible follow-ups for this demo:

- Tweak this and add a server-side implementation, as a hybrid AI proof of concept.
- Upgrade to Gemma 2 once it's available through the MediaPipe LLM Inference API. My Gemma 1 prompt took a lot of iteration, but it is still verbose and brittle.