Open omkar806 opened 7 months ago
Hi @omkar806,
At present, this feature is unavailable. Should you wish to incorporate it, we kindly ask you to submit a feature request outlining the potential benefits for the community upon its implementation. Subsequently, we will review and forward your request to the appropriate team. Based on the ensuing discussion and demand, we can consider implementing it in the near future.
Thank you!!
Okay, I will add a feature request for this.
We are working on making it easier to build more advanced use cases on top of our LLM Inference. That being said, you can already tell the model to act like an agent by simply including this instruction in the prompt you are using. We will have more examples for this in the coming weeks.
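Until a dedicated system-prompt parameter ships, the workaround described above is to prepend the instruction to the user's query and send the combined string as the single input. A minimal sketch in Swift — the `LlmInference` names in the trailing comment are assumed from the MediaPipe Tasks GenAI docs and may differ by SDK version; the helper function and prompt template here are illustrative, not part of the API:

```swift
import Foundation

// Build one prompt string that prepends a system-style instruction to the
// user's query, since the generate call only accepts a single input string.
func buildPrompt(systemInstruction: String, userQuery: String) -> String {
    return "\(systemInstruction)\n\nUser: \(userQuery)\nAssistant:"
}

let prompt = buildPrompt(
    systemInstruction: "You are a helpful assistant named Gem.",
    userQuery: "What is your name?"
)

// The combined prompt would then be passed to the LLM Inference task,
// roughly like this (names assumed, not verified against your SDK version):
//
//   let options = LlmInference.Options(modelPath: modelPath)
//   let llmInference = try LlmInference(options: options)
//   let response = try llmInference.generateResponse(inputText: prompt)

print(prompt)
```

This keeps the "system prompt" stable across turns by rebuilding the string for every call; for multi-turn chat you would also fold the prior conversation into the same template.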
Hi: Has this guide been published?
Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
None
OS Platform and Distribution
iOS
MediaPipe Tasks SDK version
No response
Task name (e.g. Image classification, Gesture recognition etc.)
LLM inference
Programming Language and version (e.g. C++, Python, Java)
SwiftUI
Describe the actual behavior
Currently we can load Gemma 2B on iOS and chat with it in general. But if we want to set a system prompt, such as "You will act as this agent or a bot and your name is this", how can the user set this? There is only a function to generate a response that takes the user's query.
Describe the expected behaviour
The API should allow the user to set a system prompt, such as "You will act as this agent or a bot and your name is this", in addition to passing the user's query.
Standalone code/steps you may have used to try to get what you need
Other info / Complete Logs
No response