Jerry23011 closed this pull request 4 weeks ago.
Actually, we can easily enable LLMStreamService to change models quickly cc781170
That's nice
Speaking of built-in ai, I noticed it's using gemini pro 1.5 flash latest. Do we use latest for gemini too? Or do we stick with the stable build?
Currently Gemini has only one `translationPrompt`, which can only be used for translation. Please refer to `BaseOpenAIService` to add prompts for querying sentences and words.
> Speaking of built-in ai, I noticed it's using gemini pro 1.5 flash latest. Do we use latest for gemini too? Or do we stick with the stable build?
Just keep it this way. I noticed that `gemini-1.5-flash` is also used in the official documentation. I use `gemini-1.5-flash-latest` in the built-in AI service because the `gemini-1.5-flash` API from Google was not yet available at the time, and using it directly would result in an error 😓
It's generally working, but sometimes it doesn't 100% conform to the format in the prompt for word translation.
Although using a single prompt seems to work for now, I recommend referring to the official docs and switching to using `ModelContent` to record few-shot examples. Note that the roles are "user" and "model", which is different from OpenAI.
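As a sketch of that suggestion (the translation prompts here are made up for illustration; only `ModelContent` and the "user"/"model" roles come from the SDK):

```swift
import GoogleGenerativeAI

// Few-shot examples recorded as alternating "user"/"model" turns.
// Note: Gemini uses the role "model" where OpenAI uses "assistant".
let fewShotHistory = [
    ModelContent(role: "user", parts: "Translate to English: 你好"),
    ModelContent(role: "model", parts: "Hello"),
    ModelContent(role: "user", parts: "Translate to English: 谢谢"),
    ModelContent(role: "model", parts: "Thank you"),
]
```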
Additionally, Gemini also supports `systemInstruction` (https://github.com/google-gemini/generative-ai-swift/pull/152); please add that as well.
I'm confused at the moment. If we are using history, what are we inputting into `systemInstruction`?
Also, `Chat` doesn't seem to support streaming? If we were to initialize the model with `startChat(history:)`, `chat` will become a `Chat` instead of a `GenerativeModel`, and `Chat` doesn't support streaming.
So I'm thinking if we can just use `systemInstruction` for the few shots, as it accepts a list of `String`. Turns out `systemInstruction` only accepts `ModelContent`.
No, the parameter is actually quite flexible, here is a simple example:

```swift
let model = GenerativeModel(
    name: model,
    apiKey: apiKey,
    safetySettings: [
        harassmentBlockNone,
        hateSpeechBlockNone,
        sexuallyExplicitBlockNone,
        dangerousContentBlockNone,
    ],
    systemInstruction: LLMStreamService.translationSystemPrompt
)

var resultString = ""

// Gemini Docs: https://github.com/google/generative-ai-swift
let chatHistory = [
    ModelContent(role: "user", parts: "Hello, I have 2 dogs in my house."),
    ModelContent(role: "model", parts: "Great to meet you. What would you like to know?"),
]

let outputContentStream = model.generateContentStream(chatHistory)
```
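For completeness, the stream would then be consumed with `for try await`; a minimal sketch, assuming `resultString` and `outputContentStream` from the example above (error handling omitted):

```swift
// Accumulate streamed chunks into resultString as they arrive.
for try await chunk in outputContentStream {
    // chunk.text is optional in the SDK; skip chunks without text.
    if let text = chunk.text {
        resultString += text
        // update the UI incrementally here
    }
}
```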
Now it works pretty well
The streaming feature is broken. Let me take another look
I'm pretty sure it has something to do with `handleResult()`, but I don't know how to fix it yet.
It should be working now.
I'll take a look at these
I have improved the code, replacing all AI const stored keys with dynamic variables: a6dcd5d3. This code can be optimized later, as can removing the unused const keys.
So we need to replace all the strings 😂
Note that the gemini-1.0-pro model does not support system instructions: https://github.com/google-gemini/generative-ai-python/issues/328
So I changed the system prompt to a user prompt for the gemini-1.0-pro model, see 22bf492d and 67a63f15.
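A minimal sketch of that fallback (model names are from this thread; `systemPrompt` and the acknowledgement turn are placeholders, not the actual code from those commits):

```swift
import GoogleGenerativeAI

let systemPrompt = "You are a translation assistant."  // placeholder

// gemini-1.0-pro does not support systemInstruction, so prepend the
// system prompt as an ordinary "user" turn for that model instead.
var history: [ModelContent] = []
var systemInstruction: ModelContent?

if modelName == "gemini-1.0-pro" {
    history.append(ModelContent(role: "user", parts: systemPrompt))
    history.append(ModelContent(role: "model", parts: "OK"))
} else {
    systemInstruction = ModelContent(parts: systemPrompt)
}
```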
Got it, thank you :)
@Jerry23011 I haven't looked closely at the Gemini API, does it support canceling stream requests? Like this https://github.com/tisfeng/Easydict/pull/577
I didn't find any, so I submitted an issue https://github.com/google-gemini/generative-ai-swift/issues/178
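Until the SDK offers explicit cancellation, one common workaround is to iterate the stream inside a `Task` and cancel that task; Swift's cooperative cancellation then stops the async iteration. A hedged sketch (whether the SDK also aborts the underlying network request on cancellation is exactly what the linked issue asks about):

```swift
// Hold a reference to the streaming task so it can be cancelled later.
let streamTask = Task {
    let stream = model.generateContentStream("Translate: hello")
    for try await chunk in stream {
        try Task.checkCancellation()  // stop promptly once cancelled
        if let text = chunk.text {
            print(text)
        }
    }
}

// Later, e.g. when the user starts a new query:
streamTask.cancel()
```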
Good, their developers are very proactive, and next, we just need to wait for them to implement this feature 😃
closes #559
The settings pane is now finished and working.
I set the default model to gemini-1.5-flash, as it has the same quota as 1.0 and offers better performance.
However, I don't know how to implement the query type or display the available models in the query view.