andywetherell closed this issue 10 months ago
Reading more into the documentation, I assumed assistants might turn up in the list of models, but the example code (`final models = await openAI.listModel();`) throws errors from the factory `OpenAiModel.fromJson`.
@andywetherell thank you for bringing up the issue. I believe the problem stems from decoding JSON in the factory.
If you want to use a custom model, just pass the name of your model, for example:
`final mModel = ChatModelFromValue("your custom model");`
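To show where that value ends up, here is a rough sketch of plugging the custom model into a chat request. The request shape mirrors the `ChatCompleteText`/`onChatCompletion` calls used later in this thread, the constructor argument is written exactly as in the line above, `openAI` is assumed to be your already-configured client, and `askCustomModel` is just an illustrative wrapper:

```dart
import 'package:chat_gpt_sdk/chat_gpt_sdk.dart';

// Sketch: send one user message with the custom model selected.
Future<void> askCustomModel(OpenAI openAI, String question) async {
  final mModel = ChatModelFromValue("your custom model");
  final request = ChatCompleteText(
    messages: [
      Map.of({"role": "user", "content": question})
    ],
    maxToken: 200,
    model: mModel,
  );
  final response = await openAI.onChatCompletion(request: request);
  print(response); // inspect the reply; exact fields depend on the package version
}
```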
Awesome, thanks @redevrx
Hey @redevrx, I did try using ChatModelFromValue. We put in our assistant's name and then tried its ID; neither worked.
An error was thrown saying the model doesn't exist. Are you aware of any issues with connecting to custom models? I'll read into the docs tomorrow to understand more about how it works :)
@andywetherell @redevrx I have a similar issue. Were you able to resolve it? How can I provide my assistant's ID?
@leszekkrol I don't think the package has support for it; assistants require different requests than the generic models. I ended up abandoning the package and wrote my own service, copied below in case it's helpful. I know little about HTTP or working with JSON data since I'm new to coding, so this isn't a best-practice approach, but it does work :)
To call it, you first create the thread (in initState makes the most sense for a generic chat), then create a message for that thread, then run the thread, and finally use listThreadMessage to retrieve the response from your assistant. You'll need to add your API key and assistant ID (a short usage sketch follows the code below).
My reference was this, btw; I was basically going through and turning each step into Dart code: https://platform.openai.com/docs/assistants/overview
```dart
import 'dart:convert';
import 'dart:developer' as developer;

import 'package:http/http.dart' as http;

// Placeholders: fill in your own API key and assistant ID.
const String apiKey = 'YOUR_API_KEY';
const String assistantId = 'YOUR_ASSISTANT_ID';

// Headers sent on every request; the OpenAI-Beta header is required for the Assistants beta.
final Map<String, String> headers = {
  'Authorization': 'Bearer $apiKey',
  'Content-Type': 'application/json',
  'OpenAI-Beta': 'assistants=v1',
};

// Creating a thread
Future<String> createThread() async {
  try {
    final response = await http.post(
      Uri.parse('https://api.openai.com/v1/threads'),
      headers: headers,
      body: jsonEncode({}), // empty body
    );
    if (response.statusCode == 200) {
      var responseJson = jsonDecode(response.body);
      String threadId = responseJson['id']; // Extracting the thread ID
      return threadId; // Returning the thread ID
    } else {
      return 'Error: ${response.statusCode}';
    }
  } catch (e) {
    return 'Exception: $e';
  }
}

// Creating a message on the thread
Future<void> createMessage(String threadId, String content) async {
  try {
    final response = await http.post(
      Uri.parse('https://api.openai.com/v1/threads/$threadId/messages'),
      headers: headers,
      body: jsonEncode({'role': 'user', 'content': content}),
    );
    if (response.statusCode == 200) {
      developer.log('Message sent');
    } else {
      developer.log('Error: ${response.statusCode}');
    }
  } catch (e) {
    developer.log('Exception: $e');
  }
}

// Running the thread with the assistant
Future<void> runThread(String threadId) async {
  try {
    final response = await http.post(
      Uri.parse('https://api.openai.com/v1/threads/$threadId/runs'),
      headers: headers,
      body: jsonEncode({'assistant_id': assistantId}),
    );
    if (response.statusCode == 200) {
      developer.log(response.body);
    } else {
      developer.log('Error: ${response.statusCode}');
    }
  } catch (e) {
    developer.log('Exception: $e');
  }
}

// Checking run status
Future<void> checkRunStatus(String threadId, String runId) async {
  try {
    final response = await http.get(
      Uri.parse('https://api.openai.com/v1/threads/$threadId/runs/$runId'),
      headers: headers,
    );
    if (response.statusCode == 200) {
      // return response.body;
    } else {
      // return 'Error: ${response.statusCode}';
    }
  } catch (e) {
    // return 'Exception: $e';
  }
}

// Retrieving the latest assistant message from the thread
Future<String> listThreadMessage(String threadId) async {
  try {
    final response = await http.get(
      Uri.parse('https://api.openai.com/v1/threads/$threadId/messages'),
      headers: headers,
    );
    if (response.statusCode == 200) {
      var data = jsonDecode(response.body)['data'];
      developer.log(data.toString());
      String latestResponse = "";
      // Iterate in reverse to find the latest assistant message
      for (var message in data.reversed) {
        if (message['role'] == 'assistant') {
          latestResponse = message['content'][0]['text']['value'];
          break;
        }
      }
      developer.log(latestResponse);
      return latestResponse;
    } else {
      developer.log('Error: ${response.statusCode}');
      return 'Error: ${response.statusCode}';
    }
  } catch (e) {
    return 'Exception: $e';
  }
}
```
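For completeness, here is a rough sketch of how the pieces above can be wired together for one question-and-answer round trip; the `askAssistant` name and the fixed five-second wait are just illustrative (polling the run status, as discussed further down the thread, is the more robust way to know when the reply is ready):

```dart
// Sketch: one full round trip using the functions defined above.
Future<String> askAssistant(String question) async {
  final threadId = await createThread();
  await createMessage(threadId, question);
  await runThread(threadId);
  // Crude wait before reading the reply; poll the run status instead if possible.
  await Future.delayed(const Duration(seconds: 5));
  return listThreadMessage(threadId);
}
```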
@leszekkrol @andywetherell
This package does not yet support Assistants.
Hey @redevrx is there an update to this? It looks like assistants are supported now, but I struggle to get it to work.
I am using this example function that you provided:
`openAI.assistant.create(assistant: assistant)`
and then this to make the request:
final request = ChatCompleteText(messages: [
Map.of({"role": "user", "content": question})
], maxToken: 200, model: Gpt4ChatModel());
response = await openAI.onChatCompletion(request: request);
But the answer is always very generic and doesn't fit my assistant. It seems like the assistant is completely ignored. Any hint would be much appreciated.
Hi @biodegradable000, you can try with this step.
Hi @redevrx, thanks a lot for the instructions. I'm starting to figure things out now. I've got my assistant, my thread, and my run all working. I am encountering one (last?) problem though: the "status" that I get back from the run always stays "queued" and never changes to e.g. "completed". Therefore, I don't know how long to wait before fetching the new messages from the thread. If I do it right after awaiting the run result, it's too early. I can delay the code for an arbitrary 5 seconds or so before fetching the thread messages, but that seems dumb. How do I get notified when the latest answer has arrived, if not via the "status" property of the run? Thanks a million, by the way, for this useful package! :)
@biodegradable000 You can look at these docs: https://platform.openai.com/docs/assistants/how-it-works/runs-and-run-steps
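For anyone hitting the same "queued" issue, here is a minimal polling sketch along the lines of those docs, using the same raw-HTTP style as the service code earlier in this thread; `waitForRun`, the one-second interval, and the `apiKey` placeholder are illustrative, and you need to capture the run ID from the response when the run is created:

```dart
import 'dart:convert';

import 'package:http/http.dart' as http;

const String apiKey = 'YOUR_API_KEY'; // placeholder

// Poll the run until it reaches a terminal state, after which it is safe to
// fetch the thread messages. Status values are the ones listed in the runs docs.
Future<String> waitForRun(String threadId, String runId) async {
  const terminal = {'completed', 'failed', 'cancelled', 'expired'};
  while (true) {
    final response = await http.get(
      Uri.parse('https://api.openai.com/v1/threads/$threadId/runs/$runId'),
      headers: {
        'Authorization': 'Bearer $apiKey',
        'OpenAI-Beta': 'assistants=v1',
      },
    );
    final status = jsonDecode(response.body)['status'] as String;
    if (terminal.contains(status)) return status;
    // Brief pause between polls instead of one long arbitrary delay.
    await Future.delayed(const Duration(seconds: 1));
  }
}
```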
Is it possible to use your own custom assistants with this package, or are you meant to do that tuning via the fineTune function?