redevrx / chat_gpt_sdk

Flutter ChatGPT
https://pub.dev/packages/chat_gpt_sdk
MIT License

Using own openAI assistants #87

Closed: andywetherell closed this issue 10 months ago

andywetherell commented 10 months ago

Is it possible to use your own custom assistants using this package or are you meant to do that tuning via the fineTune function?

andywetherell commented 10 months ago

Reading more of the documentation, I assumed assistants might turn up in the list of models, but the example code (`final models = await openAI.listModel();`) throws errors from the `OpenAiModel.fromJson` factory.

redevrx commented 10 months ago

@andywetherell thank you for bringing up the issue. I believe the problem stems from decoding JSON in the factory.

redevrx commented 10 months ago

If you want to use a custom model, just pass in the name of your model, for example:

```dart
final mModel = ChatModelFromValue("your custom model");
```
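
For reference, here is a rough sketch of how that custom model value can then be passed into a chat request, using the same `ChatCompleteText` / `onChatCompletion` calls that appear elsewhere in this thread (the message content is just a placeholder):

```dart
// Sketch: plug the custom model from above into a normal chat completion request.
final request = ChatCompleteText(
  messages: [
    Map.of({"role": "user", "content": "Hello!"})
  ],
  maxToken: 200,
  model: mModel,
);
final response = await openAI.onChatCompletion(request: request);
```
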
andywetherell commented 10 months ago

Awesome, thanks @redevrx

andywetherell commented 10 months ago

Hey @redevrx I did try using ChatModelFromValue. We put in our assistant's name and then tried the ID, but neither worked.

It threw an error saying the model doesn't exist. Are you aware of any issues with connecting to custom models? I'll read into the docs tomorrow to understand more about how it works :)

leszekkrol commented 10 months ago

@andywetherell @redevrx I have a similar issue, were you able to resolve it? How can I provide my assistant's ID?

andywetherell commented 10 months ago

@leszekkrol I don't think the package has support for it; assistants require different requests from the generic models. I ended up abandoning the package and wrote a service instead, copied below in case it's helpful. I know nothing about HTTP or working with JSON data as I'm new to coding, so this is not a best-practice approach, but it does work :)

To call it, you first create the thread (in initState makes the most sense for a generic chat), then create a message for that thread, then execute a run on the thread, and finally use listThreadMessages to retrieve the response from your assistant. You'll need to add your API key and assistant ID. A sketch chaining these calls together follows the code below.

My reference was this, by the way; I was basically going through each step and turning it into Dart code: https://platform.openai.com/docs/assistants/overview

```dart
import 'dart:convert';
import 'dart:developer' as developer;

import 'package:http/http.dart' as http;

// Creates a new thread and returns its ID.
Future<String> createThread() async {
  try {
    final response = await http.post(
      Uri.parse('https://api.openai.com/v1/threads'),
      headers: {
        'Authorization': 'Bearer INSERT-YOUR-API-KEY',
        'OpenAI-Beta': 'assistants=v1',
      },
    );

    if (response.statusCode == 200) {
      var responseJson = jsonDecode(response.body);
      String threadId = responseJson['id']; // Extracting the thread ID
      return threadId; // Returning the thread ID
    } else {
      return 'Error: ${response.statusCode}';
    }
  } catch (e) {
    return 'Exception: $e';
  }
}

// Adds a user message to an existing thread.
Future<void> createThreadMessage(String threadId, String messageContent) async {
  try {
    final response = await http.post(
      Uri.parse('https://api.openai.com/v1/threads/$threadId/messages'),
      headers: {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer INSERT-YOUR-API-KEY',
        'OpenAI-Beta': 'assistants=v1',
      },
      body: jsonEncode({
        'role': 'user',
        'content': messageContent,
      }),
    );

    if (response.statusCode == 200) {
      developer.log('Message sent');
    } else {
      developer.log('Error: ${response.statusCode}');
    }
  } catch (e) {
    developer.log('Exception: $e');
  }
}

// Starts a run on the thread so the assistant processes the messages.
Future<void> executeThreadRun(String threadId) async {
  try {
    final response = await http.post(
      Uri.parse('https://api.openai.com/v1/threads/$threadId/runs'),
      headers: {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer INSERT-YOUR-API-KEY',
        'OpenAI-Beta': 'assistants=v1',
      },
      body: jsonEncode({
        'assistant_id': 'INSERT-YOUR-ASSISTANT-ID',
      }),
    );

    if (response.statusCode == 200) {
      developer.log(response.body);
    } else {
      developer.log('Error: ${response.statusCode}');
    }
  } catch (e) {
    developer.log('Exception: $e');
  }
}

// Checking run status
Future<void> retrieveRun(String threadId, String runId) async {
  try {
    final response = await http.get(
      Uri.parse('https://api.openai.com/v1/threads/$threadId/runs/$runId'),
      headers: {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer INSERT-YOUR-API-KEY',
        // Assistants endpoints require the beta header.
        'OpenAI-Beta': 'assistants=v1',
      },
    );

    if (response.statusCode == 200) {
      // return response.body;
    } else {
      // return 'Error: ${response.statusCode}';
    }
  } catch (e) {
    // return 'Exception: $e';
  }
}

// Lists the thread's messages and returns the latest assistant reply.
Future<String> listThreadMessages(String threadId) async {
  try {
    final response = await http.get(
      Uri.parse('https://api.openai.com/v1/threads/$threadId/messages'),
      headers: {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer INSERT-YOUR-API-KEY',
        'OpenAI-Beta': 'assistants=v1',
      },
    );

    if (response.statusCode == 200) {
      var data = jsonDecode(response.body)['data'];
      developer.log(data.toString());
      String latestResponse = '';

      // Iterate in reverse to find the latest assistant message
      for (var message in data.reversed) {
        if (message['role'] == 'assistant') {
          latestResponse = message['content'][0]['text']['value'];
          break;
        }
      }

      developer.log(latestResponse);
      return latestResponse;
    } else {
      developer.log('Error: ${response.statusCode}');
      return 'Error: ${response.statusCode}';
    }
  } catch (e) {
    return 'Exception: $e';
  }
}
```
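
For completeness, a rough sketch of how these helpers could be chained together; it just waits a fixed few seconds instead of polling the run status, so it's not robust:

```dart
// Sketch: chain the helpers above for a single question/answer round trip.
Future<String> askAssistant(String question) async {
  // 1. Create a thread (once per conversation, e.g. in initState).
  final threadId = await createThread();

  // 2. Add the user's message to the thread.
  await createThreadMessage(threadId, question);

  // 3. Start a run so the assistant processes the thread.
  await executeThreadRun(threadId);

  // 4. Naive wait; polling retrieveRun until the run is completed would be better.
  await Future.delayed(const Duration(seconds: 5));

  // 5. Fetch the latest assistant reply.
  return listThreadMessages(threadId);
}
```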

redevrx commented 10 months ago

@leszekkrol @andywetherell

This package does not support Assistants yet.

biodegradable000 commented 7 months ago

Hey @redevrx is there an update to this? It looks like assistants are supported now, but I struggle to get it to work.

I am using this example function that you provided: openAI.assistant.create(assistant: assistant)

and then this to call the request

```dart
final request = ChatCompleteText(
  messages: [
    Map.of({"role": "user", "content": question})
  ],
  maxToken: 200,
  model: Gpt4ChatModel(),
);
response = await openAI.onChatCompletion(request: request);
```

But the answer is always very generic and doesn't fit my assistant. It seems like the assistant is completely ignored. Any hint would be much appreciated.

redevrx commented 7 months ago

> Hey @redevrx is there an update to this? It looks like assistants are supported now, but I struggle to get it to work.
>
> I am using this example function that you provided: openAI.assistant.create(assistant: assistant)
>
> and then this to call the request
>
> ```dart
> final request = ChatCompleteText(
>   messages: [
>     Map.of({"role": "user", "content": question})
>   ],
>   maxToken: 200,
>   model: Gpt4ChatModel(),
> );
> response = await openAI.onChatCompletion(request: request);
> ```
>
> But the answer is always very generic and doesn't fit my assistant. It seems like the assistant is completely ignored. Any hint would be much appreciated.

Hi @biodegradable000, you can try with these steps:

Ref

biodegradable000 commented 7 months ago

Hi @redevrx, thanks a lot for the instructions. I'm starting to figure things out now and have my assistant, my thread, and my run all working. I'm encountering one (last?) problem though: the "status" I get back from the run always stays "queued" and never changes to, e.g., "completed". Therefore, I don't know how long to wait before fetching the new messages from the thread. If I do it right after awaiting the run result, it's too early. I can delay the code by an arbitrary 5 seconds or so before fetching the thread messages, but that seems dumb. How do I get notified when the latest answer has arrived, if not via the "status" property of the run? Thanks a million, by the way, for this useful package! :)

redevrx commented 7 months ago

@biodegradable000 You can have a look at these docs: https://platform.openai.com/docs/assistants/how-it-works/runs-and-run-steps
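
For anyone hitting the same thing, here is a minimal polling sketch along the lines of what those docs describe, written in the same raw HTTP style as the earlier comment (endpoint and headers as in retrieveRun above; the poll interval is arbitrary):

```dart
import 'dart:convert';

import 'package:http/http.dart' as http;

/// Polls a run until it leaves the "queued"/"in_progress" states,
/// then returns its final status (e.g. "completed" or "failed").
Future<String> waitForRun(String threadId, String runId) async {
  while (true) {
    final response = await http.get(
      Uri.parse('https://api.openai.com/v1/threads/$threadId/runs/$runId'),
      headers: {
        'Authorization': 'Bearer INSERT-YOUR-API-KEY',
        'OpenAI-Beta': 'assistants=v1',
      },
    );

    final status = jsonDecode(response.body)['status'] as String;
    if (status != 'queued' && status != 'in_progress') {
      return status;
    }

    // Arbitrary poll interval; add a timeout for production use.
    await Future.delayed(const Duration(seconds: 1));
  }
}
```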