redevrx / chat_gpt_sdk

Flutter ChatGPT
https://pub.dev/packages/chat_gpt_sdk
MIT License
319 stars 163 forks

About data from database #40

Closed dinhan92 closed 7 months ago

dinhan92 commented 1 year ago

How do I make the chat answer using the data I provide? If it cannot answer, the user should be put in contact with a real specialist.

redevrx commented 1 year ago

@dinhan92 Can you please provide me with an example?

dinhan92 commented 1 year ago

Here is my Python API. I want to use it in a Flutter app and have the response stream word by word like ChatGPT, but I have no clue how. =.=

from langchain.chat_models import ChatOpenAI
# missing imports added; these were top-level exports in the llama_index version used here
from llama_index import (
    GPTVectorStoreIndex,
    LangchainEmbedding,
    PromptHelper,
    ServiceContext,
    SimpleDirectoryReader,
    StorageContext,
    load_index_from_storage,
)
import gradio as gr
import sys
import os
from pymongo import MongoClient
from llama_index.retrievers import VectorIndexRetriever
from llama_index.query_engine import RetrieverQueryEngine
from llama_index.indices.postprocessor import SimilarityPostprocessor
from llama_index.llm_predictor.chatgpt import ChatGPTLLMPredictor
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

os.environ["OPENAI_API_KEY"] = 'sk-...'  # key redacted; never commit a real key, load it from the environment

max_input_size = 4096
num_outputs = 100
max_chunk_overlap = 20
chunk_size_limit = 600

prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit)
prepend_messages = [
    SystemMessagePromptTemplate.from_template("You are a helpful assistant that uses Vietnamese in every response")
]
llm_predictor = ChatGPTLLMPredictor(prepend_messages = prepend_messages)
embed_model = LangchainEmbedding(HuggingFaceEmbeddings())

service_context = ServiceContext.from_defaults(
        llm_predictor = llm_predictor,
        # embed_model=embed_model, 
        prompt_helper = prompt_helper)

def construct_index(directory_path):
    documents = SimpleDirectoryReader(directory_path).load_data()
    index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)

    index.storage_context.persist(persist_dir='./storage')

    return index

def chatbot(input_text):
    # rebuild storage context
    storage_context = StorageContext.from_defaults(persist_dir='./storage')

    index = load_index_from_storage(storage_context = storage_context,
                                    service_context = service_context)

    # a vector store index already retrieves by embedding similarity by default
    query_engine = index.as_query_engine()

    response = query_engine.query(input_text)
    return response.response

iface = gr.Interface(fn=chatbot,
                     inputs=gr.components.Textbox(lines=7, label="Enter your text"),
                     outputs="text",
                     title="Custom-trained AI Chatbot")

index = construct_index("docs")
iface.launch(share=True)
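The Gradio interface above returns the whole answer at once, which is why the Flutter side never sees word-by-word output. One common approach is to have the backend stream the answer over Server-Sent Events. Below is a minimal, stdlib-only sketch of the event formatting; the `[DONE]` sentinel is an assumption, and the generator would need to be plugged into a streaming-capable framework (e.g. FastAPI's `StreamingResponse` with `media_type="text/event-stream"`) rather than Gradio:

```python
from typing import Iterator

def sse_events(text: str) -> Iterator[str]:
    """Format an answer as Server-Sent Events, one word per event,
    so an SSE client (such as a Flutter app) can render it word by word."""
    for word in text.split():
        yield f"data: {word}\n\n"
    # sentinel event the client can use to stop listening (an assumed convention)
    yield "data: [DONE]\n\n"
```

In the real app, `text` would be `query_engine.query(input_text).response`.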
redevrx commented 1 year ago

You can use MongoDB to save the ChatGPT response in your backend API.
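A minimal sketch of that suggestion on the backend side, assuming pymongo and a local MongoDB instance (the connection URI, database, and collection names are placeholders):

```python
from datetime import datetime, timezone

def build_chat_record(prompt: str, response: str) -> dict:
    """Shape one prompt/response exchange into a document for storage."""
    return {
        "prompt": prompt,
        "response": response,
        "created_at": datetime.now(timezone.utc),
    }

def save_chat(collection, prompt: str, response: str):
    """Insert one chat exchange; returns the inserted document id."""
    return collection.insert_one(build_chat_record(prompt, response)).inserted_id

if __name__ == "__main__":
    # pymongo import kept local so the helpers above stay dependency-free
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder URI
    chats = client["chatbot"]["chats"]                 # placeholder db/collection
    save_chat(chats, "Hello!", "Hi, how can I help?")
```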

dinhan92 commented 1 year ago

But how do I use that response with this package? For example, with this code:

void chatCompleteWithSSE() {
  final request = ChatCompleteText(messages: [
    Map.of({"role": "user", "content": 'Hello!'})
  ], maxToken: 200, model: ChatModel.gpt_4);

  openAI.onChatCompletionSSE(request: request).listen((it) {
    debugPrint(it.choices.last.message?.content);
  });
}

Should I use it like this?

void chatCompleteWithSSE() {
  final request = ChatCompleteText(messages: [
    Map.of({"role": "user", "content": 'Hello!'})
  ], maxToken: 200, model: ChatModel.gpt_4);

  var response = getResponseFromAPI();
  openAI.onChatCompletionSSE(request: request).listen((it) {
    debugPrint(response);
  });
}

redevrx commented 1 year ago

@dinhan92 GPT-4 access is limited; you need to request access before you can use it.

dinhan92 commented 1 year ago

How about using GPT-3.5 here? How do I use my custom data?

void completeWithSSE() {
  final request = CompleteText(
      prompt: "Hello world", maxTokens: 200, model: Model.textDavinci3);
  openAI.onCompletionSSE(request: request).listen((it) {
    debugPrint(it.choices.last.text);
  });
}

redevrx commented 1 year ago

@dinhan92

 void chatCompleteWithSSE() {
  /// compare the prompt against a condition:
  /// if it matches, send the prompt to your backend API;
  /// otherwise, do not

  final request = ChatCompleteText(messages: [
    Map.of({"role": "user", "content": 'Hello!'})
  ], maxToken: 200, model: ChatModel.chatGptTurbo);

  openAI.onChatCompletionSSE(request: request).listen((it) {
    debugPrint(it.choices.last.message?.content);
  /// handle save to backend database here
  });
}
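The original question also asked for a fallback to a real specialist when the model cannot answer from the provided data. On the Python side, one way to approximate this is to threshold the retrieval similarity score before trusting the answer. This is a hedged sketch: `answer_or_escalate` is a hypothetical helper, and the 0.7 threshold and the `source_nodes`/`score` attributes (as exposed by llama_index query responses) are assumptions to tune against your own data:

```python
FALLBACK_MESSAGE = (
    "I could not find an answer in the provided documents. "
    "Please contact one of our specialists."
)

def answer_or_escalate(query_engine, question: str, min_score: float = 0.7) -> str:
    """Answer from the index, or escalate when retrieval confidence is low."""
    response = query_engine.query(question)
    # collect similarity scores of the retrieved source chunks, if any
    scores = [
        node.score
        for node in getattr(response, "source_nodes", [])
        if node.score is not None
    ]
    if not scores or max(scores) < min_score:
        return FALLBACK_MESSAGE
    return response.response
```

The Flutter side can then compare the returned text against the fallback message (or, better, a dedicated status field in the API response) to decide whether to show the specialist contact flow.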