TheoKanning / openai-java

OpenAI API Client in Java
MIT License

How can the streamed Flowable be returned to the front end for real-time rendering? #250

Closed xia2206636330 closed 11 months ago

xia2206636330 commented 1 year ago

After calling the API from Java, how do I hand the returned stream to the front-end engineers for real-time rendering?

Flowable<ChatCompletionChunk> chatCompletionChunkFlowable = openAiService.streamChatCompletion(chatCompletionRequest).doOnError(Throwable::printStackTrace);

cryptoapebot commented 1 year ago

Not enough information. Did you register a ResponseBody callback?

There are examples here and in other places.

https://github.com/TheoKanning/openai-java/blob/4d5e496f8b18857167434cccf3c531baa1c5f2af/service/src/test/java/com/theokanning/openai/service/ResponseBodyCallbackTest.java

https://stackoverflow.com/questions/33228126/how-can-i-handle-empty-response-body-with-retrofit-2

Good luck!

xia2206636330 commented 1 year ago

/**
 * Streaming chat endpoint.
 */
@GetMapping("/streamChatCompletion")
@ResponseBody
public Flowable<ChatCompletionChunk> streamChatCompletion(String content) {
    return chatService.streamChatCompletion(content);
}

public Flowable<ChatCompletionChunk> streamChatCompletion(String content) {
    // streaming conversation
    OpenAiService openAiService = this.getOpenAiExecutorService();
    List<ChatMessage> chatMessageList = new ArrayList<>();
    ChatMessage chatMessage = new ChatMessage();
    chatMessage.setRole("user");
    chatMessage.setContent(content);
    chatMessageList.add(chatMessage);
    ChatCompletionRequest chatCompletionRequest = ChatCompletionRequest.builder()
            .model(Model.GPT_3_5_TURBO.getName())
            .messages(chatMessageList)
            .temperature(0.7)
            .maxTokens(1024)
            .topP(1.0)
            .stream(true)
            .frequencyPenalty(0.0)
            .presencePenalty(0.0)
            .build();
    return openAiService.streamChatCompletion(chatCompletionRequest)
            .doOnError(Throwable::printStackTrace);
}

I want to respond to the front end as a stream so it receives results in real time, similar to how a socket works.
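
For context, returning a Flowable from a plain Spring MVC handler only streams if the response is declared as Server-Sent Events; otherwise Spring collects the whole stream into a single JSON body. A minimal sketch of that variant, assuming Spring MVC 5+ with the RxJava/Reactor adapters on the classpath (chatService is the poster's own service from above):

// Declaring text/event-stream makes Spring MVC write each emitted chunk
// as an SSE event instead of buffering the whole Flowable into one response.
// Assumption: reactor-core and rxjava are on the classpath so Spring can
// adapt the Flowable return type.
@GetMapping(value = "/streamChatCompletion", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
@ResponseBody
public Flowable<ChatCompletionChunk> streamChatCompletion(String content) {
    return chatService.streamChatCompletion(content);
}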

cryptoapebot commented 1 year ago
    /**
     * Calls the Open AI api and returns a Flowable of SSE for streaming
     * omitting the last message.
     *
     * @param apiCall The api call
     */
    public static Flowable<SSE> stream(Call<ResponseBody> apiCall) {
        return stream(apiCall, false);
    }

    /**
     * Calls the Open AI api and returns a Flowable of SSE for streaming.
     *
     * @param apiCall  The api call
     * @param emitDone If true the last message ([DONE]) is emitted
     */
    public static Flowable<SSE> stream(Call<ResponseBody> apiCall, boolean emitDone) {
        return Flowable.create(emitter -> apiCall.enqueue(new ResponseBodyCallback(emitter, emitDone)), BackpressureStrategy.BUFFER);
    }

    /**
     * Calls the Open AI api and returns a Flowable of type T for streaming
     * omitting the last message.
     *
     * @param apiCall The api call
     * @param cl      Class of type T to return
     */
    public static <T> Flowable<T> stream(Call<ResponseBody> apiCall, Class<T> cl) {
        return stream(apiCall).map(sse -> mapper.readValue(sse.getData(), cl));
    }
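
For reference, the service-level streamChatCompletion is a thin wrapper over these helpers. A sketch of the wiring based on the signatures above, assuming api is the Retrofit OpenAiApi interface held by OpenAiService and that it exposes createChatCompletionStream returning Call<ResponseBody>:

public Flowable<ChatCompletionChunk> streamChatCompletion(ChatCompletionRequest request) {
    request.setStream(true); // the endpoint only emits SSE when stream=true
    return stream(api.createChatCompletionStream(request), ChatCompletionChunk.class);
}
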
ASSDOMINATE commented 1 year ago

Have the Controller return an SseEmitter to the front end, then have the back end push data into that SseEmitter asynchronously:

openAiService.streamChatCompletion(request).doOnError(Throwable::printStackTrace).blockingForEach((result) -> { sseEmitter.send(result); })

wangran99 commented 1 year ago

Have the Controller return an SseEmitter to the front end, then have the back end push data into that SseEmitter asynchronously: openAiService.streamChatCompletion(request).doOnError(Throwable::printStackTrace).blockingForEach((result) -> { sseEmitter.send(result); })

Are you sure it should be blockingForEach?

ASSDOMINATE commented 1 year ago

Have the Controller return an SseEmitter to the front end, then have the back end push data into that SseEmitter asynchronously: openAiService.streamChatCompletion(request).doOnError(Throwable::printStackTrace).blockingForEach((result) -> { sseEmitter.send(result); }) Are you sure it should be blockingForEach?

Yes, that is exactly what makes the streaming response work.
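
A fuller sketch of that suggestion: blockingForEach blocks whichever thread runs it until the stream completes, so the iteration is submitted to a separate executor and the SseEmitter is returned immediately. Assumptions not in the original comment: an injected ExecutorService named executor, a hypothetical buildRequest helper that assembles the ChatCompletionRequest, and a 60-second emitter timeout.

@GetMapping("/chat/stream")
public SseEmitter chatStream(String content) {
    SseEmitter emitter = new SseEmitter(60_000L); // timeout is an arbitrary choice
    // Run the blocking loop off the request thread so the emitter can be
    // returned to the client right away while chunks keep arriving.
    executor.submit(() -> {
        try {
            openAiService.streamChatCompletion(buildRequest(content)) // buildRequest: hypothetical helper
                    .blockingForEach(emitter::send);                  // push each chunk as an SSE event
            emitter.complete();
        } catch (Exception e) {
            emitter.completeWithError(e);
        }
    });
    return emitter;
}

The subscribe-based variant in the next comment achieves the same effect without tying up an extra thread, since it reacts to each chunk asynchronously.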

wangran99 commented 1 year ago

This works:

private SseEmitter emitter;

@GetMapping("chat")
public SseEmitter chatgpt(String prompt) {
    // message list
    List<ChatMessage> list = new ArrayList<>();

    // give ChatGPT an identity via the system role: an assistant
    ChatMessage chatMessage = new ChatMessage();
    chatMessage.setRole(ChatMessageRole.SYSTEM.value());
    chatMessage.setContent("You are a university teacher.");
    list.add(chatMessage);

    // the user message; content is what the user wrote
    ChatMessage userMessage = new ChatMessage();
    userMessage.setRole("user");
    userMessage.setContent("Write a Snake game and explain the steps clearly");
    list.add(userMessage);

    ChatCompletionRequest request = ChatCompletionRequest.builder()
            .messages(list)
            .stream(true)
            .model("gpt-3.5-turbo")
            .build();

    // non-streaming variant, kept from the original comment:
    // ChatCompletionResult chatCompletion = openAiService.createChatCompletion(request);
    // log.info(chatCompletion.toString());

    emitter = new SseEmitter(60 * 1000L);
    openAiService.streamChatCompletion(request)
            // earlier attempts used blockingForEach(emitter::send); subscribe()
            // reacts asynchronously instead of blocking the request thread
            .subscribe(chat -> {
                        log.info("chat receive:" + chat.toString());
                        emitter.send(chat);
                    },
                    throwable -> log.error("throw error", throwable),
                    () -> log.info("chat complete"));

    return emitter;
}


a1164714 commented 9 months ago

Can this return a Spring (WebFlux) Flux?
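
For what it's worth, RxJava's Flowable implements org.reactivestreams.Publisher, so Reactor can adapt it directly. A minimal sketch, assuming Reactor is on the classpath and reusing the chatService from the earlier comment (the /chat/flux path is illustrative):

// Flux.from(...) accepts any Reactive Streams Publisher, including Flowable.
@GetMapping(value = "/chat/flux", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<ChatCompletionChunk> chatFlux(String content) {
    return Flux.from(chatService.streamChatCompletion(content));
}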