Closed: xia2206636330 closed this issue 1 year ago
Not enough information. Did you register a ResponseBody callback?
There are examples here and in other places.
https://stackoverflow.com/questions/33228126/how-can-i-handle-empty-response-body-with-retrofit-2
Good luck!
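For reference, a minimal sketch of a Retrofit 2 callback that tolerates an empty response body, in the spirit of the linked answer (the endpoint api.streamCompletion and the request object are hypothetical, not from this library):

// Uses retrofit2.Call/Callback/Response and okhttp3.ResponseBody.
Call<ResponseBody> call = api.streamCompletion(request); // hypothetical endpoint
call.enqueue(new Callback<ResponseBody>() {
    @Override
    public void onResponse(Call<ResponseBody> call, Response<ResponseBody> response) {
        ResponseBody body = response.body();
        // An empty body is legal here; guard before parsing to avoid EOF/JSON errors.
        if (body == null) {
            return;
        }
        try (ResponseBody b = body) {
            // read b.source() / b.string() here
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onFailure(Call<ResponseBody> call, Throwable t) {
        t.printStackTrace();
    }
});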
/**
 * Streaming chat
 */
@GetMapping("/streamChatCompletion")
@ResponseBody
public Flowable<ChatCompletionChunk> streamChatCompletion(String content) {
    return chatService.streamChatCompletion(content);
}
public Flowable<ChatCompletionChunk> streamChatCompletion(String content) {
    // Streaming conversation
    OpenAiService openAiService = this.getOpenAiExecutorService();
    List<ChatMessage> chatMessageList = new ArrayList<>();
    ChatMessage chatMessage = new ChatMessage();
    chatMessage.setRole("user");
    chatMessage.setContent(content);
    chatMessageList.add(chatMessage);
    ChatCompletionRequest chatCompletionRequest = ChatCompletionRequest.builder()
            .model(Model.GPT_3_5_TURBO.getName())
            .messages(chatMessageList)
            .temperature(0.7)
            .maxTokens(1024)
            .topP(1.0)
            .stream(true)
            .frequencyPenalty(0.0)
            .presencePenalty(0.0)
            .build();
    return openAiService.streamChatCompletion(chatCompletionRequest)
            .doOnError(Throwable::printStackTrace);
}
I want to respond to the frontend as a stream so it can receive results in real time, similar to a socket.
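One thing worth checking (my assumption, not something confirmed in this thread): recent Spring MVC versions can adapt reactive return types such as Flowable and flush them as Server-Sent Events when the mapping declares produces = text/event-stream. A minimal sketch of that variant of the controller above:

@GetMapping(value = "/streamChatCompletion", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
@ResponseBody
public Flowable<ChatCompletionChunk> streamChatCompletion(String content) {
    // Spring adapts the Flowable and writes each emitted chunk as an SSE event.
    return chatService.streamChatCompletion(content);
}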
/**
* Calls the Open AI api and returns a Flowable of SSE for streaming
* omitting the last message.
*
* @param apiCall The api call
*/
public static Flowable<SSE> stream(Call<ResponseBody> apiCall) {
return stream(apiCall, false);
}
/**
* Calls the Open AI api and returns a Flowable of SSE for streaming.
*
* @param apiCall The api call
* @param emitDone If true the last message ([DONE]) is emitted
*/
public static Flowable<SSE> stream(Call<ResponseBody> apiCall, boolean emitDone) {
return Flowable.create(emitter -> apiCall.enqueue(new ResponseBodyCallback(emitter, emitDone)), BackpressureStrategy.BUFFER);
}
/**
* Calls the Open AI api and returns a Flowable of type T for streaming
* omitting the last message.
*
* @param apiCall The api call
* @param cl Class of type T to return
*/
public static <T> Flowable<T> stream(Call<ResponseBody> apiCall, Class<T> cl) {
return stream(apiCall).map(sse -> mapper.readValue(sse.getData(), cl));
}
The Controller returns an SseEmitter to the frontend, and the backend asynchronously pushes data into that SseEmitter: openAiService.streamChatCompletion(request).doOnError(Throwable::printStackTrace).blockingForEach((result) -> { SseEmitter.send(result); })
Are you sure it should be blockingForEach?
Yes, that is what makes the response stream back.
This works:
// private SseEmitter emitter;
// @GetMapping("chat")
// public SseEmitter chatgpt(String prompt) {
// // message list
// List
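The snippet above is cut off, so here is a hedged reconstruction of the SseEmitter approach described earlier; the executor, the timeout value, and the buildRequest helper are my assumptions, not code from this thread:

@GetMapping("/chat")
public SseEmitter chat(String prompt) {
    SseEmitter emitter = new SseEmitter(0L); // 0 = no timeout; choose and manage this properly
    ChatCompletionRequest request = buildRequest(prompt); // hypothetical helper, builds the request as shown above
    // Push chunks from a worker thread so the servlet thread can return the emitter immediately.
    Executors.newSingleThreadExecutor().submit(() -> {
        try {
            openAiService.streamChatCompletion(request)
                    .doOnError(Throwable::printStackTrace)
                    .blockingForEach(chunk -> emitter.send(chunk));
            emitter.complete();
        } catch (Exception e) {
            emitter.completeWithError(e);
        }
    });
    return emitter;
}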
Can this return a Spring (WebFlux) Flux?
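Regarding the Flux question: io.reactivex.Flowable implements the Reactive Streams Publisher interface, so it can be wrapped with Flux.from and returned directly. A minimal sketch, assuming spring-webflux/Reactor is on the classpath:

@GetMapping(value = "/streamChatCompletion", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<ChatCompletionChunk> streamChatCompletionFlux(String content) {
    // Flowable is a Reactive Streams Publisher, so Flux.from(...) adapts it without extra glue code.
    return Flux.from(chatService.streamChatCompletion(content));
}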
After calling the API from Java, how do I return the resulting stream to the frontend so it can be rendered in real time? Flowable<ChatCompletionChunk> chatCompletionChunkFlowable = openAiService.streamChatCompletion(chatCompletionRequest).doOnError(Throwable::printStackTrace);
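As an alternative to blockingForEach, that Flowable can also be subscribed to directly and its chunks pushed into the SseEmitter as they arrive; a sketch under the same assumptions as above:

SseEmitter emitter = new SseEmitter(0L);
chatCompletionChunkFlowable.subscribe(
        chunk -> emitter.send(chunk),   // forward each chunk to the browser as an SSE event
        emitter::completeWithError,     // propagate failures to the client
        emitter::complete               // close the stream when the model is done
);
return emitter;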