dustinblackman / oatmeal

Terminal UI to chat with large language models (LLM) using different model backends, and integrations with your favourite editors!
https://dustinblackman.com/posts/oatmeal/

Panic when using openai #39

Closed: 4ydx closed this issue 6 months ago

4ydx commented 7 months ago

I thought about submitting a PR, but you have a git hook that uses cargo bin, which I'm not familiar with. Anyway, here is a (probably incorrect) first attempt:

```diff
diff --git a/src/domain/services/bubble.rs b/src/domain/services/bubble.rs
index 9af582b..bc5f3e9 100644
--- a/src/domain/services/bubble.rs
+++ b/src/domain/services/bubble.rs
@@ -218,7 +218,7 @@ impl<'a> Bubble<'_> {
                 return line.len();
             })
             .max()
-            .unwrap();
+            .unwrap_or(0);

         if max_line_length > (self.window_max_width - line_border_width) {
             max_line_length = self.window_max_width - line_border_width;
```
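For context on why the original line panics: Iterator::max returns None on an empty iterator, so the bare .unwrap() aborts whenever a bubble has no lines to measure. A minimal standalone illustration of the failure mode and the fallback (a hypothetical example, not Oatmeal's actual code):

```rust
fn main() {
    // An empty message bubble: no lines to measure.
    let lines: Vec<&str> = vec![];

    // `.max()` yields `None` for an empty iterator, so this would panic:
    // let max_line_length = lines.iter().map(|line| line.len()).max().unwrap();

    // `.unwrap_or(0)` falls back to a zero width instead of panicking.
    let max_line_length = lines.iter().map(|line| line.len()).max().unwrap_or(0);
    println!("max line length: {max_line_length}");
}
```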

After applying and compiling the change above, I'm seeing that "loading..." never goes away after gpt-4 finishes sending data. So perhaps there is something incompatible in the stream logic?

dustinblackman commented 7 months ago

Hey there! Thanks for the report.

This sounds more like a bug in reading the OpenAI responses than in how bubbles are rendered. I'll take a look!

4ydx commented 7 months ago

Agreed. I simply wanted to stop the panic so that I could even begin to think about what might actually be going on.

Is it correct to assume that you expect the loop here to exit once the request is fully processed, so that done gets marked as true?
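For reference, the two backends signal completion differently: Ollama's streaming responses carry a "done": true field in their JSON chunks, while OpenAI's streaming API ends with a literal data: [DONE] sentinel that is not valid JSON. If the reader loop only flips done based on a parsed JSON field, an OpenAI stream would never be marked finished, which would match the stuck "loading..." behaviour. A rough sketch of the kind of check involved (hypothetical function and names, not Oatmeal's actual code; assumes serde_json as a dependency):

```rust
// Hypothetical handler for one SSE line ("data: ...") from an OpenAI stream.
// Returns the next content delta, and sets `done` when the stream finishes.
fn handle_sse_line(line: &str, done: &mut bool) -> Option<String> {
    let payload = line.strip_prefix("data: ")?;

    // OpenAI terminates the stream with a literal "[DONE]" sentinel rather
    // than a JSON object, so it must be checked before parsing.
    if payload.trim() == "[DONE]" {
        *done = true;
        return None;
    }

    // Every other payload is a JSON chunk carrying the next token delta.
    let value: serde_json::Value = serde_json::from_str(payload).ok()?;
    value
        .pointer("/choices/0/delta/content")
        .and_then(|v| v.as_str())
        .map(str::to_owned)
}

fn main() {
    let mut done = false;
    let chunk = r#"data: {"choices":[{"delta":{"content":"hi"}}]}"#;
    assert_eq!(handle_sse_line(chunk, &mut done), Some("hi".to_string()));
    assert_eq!(handle_sse_line("data: [DONE]", &mut done), None);
    assert!(done);
}
```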

jim-at-jibba commented 7 months ago

I assume I'm hitting the same error when using OpenAI. Ollama models all work fine.

dustinblackman commented 6 months ago

Sorry for the delay, fixed in v0.12.4. Thanks!