artmoskvin / autocoder

Coding agent prototype

Sweep: enable chat completion streaming #21

Open artmoskvin opened 7 months ago

artmoskvin commented 7 months ago

Currently, the `AI` class handles only batch completions. To improve the UX, it should also support streaming.
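As a minimal sketch of what the requested interface could look like (all names here are hypothetical stand-ins, not the repo's actual API): batch mode waits for the full completion, while streaming mode yields chunks as they arrive.

```python
from typing import Callable, Iterator, List

# Hypothetical stand-in: the real class wraps a langchain chat model.
Message = str

class StreamingAI:
    def __init__(self, model: Callable[[List[Message]], Iterator[str]]):
        # `model` is assumed to yield completion chunks for a prompt.
        self.model = model

    def call(self, messages: List[Message]) -> str:
        # Batch mode: wait for every chunk, return the full completion.
        return "".join(self.model(messages))

    def stream_call(self, messages: List[Message]) -> Iterator[str]:
        # Streaming mode: surface each chunk as soon as it arrives.
        yield from self.model(messages)

# Demo with a fake chunked model:
fake_model = lambda msgs: iter(["Hel", "lo, ", "world"])
ai = StreamingAI(fake_model)
streamed = "".join(ai.stream_call(["hi"]))
```

Both paths produce the same text; the streaming path simply exposes it incrementally.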

Checklist

- [X] Modify `autocoder/ai.py` ✓ https://github.com/artmoskvin/autocoder/commit/3cefbbefdbf420c54d380fb23cc796886970c00e [Edit](https://github.com/artmoskvin/autocoder/edit/sweep/enable_chat_completion_streaming/autocoder/ai.py#L16-L21)
- [X] Running GitHub Actions for `autocoder/ai.py` ✓ [Edit](https://github.com/artmoskvin/autocoder/edit/sweep/enable_chat_completion_streaming/autocoder/ai.py#L16-L21)
- [X] Modify `autocoder/agent/plan.py` ✓ https://github.com/artmoskvin/autocoder/commit/a0c3ffddd52c7cfb90111567beb98cd6bcb96cd1 [Edit](https://github.com/artmoskvin/autocoder/edit/sweep/enable_chat_completion_streaming/autocoder/agent/plan.py#L35-L72)
- [X] Running GitHub Actions for `autocoder/agent/plan.py` ✓ [Edit](https://github.com/artmoskvin/autocoder/edit/sweep/enable_chat_completion_streaming/autocoder/agent/plan.py#L35-L72)
sweep-ai[bot] commented 7 months ago

🚀 Here's the PR! #22

See Sweep's progress at the progress dashboard!
Sweep Basic Tier: I'm using GPT-4. You have 5 GPT-4 tickets left for the month and 3 for the day. (tracking ID: 143dff3bc8)



GitHub Actions ✓

Here are the GitHub Actions logs prior to making any changes:

Sandbox logs for d0280cf
Checking autocoder/ai.py for syntax errors...
✅ autocoder/ai.py has no syntax errors! 1/1 ✓

Sandbox passed on the latest main, so sandbox checks will be enabled for this issue.


Step 1: 🔎 Searching

I found the following snippets in your repository. I will now analyze these snippets and come up with a plan.

Some code snippets I think are relevant, in decreasing order of relevance. If some file is missing from here, you can mention the path in the ticket description.

- https://github.com/artmoskvin/autocoder/blob/d0280cfec5ad26c72ccbcac20985fdcea66474af/autocoder/ai.py#L15-L21
- https://github.com/artmoskvin/autocoder/blob/d0280cfec5ad26c72ccbcac20985fdcea66474af/autocoder/agent/plan.py#L35-L72

Step 2: ⌨️ Coding

--- 
+++ 
@@ -20,3 +20,17 @@
     def call(self, messages: List[BaseMessage]) -> str:
         print_system_msg(f"Calling AI with prompt:\n{pprint_messages(messages)}")
         return self.model(messages).content
+    def stream_call(self, initial_messages: List[BaseMessage]) -> Generator[str, List[BaseMessage], None]:
+        print_system_msg(f"Starting streaming AI with initial prompt:\n{pprint_messages(initial_messages)}")
+        partial_completion = self.model(initial_messages).content
+        yield partial_completion
+        while True:
+            try:
+                new_messages = yield
+                if new_messages:
+                    print_system_msg(f"Received new messages for streaming AI:\n{pprint_messages(new_messages)}")
+                    partial_completion = self.model(new_messages).content
+                    yield partial_completion
+            except GeneratorExit:
+                print_system_msg("Streaming AI call ended.")
+                break

Ran GitHub Actions for 3cefbbefdbf420c54d380fb23cc796886970c00e:
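Two problems in the diff above are worth flagging: `stream_call` is annotated with `Generator`, but the import never lands in `autocoder/ai.py` (the diff below adds it to `plan.py` instead); and the `new_messages = yield` send-based protocol interleaves `None` values between results, so a plain `for` loop over the generator never terminates. A simpler shape is a plain generator, sketched here with stand-ins for the repo's helpers (the chunked `stream` method on the model is an assumption, not the repo's actual API):

```python
from typing import Iterator, List


def print_system_msg(msg: str) -> None:
    # Stand-in for the repo's console helper.
    print(msg)


def pprint_messages(messages: List[str]) -> str:
    # Stand-in for the repo's message pretty-printer.
    return "\n".join(messages)


class AI:
    def __init__(self, model) -> None:
        self.model = model

    def stream_call(self, messages: List[str]) -> Iterator[str]:
        # A plain generator: each chunk is yielded exactly once, so
        # `for chunk in ai.stream_call(...)` terminates when the model does.
        print_system_msg(f"Calling AI with prompt:\n{pprint_messages(messages)}")
        # Assumes the underlying model exposes chunked output via a
        # hypothetical `stream` method yielding strings.
        yield from self.model.stream(messages)


class FakeModel:
    # Minimal model double for demonstration.
    def stream(self, messages: List[str]) -> Iterator[str]:
        yield from ["chunk1 ", "chunk2"]


result = "".join(AI(FakeModel()).stream_call(["hello"]))
```

Keeping the generator one-directional (yield only, no `send`) lets callers use ordinary `for` loops and `"".join(...)`.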

--- 
+++ 
@@ -1,5 +1,5 @@
 import ast
-from typing import List
+from typing import List, Generator

 from langchain.schema import SystemMessage, HumanMessage, AIMessage

@@ -62,13 +62,18 @@
     def generate_questions(self) -> List[str]:
         self.chat.append(SystemMessage(content=QUESTIONS_PROMPT))
         print_autocoder_msg("Thinking... :thinking_face:")
-        questions_str = self.ai.call(self.chat)
+        questions_str = "[]"
+        for partial_questions_str in self.ai.stream_call(self.chat):
+            questions_str = partial_questions_str
         return ast.literal_eval(questions_str)

     def generate_plan_from_chat(self) -> str:
         self.chat.append(SystemMessage(content=PLAN_PROMPT))
         print_autocoder_msg("Thinking... :thinking_face:")
-        return self.ai.call(self.chat)
+        plan = ""
+        for partial_plan in self.ai.stream_call(self.chat):
+            plan += partial_plan
+        return plan

     def ask_questions(self, questions: List[str]) -> None:
         while questions:

Ran GitHub Actions for a0c3ffddd52c7cfb90111567beb98cd6bcb96cd1:
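The consuming code in this diff has two pitfalls: `generate_questions` keeps only the last chunk (`questions_str = partial_questions_str`), so `ast.literal_eval` would see a fragment rather than the full list, and the `Generator` import is added to `plan.py` even though it is only referenced in `ai.py`. A hedged sketch of the accumulation the callers need (helper names here are hypothetical):

```python
import ast
from typing import Iterator, List


def collect(stream: Iterator[str]) -> str:
    # Join every chunk; the final chunk alone is not the whole completion.
    return "".join(stream)


def parse_questions(stream: Iterator[str]) -> List[str]:
    # The Python-literal list can only be parsed once the stream is complete.
    return ast.literal_eval(collect(stream))


# A completion split across three chunks still parses correctly.
questions = parse_questions(iter(['["How', ' many?"', ']']))
```

`generate_plan_from_chat` already concatenates chunks; `generate_questions` should do the same before parsing.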


Step 3: 🔁 Code Review

I have finished reviewing the code for completeness. I did not find errors for sweep/enable_chat_completion_streaming.



💡 To recreate the pull request, edit the issue title or description. To tweak the pull request, leave a comment on the pull request. Something wrong? Let us know.

This is an automated message generated by Sweep AI.