Open · artmoskvin opened this issue 7 months ago
> [!TIP]
> I can email you next time I complete a pull request if you set up your email here!
Here are the GitHub Actions logs prior to making any changes (commit d0280cf):
1/1 ✓ Checking autocoder/ai.py for syntax errors... ✅ autocoder/ai.py has no syntax errors!
Sandbox passed on the latest main, so sandbox checks will be enabled for this issue.
I found the following snippets in your repository. I will now analyze these snippets and come up with a plan.
autocoder/ai.py
✓ https://github.com/artmoskvin/autocoder/commit/3cefbbefdbf420c54d380fb23cc796886970c00e
Modify autocoder/ai.py with contents:
• Add a new method `stream_call` to the `AI` class. This method will be responsible for handling streaming completions. It should accept an initial part of the messages and yield results incrementally as more messages are received or as the AI generates responses.
• The method signature should be: `def stream_call(self, initial_messages: List[BaseMessage]) -> Generator[str, List[BaseMessage], None]:`
• Inside `stream_call`, implement logic to handle initial messages and yield partial completions. The method should be designed to accept additional messages to continue generating further completions as needed.
• This change allows the AI to support streaming by processing and responding to messages in a more dynamic and interactive manner.
```diff
---
+++
@@ -20,3 +20,17 @@
     def call(self, messages: List[BaseMessage]) -> str:
         print_system_msg(f"Calling AI with prompt:\n{pprint_messages(messages)}")
         return self.model(messages).content
+    def stream_call(self, initial_messages: List[BaseMessage]) -> Generator[str, List[BaseMessage], None]:
+        print_system_msg(f"Starting streaming AI with initial prompt:\n{pprint_messages(initial_messages)}")
+        partial_completion = self.model(initial_messages).content
+        yield partial_completion
+        while True:
+            try:
+                new_messages = yield
+                if new_messages:
+                    print_system_msg(f"Received new messages for streaming AI:\n{pprint_messages(new_messages)}")
+                    partial_completion = self.model(new_messages).content
+                    yield partial_completion
+            except GeneratorExit:
+                print_system_msg("Streaming AI call ended.")
+                break
```
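For context, here is a minimal sketch of how a caller could drive the new generator; `demo_stream`, the message contents, and the assumption of an already-constructed `AI` instance are illustrative and not part of the commit. Because the loop receives follow-up messages through a bare `yield`, the caller has to advance the generator once between completions before `send()` can deliver a new message list.

```python
from langchain.schema import HumanMessage, SystemMessage

def demo_stream(ai) -> None:
    # `ai` is assumed to be an already-constructed AI instance (model setup omitted).
    gen = ai.stream_call([
        SystemMessage(content="You are a planning assistant."),
        HumanMessage(content="Draft a plan for the feature."),
    ])

    first = next(gen)   # first completion, generated from the initial messages
    print(first)

    next(gen)           # advance past the bare `yield` so send() can deliver messages
    follow_up = gen.send([HumanMessage(content="Refine step 2 of the plan.")])
    print(follow_up)    # completion generated from the follow-up messages

    gen.close()         # raises GeneratorExit inside stream_call and logs the end
```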
✓ Ran GitHub Actions check for `autocoder/ai.py` (commit 3cefbbefdbf420c54d380fb23cc796886970c00e).
autocoder/agent/plan.py
✓ https://github.com/artmoskvin/autocoder/commit/a0c3ffddd52c7cfb90111567beb98cd6bcb96cd1
Modify autocoder/agent/plan.py with contents:
• Modify the `generate_plan` and `generate_questions` methods to utilize the new `stream_call` method from the `AI` class for a streaming approach to generating plans and asking questions.
• In `generate_questions`, replace the call to `self.ai.call(self.chat)` with a loop that iterates over `self.ai.stream_call(self.chat)` to process questions in a streaming fashion.
• Similarly, in `generate_plan_from_chat`, replace the call to `self.ai.call(self.chat)` with a loop that iterates over `self.ai.stream_call(self.chat)` for generating the plan.
• These modifications will enable the `Plan` class to interact with the AI in a streaming manner, improving responsiveness and user experience by providing incremental updates and feedback as the AI processes the chat.
```diff
---
+++
@@ -1,5 +1,5 @@
 import ast
-from typing import List
+from typing import List, Generator
 
 from langchain.schema import SystemMessage, HumanMessage, AIMessage
 
@@ -62,13 +62,18 @@
     def generate_questions(self) -> List[str]:
         self.chat.append(SystemMessage(content=QUESTIONS_PROMPT))
         print_autocoder_msg("Thinking... :thinking_face:")
-        questions_str = self.ai.call(self.chat)
+        questions_str = "[]"
+        for partial_questions_str in self.ai.stream_call(self.chat):
+            questions_str = partial_questions_str
         return ast.literal_eval(questions_str)
 
     def generate_plan_from_chat(self) -> str:
         self.chat.append(SystemMessage(content=PLAN_PROMPT))
         print_autocoder_msg("Thinking... :thinking_face:")
-        return self.ai.call(self.chat)
+        plan = ""
+        for partial_plan in self.ai.stream_call(self.chat):
+            plan += partial_plan
+        return plan
 
     def ask_questions(self, questions: List[str]) -> None:
         while questions:
```
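As a rough illustration of the consumption pattern this diff introduces (none of the code below is in the repository), the snippet accumulates partial completions while echoing each one, mirroring the `generate_plan_from_chat` loop; `fake_stream` is a stand-in for `AI.stream_call` so the example runs without a model behind it.

```python
from typing import Generator, Iterable

def fake_stream(chunks: Iterable[str]) -> Generator[str, None, None]:
    # Stand-in for AI.stream_call: any generator of partial strings works here.
    for chunk in chunks:
        yield chunk

def assemble_plan(stream: Iterable[str]) -> str:
    # Mirrors the generate_plan_from_chat loop: accumulate partial completions
    # while echoing each one so the user gets incremental feedback.
    plan = ""
    for partial_plan in stream:
        print(partial_plan, end="", flush=True)
        plan += partial_plan
    return plan

if __name__ == "__main__":
    full_plan = assemble_plan(fake_stream(["Step 1: ...\n", "Step 2: ...\n"]))
    print("Assembled plan:\n" + full_plan)
```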
✓ Ran GitHub Actions check for `autocoder/agent/plan.py` (commit a0c3ffddd52c7cfb90111567beb98cd6bcb96cd1).
I have finished reviewing the code for completeness. I did not find errors for `sweep/enable_chat_completion_streaming`.
💡 To recreate the pull request, edit the issue title or description. To tweak the pull request, leave a comment on the pull request.
This is an automated message generated by Sweep AI.
Currently, the `AI` class handles only batch completions. To improve UX, it should also support streaming.

Checklist
- [X] Modify `autocoder/ai.py` ✓ https://github.com/artmoskvin/autocoder/commit/3cefbbefdbf420c54d380fb23cc796886970c00e
- [X] Running GitHub Actions for `autocoder/ai.py` ✓
- [X] Modify `autocoder/agent/plan.py` ✓ https://github.com/artmoskvin/autocoder/commit/a0c3ffddd52c7cfb90111567beb98cd6bcb96cd1
- [X] Running GitHub Actions for `autocoder/agent/plan.py` ✓