artmoskvin / autocoder

Coding agent prototype

Enable response streaming #26

Open artmoskvin opened 4 months ago

artmoskvin commented 4 months ago

Currently, the AI class generates only batch completions, so we have to wait until the whole completion is generated before we can send it back to the user. A common way to improve UX is to stream generated tokens as soon as they are returned from the model. Let's add a new stream method in the AI class that calls BaseChatModel.stream(...) -> Iterator[BaseMessageChunk] under the hood.

devbuddytest[bot] commented 4 months ago

To enable response streaming in the AI class, we'll follow these steps:

  1. Understand the Current Implementation:

    • The AI class currently has a call method that takes a list of messages and returns a string. This method internally calls the model (an instance of BaseChatModel) with the messages and returns the content.
    • The BaseChatModel is imported from langchain.chat_models.base.
  2. Plan for Streaming Implementation:

    • We need to add a new method in the AI class that allows streaming of generated tokens. This method will utilize the stream method of the BaseChatModel if available.
  3. Implementation Steps:

    • Step 1: Verify if the BaseChatModel has a stream method or any equivalent that supports streaming. If not, we might need to extend or modify the BaseChatModel to support streaming.
    • Step 2: Add a new method in the AI class, possibly named stream, that will handle the streaming of responses. This method should be designed to call the stream method of the BaseChatModel and yield results as they become available.
    • Step 3: Ensure that the new stream method in the AI class accepts the same type of input as the call method, i.e., a list of BaseMessage objects.
    • Step 4: Implement error handling within the stream method to manage any exceptions that might occur during the streaming process.
    • Step 5: Update any relevant documentation or comments within the code to reflect the addition of the new streaming capability.
    • Step 6: Write unit tests to verify that the streaming functionality works as expected. This includes testing for partial outputs, error handling, and ensuring that the stream closes properly when the data is fully consumed.
  4. Review and Testing:

    • Review the implementation with the team to ensure it meets the requirements and follows best practices.
    • Conduct thorough testing, including integration tests if applicable, to ensure the new streaming functionality integrates well with the existing system.
  5. Deployment:

    • Once approved, merge the changes into the main branch and deploy the updated version.

Let's discuss this plan with the team to get their input and approval before proceeding with the implementation.
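The steps above might be sketched roughly as follows. The `AI` internals (attribute name `model`, constructor signature) are assumptions, since the actual code isn't shown in this thread, and the langchain types are replaced with plain `Any` hints so the sketch stays self-contained:

```python
from typing import Any, Iterator, List


class AI:
    """Sketch of the wrapper; in the real code `model` would be a langchain
    BaseChatModel and `messages` a list of BaseMessage objects."""

    def __init__(self, model: Any):
        self.model = model

    def call(self, messages: List[Any]) -> str:
        # Existing batch behavior: block until the full completion is ready.
        return self.model(messages).content

    def stream(self, messages: List[Any]) -> Iterator[Any]:
        # New behavior: yield each chunk as soon as the model emits it.
        # BaseChatModel.stream(...) returns Iterator[BaseMessageChunk].
        yield from self.model.stream(messages)
```

Because `stream` is a generator, any exception raised by the underlying model surfaces at the call site iterating the chunks, which is where step 4's error handling would naturally live.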

artmoskvin commented 4 months ago

@devbuddytest Sounds good! I can confirm that the BaseChatModel has a stream method that returns Iterator[BaseMessageChunk].

devbuddytest[bot] commented 4 months ago

Great! Since the BaseChatModel already has a stream method that returns Iterator[BaseMessageChunk], we can proceed with the implementation plan as outlined, focusing on integrating this streaming capability into the AI class. Here's the refined plan based on the confirmation:

  1. Add Stream Method to AI Class:

    • Implement a new method named stream in the AI class. This method will leverage the stream method from the BaseChatModel to yield BaseMessageChunk objects as they are generated.
  2. Method Signature and Implementation:

    • The stream method in the AI class should accept the same parameters as the existing call method to ensure consistency.
    • Inside the stream method, call the stream method of the BaseChatModel instance, passing the necessary parameters.
    • Use a loop or iterator mechanism to yield each BaseMessageChunk as it becomes available from the BaseChatModel.stream method.
  3. Error Handling:

    • Implement error handling within the stream method to gracefully manage any exceptions or errors that might occur during the streaming process.
  4. Unit Tests:

    • Write unit tests for the new stream method to ensure it behaves as expected. This includes testing for correct streaming of message chunks, handling of errors, and proper termination of the stream.
  5. Documentation:

    • Update the documentation within the AI class to include information about the new stream method, its purpose, and how to use it.
  6. Review and Testing:

    • Review the implementation with the team for feedback and approval.
    • Conduct thorough testing, including integration tests if necessary, to ensure the streaming functionality works seamlessly with the existing system.
  7. Deployment:

    • After approval and successful testing, merge the changes into the main branch and deploy the updated version.

I'll proceed with implementing this plan. Let's start by adding the stream method to the AI class.
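A unit test along the lines of step 4 might stub the model's stream method and check that chunks pass through unchanged. The small `AI` stand-in below mirrors the delegation described above and is an assumption, not the project's actual class:

```python
from unittest.mock import MagicMock


class AI:
    """Minimal stand-in for the real class; only the delegation matters here."""

    def __init__(self, model):
        self.model = model

    def stream(self, messages):
        yield from self.model.stream(messages)


def test_stream_yields_chunks_in_order():
    # Stub the chat model so the test needs no network access.
    fake_model = MagicMock()
    fake_model.stream.return_value = iter(["Hel", "lo", "!"])

    chunks = list(AI(fake_model).stream([]))

    assert chunks == ["Hel", "lo", "!"]
    fake_model.stream.assert_called_once()
```

The same pattern extends to the error-handling test: make `fake_model.stream` raise mid-iteration and assert the exception propagates to the consumer.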

artmoskvin commented 4 months ago

@devbuddytest go for it

devbuddytest[bot] commented 4 months ago

The pull request to enable response streaming in the AI class has been successfully created. You can review and merge the changes via the following link: Enable Response Streaming in AI Class Pull Request.

This pull request introduces a new stream method in the AI class, enabling the streaming of responses from the AI model. This method leverages the stream method of the BaseChatModel, yielding BaseMessageChunk objects as they are generated, aimed at improving the user experience by providing real-time responses.
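For context, a caller might consume such a stream chunk by chunk, forwarding each token to the user as it arrives. The helper below is illustrative only and not part of the pull request:

```python
def relay(ai, messages, write):
    """Forward each chunk to the user as it arrives (e.g. via a websocket
    or stdout), then return the full text. Illustrative helper; the `ai`
    object is assumed to expose the stream method discussed above."""
    parts = []
    for chunk in ai.stream(messages):
        write(chunk)          # push the chunk to the user immediately
        parts.append(str(chunk))
    return "".join(parts)
```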