Summary

Smol-ai generates a lot of awesome code, but much of that code cannot be executed for a variety of reasons. I propose creating a test-driven framework in which unit tests are written and executed by tester subagents, and end-to-end tests are executed by a meta-agent. This would ensure the generated code is both high quality and executable.

Feature Description
I will attempt to build on the learnings from my own projects, self-improving-ai and gpt-action-builder: require that tests be created for each generated file, execute those tests, and pass the observations from the test run back to the coding agent for fixes. A CAMEL-style approach, with one testing agent and one coding agent, may be most effective.
I've begun playing with this and hope to have a pull request soon. Open to any suggestions on how to structure it!