Issue
It occurs to me that we are creating some incredible training data by using aider and the LLM to iterate through creating, testing, and debugging code. Each cycle that ends successfully contains dozens of little nuggets that an AI could learn from to get better at coding, testing, and debugging, and that we could use to write better prompts by carefully analyzing what was requested versus what actually happened.
I'm just whiteboarding here; my knowledge level is not such that I can contribute to the development of such a thing (other than with my usage data, which I am happy to contribute), but I think this may be worth discussion and exploration.
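To make the idea concrete, here is a minimal sketch of what one mined session record might look like. This is purely hypothetical: the `CycleRecord` and `SessionRecord` classes and all field names are invented for illustration and are not part of aider; the point is just that each request/edit/test cycle is already structured enough to turn into (error, fix) training pairs.

```python
from dataclasses import dataclass, field

@dataclass
class CycleRecord:
    """One request -> edit -> test cycle (hypothetical schema)."""
    user_request: str   # what the user asked for
    model_edits: str    # the diff the LLM produced
    test_output: str    # output from running the tests
    succeeded: bool     # did this cycle end with passing tests?

@dataclass
class SessionRecord:
    """A full aider session, mined only if it ends successfully."""
    model_name: str
    cycles: list[CycleRecord] = field(default_factory=list)

    def training_pairs(self):
        # Pair each failed attempt with the edit that followed it,
        # yielding "given this request and this error, make this fix"
        # examples a model could learn from.
        for prev, nxt in zip(self.cycles, self.cycles[1:]):
            if not prev.succeeded:
                yield (prev.user_request, prev.test_output, nxt.model_edits)
```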
Version and model info
All.