Use Simulation For Evaluation
This PR completely overhauls how we do evaluation, as outlined in TN011 EvalData.
One of the major pain points in our approach to evaluation has been building up a sufficiently large dataset. This PR solves that problem by using examples generated from sessions produced by actual usage. As a result, the more we use Foyle, the more data we have available for evaluation.
Another challenge for evaluation has been deciding what to use as the set of learned examples during evaluation. Using actual sessions solves this problem because sessions are ordered in time. During evaluation we start out with no learned examples and replay the sessions in the same order they occurred. Foyle can then learn from those sessions using its learning process to improve accuracy on subsequent examples; the sketch below illustrates the replay loop.
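A minimal sketch of the replay loop, assuming hypothetical `Agent` and `Session` types for illustration (these are not Foyle's actual interfaces):

```go
package eval

import "context"

// Agent is a stand-in for the component that generates completions and
// learns from sessions; the real Foyle interfaces may differ.
type Agent interface {
	Generate(ctx context.Context, input string) (string, error)
	Learn(ctx context.Context, s Session) error
}

// Session is an illustrative pairing of what the user asked for and
// what they actually executed.
type Session struct {
	Input    string
	Expected string
}

// Replay feeds sessions to the agent in chronological order. The agent
// starts with no learned examples; after each session is scored it is
// handed back to the agent so later sessions benefit from learning.
func Replay(ctx context.Context, agent Agent, sessions []Session, score func(expected, actual string) error) error {
	for _, s := range sessions {
		actual, err := agent.Generate(ctx, s.Input)
		if err != nil {
			return err
		}
		if err := score(s.Expected, actual); err != nil {
			return err
		}
		if err := agent.Learn(ctx, s); err != nil {
			return err
		}
	}
	return nil
}
```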
Making the Evaluator a Simulator
To achieve this, we rework the Evaluator to act more like a simulator: it simulates what a user would do, using the sessions as examples of intent and actions.
We refactor the Evaluator to follow the pattern we first used in the AssertJob: the experiment driver (the evaluator) interacts with the Agent via RPC. This makes it easy to set up and configure an independent instance of the Agent with suitable parameters for the experiment.
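A rough sketch of how the driver might connect to such an independently running Agent, assuming a gRPC transport; the client interface and function names here are stand-ins, not Foyle's actual service definitions:

```go
package eval

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// GenerateClient is a hypothetical stand-in for the Agent's RPC client.
type GenerateClient interface {
	Generate(ctx context.Context, input string) (string, error)
}

// dialAgent connects the experiment driver to a separately started Agent
// instance that was configured specifically for the experiment.
func dialAgent(ctx context.Context, addr string) (*grpc.ClientConn, error) {
	return grpc.DialContext(ctx, addr,
		grpc.WithTransportCredentials(insecure.NewCredentials()))
}
```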
Use SQLite for Storing the Results
We rewrite the evaluator to use SQLite to store the evaluation results rather than Pebble. This gives much better querying capabilities for exploring the evaluation results.
We store the EvalResult proto as JSON rather than in binary format so that we can use SQLite's JSON functions to query the data, as sketched below.
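A minimal sketch of writing results as JSON rows; the table schema, driver, and the `evalResult` fields are assumptions for illustration, not the actual EvalResult schema:

```go
package eval

import (
	"database/sql"
	"encoding/json"

	_ "modernc.org/sqlite" // assumed pure-Go SQLite driver; any driver works
)

// evalResult stands in for the EvalResult proto; field names are illustrative.
type evalResult struct {
	ID    string  `json:"id"`
	Score float64 `json:"score"`
}

// writeResult serializes the result to JSON and inserts it so that SQLite's
// JSON functions (e.g. json_extract) can be used when exploring results.
func writeResult(db *sql.DB, r evalResult) error {
	data, err := json.Marshal(r)
	if err != nil {
		return err
	}
	_, err = db.Exec(`INSERT INTO results (id, result) VALUES (?, ?)`, r.ID, string(data))
	return err
}
```

With JSON stored in the `result` column, a query like `SELECT id FROM results WHERE json_extract(result, '$.score') < 0.5;` works directly in the sqlite shell.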
Level 1 Evals
This PR deletes the Assertor code because it is rendered out of date by all the changes. In a subsequent PR we should integrate the level 1 assertions into the evaluator.
Tracked in #261
Code Cleanup
Delete the code for computing the distance between expected and actual programs. We have switched to LLM as judge, and that metric is likely no longer useful because generated code is often a multi-line mini program that the distance metric couldn't handle.
Delete the data/eval directory. These were handcrafted evaluation examples expressed as markdown files. With this PR we are making two changes
Fix #140