DannyWeitekamp opened this issue 3 years ago
Related to #57, which has always been my big concern with testing whole agents. Would be great to have someone go through and try to catalog all the things that use some kind of random choice and see if we can set some master seed.
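As a starting point, here is a minimal sketch of what a single master seed could look like; `seed_everything` is a hypothetical helper, and the RNGs it touches are illustrative rather than a complete audit of AL_Core's random choices.

```python
# Hypothetical sketch: one entry point for seeding every source of randomness
# we can find. The modules listed here are illustrative, not a full audit.
import os
import random

import numpy as np


def seed_everything(seed: int = 0) -> None:
    """Set one master seed for all known random number generators."""
    random.seed(seed)         # Python's stdlib RNG
    np.random.seed(seed)      # NumPy's legacy global RNG
    # Hash randomization; note this is only fully effective if the variable
    # is set before the interpreter starts.
    os.environ["PYTHONHASHSEED"] = str(seed)
```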
@cmaclell Don't you have something like the virtual tutor idea in the works? One concern with some of that would be overfitting to a test case. Might be nice to have some kind of regular batch process (probably not using current CI tools, but maybe) that re-runs our canonical experiments and reports fit to human data, just so we can see cases when we totally break something there.
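To make the "regular batch process" idea concrete, here is a rough sketch of just the reporting step, assuming the canonical experiments have already been rerun somewhere (altrain, or whatever replaces it); the experiment name and curve values below are dummy placeholders, not real data or APIs.

```python
# Sketch of the comparison step only; how the canonical experiments get
# rerun is left open. The curves and experiment names are placeholders.
import json

import numpy as np


def fit_to_human(agent_curve, human_curve):
    """RMSE between simulated and human error rates per learning opportunity."""
    n = min(len(agent_curve), len(human_curve))
    a = np.asarray(agent_curve[:n], dtype=float)
    h = np.asarray(human_curve[:n], dtype=float)
    return float(np.sqrt(np.mean((a - h) ** 2)))


if __name__ == "__main__":
    # A nightly job would loop over the canonical experiments, rerun each
    # agent, and fail loudly if the fit degrades past some threshold.
    report = {"fraction_arithmetic": fit_to_human([0.8, 0.5, 0.3],
                                                  [0.7, 0.4, 0.3])}
    print(json.dumps(report, indent=2))
    assert all(v < 0.2 for v in report.values()), "fit to human data regressed"
```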
Made a new issue for CI #74
I do, it is located here: https://gitlab.cci.drexel.edu/teachable-ai-lab/tutorenvs
You might get overfitting, but it does currently randomly generate problems / orders.
Best wishes,
Chris
So, beyond the issues that make this hard to implement, what would some good tests be? Some I can think of:
I guess part of this is what high-level assumptions we want to make about all agents. Some that seem reasonable, though they might not apply to all agent types or goals:
I'm not entirely clear on what you both mean by overfitting: does the agent overfit, or is there some fitting in the environment?
Thanks @cmaclell, these seem like a step in the right direction. I'll take a closer look.
@eharpste these are all things that would be good to incorporate. For the non-deep-learning-based agents (or at least ModularAgent), expanding on 1, it would be nice to test things that are inside the agent (more of the unit-test variety), beyond behavior, like:
I meant overfitting in a general software engineering sense, not an ML sense. Basically, we don't want to stay myopically focused on the few test cases we define when things might change as we explore new directions. For example, there are some known cases where the blocked vs. interleaved effect should invert.
Ahh I see. The unit tests I'm suggesting would be written on an agent-by-agent basis. For a given implementation there is a set of intended behaviors that should be directly enforced via unit tests. If the intended behaviors change then the unit tests should change, but if they don't change then they should still pass regardless of implementation changes or additions.
There are a lot of bugs we have run into with regard to different behavior across different implementations/generations of the code, and these issues are pretty hard to track down. I would like to move toward having a way to unit test agents.
The way we 'test' agents right now involves running the agents with altrain (which spins up another server that hosts a tutor in the browser). In the short term we look at the general behavior of the agent transactions as they are printed in the terminal, and maybe we additionally print out some of what is happening internally with the agent. In the long term we look at the learning curves, which usually involves first doing some pre-processing/KC labeling and uploading to DataShop.
It would be nice to have unit tests at the level of "the agent is at this point in training with these skills, and we'll give it these interactions, and we expect this to happen".
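For illustration, a test of that kind might look something like the following pytest-style sketch; the import path, constructor arguments, and exact `request`/`train` signatures here are assumptions for the sake of the example, not the real ModularAgent API.

```python
# Pytest-style sketch; the constructor arguments and the exact request/train
# signatures are assumptions for illustration, not the actual ModularAgent API.
import pytest

pytest.importorskip("apprentice")  # skip cleanly if AL_Core isn't installed


def test_agent_requests_known_skill():
    from apprentice.agents.ModularAgent import ModularAgent  # assumed import path

    agent = ModularAgent(feature_set=[], function_set=[])    # assumed constructor
    state = {"A": {"id": "A", "value": "2"}, "B": {"id": "B", "value": "3"}}

    # Train the agent on one worked example, then ask it for its next action.
    agent.train(state, selection="C", action="UpdateField",
                inputs={"value": "5"}, reward=1)
    response = agent.request(state)

    # With a single demonstrated skill the agent should propose that same step.
    assert response.get("selection") == "C"
    assert response.get("inputs", {}).get("value") == "5"
```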
Some impediments to doing this:

1) Modularity: At the moment, the most straightforward way to run AL is via altrain, but this is more of an integration test; it requires spinning up a new process that utilizes a completely different library (which we have kept separate from AL_Core for good reason).
2) Performance: Right now (ballpark estimate) AL_Core is 50% of training time and AL_Train is another 50%, and together all that back and forth takes several minutes per agent. It would be nice if our tests ran on the order of seconds to make iterating on the code faster.
3) No robust way to "construct" a skill (or set of skills): Skills, at least in the ModularAgent right now, are kind of a hodgepodge of learning mechanisms that are linked together. They are also learned, not defined. It would be nice if there was some language for writing and representing skills.
4) Randomness: Some of the process of learning skills is random, so this may need to be controlled in some way.
I've been flirting with the idea of having a sort of virtual tutor that is just a knowledge-base that gets updated in python, executes the tutor logic, and calls request/train. This would at least address 1) and maybe also 2).
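A very rough sketch of what that could look like, using a toy addition problem; the tutor logic is a placeholder, and the `request`/`train` calls assume an interface like the one described above rather than the actual agent API.

```python
# Rough sketch of an in-process "virtual tutor" loop: a small knowledge base
# updated in Python, tutor logic, and calls to the agent's request/train.
# The toy addition logic and the agent call signatures are assumptions.
class AdditionTutor:
    """Minimal knowledge base + tutor logic, no browser or extra server."""

    def __init__(self, a, b):
        self.state = {"arg1": str(a), "arg2": str(b), "answer": ""}
        self.correct = str(a + b)

    def check(self, inputs):
        return 1 if inputs.get("value") == self.correct else -1

    def apply(self, inputs):
        self.state["answer"] = inputs["value"]


def run_episode(agent, tutor):
    """One problem: ask the agent, grade it, train it, demonstrate if stuck."""
    response = agent.request(tutor.state)             # assumed agent API
    if not response:                                  # no applicable skill yet
        demo = {"value": tutor.correct}
        agent.train(tutor.state, selection="answer",
                    action="UpdateField", inputs=demo, reward=1)
        tutor.apply(demo)
        return
    reward = tutor.check(response.get("inputs", {}))
    agent.train(tutor.state, selection="answer",
                action="UpdateField", inputs=response["inputs"], reward=reward)
    if reward > 0:
        tutor.apply(response["inputs"])
```

Since everything stays in a single Python process, a handful of these episodes should run in well under a second, which would also help with 2).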