AIND library for generative (reinforcement learning) and descriptive (logistic regression) models of dynamic foraging tasks.
User documentation is available on readthedocs.
RL agents that can perform any dynamic foraging task in aind-behavior-gym and can fit behavior using maximum likelihood estimation (MLE).
- `DynamicForagingAgentMLEBase`
  - `ForagerQLearning`: simple Q-learning agents that incrementally update Q-values.
    - `agent_kwargs`:
      ```python
      number_of_learning_rate: Literal[1, 2] = 2,
      number_of_forget_rate: Literal[0, 1] = 1,
      choice_kernel: Literal["none", "one_step", "full"] = "none",
      action_selection: Literal["softmax", "epsilon-greedy"] = "softmax",
      ```
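The `agent_kwargs` above map onto the standard Q-learning ingredients: two learning rates allow separate updates on rewarded vs. unrewarded trials, the forget rate decays unchosen values, and softmax action selection converts Q-values into choice probabilities. A minimal sketch of those pieces (not the library's implementation; the names `alpha_pos`, `alpha_neg`, `forget`, and `beta` are illustrative):

```python
import math
import random

def softmax_choice(q_values, beta=5.0):
    """Pick an action with probability proportional to exp(beta * Q)."""
    exps = [math.exp(beta * q) for q in q_values]
    total = sum(exps)
    r, cum = random.random(), 0.0
    for action, e in enumerate(exps):
        cum += e / total
        if r < cum:
            return action
    return len(q_values) - 1

def q_update(q_values, choice, reward, alpha_pos=0.5, alpha_neg=0.2, forget=0.1):
    """One incremental Q-learning step with two learning rates and forgetting."""
    new_q = list(q_values)
    for a in range(len(new_q)):
        if a == choice:
            alpha = alpha_pos if reward else alpha_neg
            new_q[a] += alpha * (reward - new_q[a])
        else:
            new_q[a] = (1 - forget) * new_q[a]  # unchosen values decay toward 0
    return new_q
```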
  - `ForagerLossCounting`: loss counting agents with a probabilistic `loss_count_threshold`.
    - `agent_kwargs`:
      ```python
      win_stay_lose_switch: Literal[False, True] = False,
      choice_kernel: Literal["none", "one_step", "full"] = "none",
      ```
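A loss-counting agent stays on one side until the accumulated losses cross a (noisy) threshold, then switches. A rough sketch of that rule, assuming the probabilistic threshold is Gaussian (the names `threshold_mean` and `threshold_sd` are illustrative, not the library's API); with a threshold of 1 and no noise this reduces to win-stay-lose-switch:

```python
import random

def update_loss_count(loss_count, rewarded):
    """Reset the running loss count on reward, increment it on a loss."""
    return 0 if rewarded else loss_count + 1

def loss_counting_choice(prev_choice, loss_count,
                         threshold_mean=2.0, threshold_sd=1.0):
    """Switch sides (0 <-> 1) when the loss count crosses a noisy threshold."""
    threshold = random.gauss(threshold_mean, threshold_sd)
    if loss_count >= threshold:
        return 1 - prev_choice  # switch
    return prev_choice          # stay
```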
Here is the full list of available foragers:
Play with the generative models here.
Su 2022:
$$ logit(p(c_r)) \sim RewardedChoice + UnrewardedChoice $$
Bari 2019:
$$ logit(p(c_r)) \sim RewardedChoice + Choice $$
Hattori 2019:
$$ logit(p(c_r)) \sim RewardedChoice + UnrewardedChoice + Choice $$
Miller 2021:
$$ logit(p(c_r)) \sim Choice + Reward + Choice * Reward $$
| choice | reward | Choice | Reward | RewardedChoice | UnrewardedChoice | Choice * Reward |
|---|---|---|---|---|---|---|
| L | yes | -1 | 1 | -1 | 0 | -1 |
| L | no | -1 | -1 | 0 | -1 | 1 |
| R | yes | 1 | 1 | 1 | 0 | 1 |
| L | yes | -1 | 1 | -1 | 0 | -1 |
| R | no | 1 | -1 | 0 | 1 | -1 |
| R | yes | 1 | 1 | 1 | 0 | 1 |
| L | no | -1 | -1 | 0 | -1 | 1 |
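The encodings in the table can be reproduced programmatically. This sketch (function and variable names are illustrative) also makes the multicollinearity issue concrete: `Choice` is exactly `RewardedChoice + UnrewardedChoice` on every trial, so a model containing all three regressors has a rank-deficient design matrix:

```python
def encode_trial(choice, rewarded):
    """Encode one trial (choice: 'L' or 'R', rewarded: bool) into regressors."""
    c = 1 if choice == "R" else -1     # Choice: R -> +1, L -> -1
    r = 1 if rewarded else -1          # Reward: yes -> +1, no -> -1
    return {
        "Choice": c,
        "Reward": r,
        "RewardedChoice": c if rewarded else 0,
        "UnrewardedChoice": 0 if rewarded else c,
        "Choice*Reward": c * r,
    }

# The trial sequence from the table above
trials = [("L", True), ("L", False), ("R", True), ("L", True),
          ("R", False), ("R", True), ("L", False)]
rows = [encode_trial(c, r) for c, r in trials]

# Choice is always the sum of RewardedChoice and UnrewardedChoice,
# which is why including all three causes severe multicollinearity.
assert all(row["Choice"] == row["RewardedChoice"] + row["UnrewardedChoice"]
           for row in rows)
```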
Some observations:
| | Su 2022 | Bari 2019 | Hattori 2019 | Miller 2021 |
|---|---|---|---|---|
| Equivalent to | RewC + UnrC | RewC + (RewC + UnrC) | RewC + UnrC + (RewC + UnrC) | (RewC + UnrC) + (RewC - UnrC) + Rew |
| Severity of multicollinearity | Not at all | Medium | Severe | Slight |
| Interpretation | Like an RL model with different learning rates on rewarded and unrewarded trials. | Like an RL model that only updates on rewarded trials, plus a choice kernel (a tendency to repeat previous choices). | Like an RL model with different learning rates on rewarded and unrewarded trials, plus a choice kernel (the full RL model from the same paper). | Like an RL model with symmetric learning rates for rewarded and unrewarded trials, plus a choice kernel. However, the $Reward$ term seems to be a strawman assumption, as it means "if I get a reward on either side, I'll choose the right side more", which doesn't make much sense. |
| Conclusion | Probably the best | Okay | Not good due to the severe multicollinearity | Good |
The choice of optimizer depends on the penalty term, as listed here.
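These solver/penalty combinations (matching scikit-learn's `LogisticRegression`) can be expressed as a simple lookup to validate a requested pair before fitting. A sketch; the helper name `check_solver` is illustrative, not part of this library:

```python
# Penalties supported by each scikit-learn LogisticRegression solver.
SOLVER_PENALTIES = {
    "lbfgs": {"l2", None},
    "liblinear": {"l1", "l2"},
    "newton-cg": {"l2", None},
    "newton-cholesky": {"l2", None},
    "sag": {"l2", None},
    "saga": {"elasticnet", "l1", "l2", None},
}

def check_solver(solver, penalty):
    """Raise if the penalty is not supported by the chosen solver."""
    if penalty not in SOLVER_PENALTIES[solver]:
        raise ValueError(
            f"Solver {solver!r} does not support penalty {penalty!r}"
        )
```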
- `lbfgs` - [`l2`, `None`]
- `liblinear` - [`l1`, `l2`]
- `newton-cg` - [`l2`, `None`]
- `newton-cholesky` - [`l2`, `None`]
- `sag` - [`l2`, `None`]
- `saga` - [`elasticnet`, `l1`, `l2`, `None`]

To install the software, run
pip install aind-dynamic-foraging-models
To develop the code, clone the repo to your local machine, and run
pip install -e .[dev]
There are several libraries used to run linters, check documentation, and run tests.
Use coverage to run the tests and report test coverage:
coverage run -m unittest discover && coverage report
Use interrogate to check that modules, methods, etc. have been documented thoroughly:
interrogate .
Use flake8 to check that code is up to standards (no unused imports, etc.):
flake8 .
Use black to automatically format the code into PEP standards:
black .
Use isort to automatically sort import statements:
isort .
For internal members, please create a branch. For external members, please fork the repository and open a pull request from the fork. We'll primarily use Angular style for commit messages. Roughly, they should follow the pattern:
<type>(<scope>): <short summary>
where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:
The table below, from semantic release, shows which commit message gets you which release type when semantic-release
runs (using the default configuration):
| Commit message | Release type |
|---|---|
| `fix(pencil): stop graphite breaking when too much pressure applied` | Patch (fix) release |
| `feat(pencil): add 'graphiteWidth' option` | Minor (feature) release |
| `perf(pencil): remove graphiteWidth option`<br><br>`BREAKING CHANGE: The graphiteWidth option has been removed.`<br>`The default graphite width of 10mm is always used for performance reasons.` | Major (breaking) release<br>(Note that the `BREAKING CHANGE:` token must be in the footer of the commit) |
To generate the rst source files for documentation, run
sphinx-apidoc -o doc_template/source/ src
Then to create the documentation HTML files, run
sphinx-build -b html doc_template/source/ doc_template/build/html
More info on sphinx installation can be found here.