This repository contains a runtime environment for models of the auditory periphery, brainstem, and midbrain. The models are adapted from Verhulst et al. (2012, 2015), Nelson and Carney (2004), Carney (2015), and Zilany and Bruce (2014).
It is implemented in Python >= 3.3 and currently supports OS X, Linux, and 64-bit Windows.
The goal of this work is to explore the effects of Auditory Neuropathy on representations of complex sounds throughout the early stages of the auditory system.
One of the supported auditory periphery models is that developed by Verhulst et al., hosted here. That model will not be available unless its module is installed; contact @gvoysey for access.
This code may be cited as:
Graham Voysey et al. (2016). Corti: version 0.9. Zenodo. doi:10.5281/zenodo.57111
This repository may be installed with pip, the Python package manager:

pip install git+https://github.com/gvoysey/thesis-code.git@<TAG>

where <TAG> is a valid release, or @master to get the latest build from the master branch.
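For example, to pin to a tagged release (the tag name below is illustrative; substitute one of the tags listed on the repository's Releases page):

pip install git+https://github.com/gvoysey/thesis-code.git@v0.9   # hypothetical tag name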
If you have cloned this repository locally and are running it in a virtual environment (like you should be!), you can also install it from the cloned repo for development purposes.
If you plan on developing this model further, please fork this repo and send me (@gvoysey) a pull request when you want me to integrate the changes upstream.
The easy way to install this repo locally is with the command env/bin/pip install git+file:///path/to/your/git/repo@mybranch (or @mytag).
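A minimal sketch of that development setup, assuming a POSIX shell and a virtual environment created in env/ inside the clone (the paths and branch name are illustrative):

git clone https://github.com/gvoysey/thesis-code.git
cd thesis-code
python3 -m venv env                              # create an isolated virtual environment
env/bin/pip install git+file://$(pwd)@master     # install the local clone into it

For active development, an editable install (env/bin/pip install -e . from inside the clone) is a common alternative, since source changes then take effect without reinstalling.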
Two command-line entry points are provided:

stimulus_generator --help
    Configure stimuli from WAV files, or generate stimulus configuration templates.

corti --help
    Load stimuli, configure model parameters, run the model, and plot the output.
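If the package was installed into a virtual environment as above, the entry points live in that environment's bin/ directory; for example (assuming the env/ layout used earlier):

env/bin/stimulus_generator --help    # prints the stimulus-configuration usage text
env/bin/corti --help                 # prints the model-run and plotting usage text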