vak
is a Python framework for neural network models,
designed for researchers studying acoustic communication:
how and why animals communicate with sound.
Many people will be familiar with work in this area on
animal vocalizations such as birdsong, bat calls, and even human speech.
Neural network models have provided a powerful new tool for researchers in this area,
as in many other fields.
The library has two main goals:
to make it easier for researchers studying animal vocalizations to apply neural network models to their data,
and to provide a common framework for benchmarking neural network models on tasks related to animal acoustic communication.
Currently, the main use is automatic annotation of vocalizations and other animal sounds. By annotation, we mean something like the example of annotated birdsong shown below.
You give vak
training data in the form of audio or spectrogram files with annotations,
and then vak
helps you train neural network models
and use the trained models to predict annotations for new files.
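Concretely, an annotation for a segmented vocalization can be as simple as a table of segment onsets, offsets, and labels. Here is a hypothetical sketch (the values are made up; the column names follow the simple CSV format that vak can read through the crowsetta library, and the full list of supported annotation formats is in the documentation):

onset_s, offset_s, label
0.305,   0.412,    a
0.459,   0.570,    b
0.623,   0.718,    a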
We developed vak
to benchmark a neural network model we call tweetynet.
Please see the eLife article here: https://elifesciences.org/articles/63853
To learn more about the goals and design of vak, please see this talk from the SciPy 2023 conference, and the associated Proceedings paper here.
For more background on animal acoustic communication and deep learning, and how these intersect with related fields like computational ethology and neuroscience, please see the "About" section below.
Short version:
pip
$ pip install vak
conda
$ conda install vak -c pytorch -c conda-forge
$ # ^ notice additional channel!
Notice that for conda
you specify two channels,
and that the pytorch
channel should come first,
so it takes priority when installing the dependencies pytorch and torchvision.
For more details, please see:
https://vak.readthedocs.io/en/latest/get_started/installation.html
We test vak
on Ubuntu and macOS. We have run vak on Windows and
know of other users successfully running vak
on that operating system,
but installation on Windows may require some troubleshooting.
A good place to start is by searching the issues.
Currently the easiest way to work with vak
is through the command line.
You run it with configuration files, using one of a handful of commands.
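For example, a typical session at the command line might look like the sketch below. The configuration file names here are hypothetical, and the options that go inside the configuration files are explained in the tutorial linked next:

$ vak prep train_config.toml       # prepare a dataset from audio + annotation files
$ vak train train_config.toml      # train a neural network model on that dataset
$ vak prep predict_config.toml     # prepare new, unannotated files for prediction
$ vak predict predict_config.toml  # predict annotations for the new files with the trained model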
For more details, please see the "autoannotate" tutorial here:
https://vak.readthedocs.io/en/latest/get_started/autoannotate.html
To learn how to use your own data with vak, please see the How-To Guides in the documentation here:
https://vak.readthedocs.io/en/latest/howto/index.html
For help, please begin by checking out the Frequently Asked Questions:
https://vak.readthedocs.io/en/latest/faq.html.
To ask a question about vak, discuss its development,
or share how you are using it,
please start a new "Q&A" topic on the VocalPy forum
with the vak tag:
https://forum.vocalpy.org/
To report a bug, or to request a feature,
please use the issue tracker on GitHub:
https://github.com/vocalpy/vak/issues
For a guide on how you can contribute to vak, please see:
https://vak.readthedocs.io/en/latest/development/index.html
If you use vak for a publication, please cite both the Proceedings paper and the software.
@inproceedings{nicholson2023vak,
title={vak: a neural network framework for researchers studying animal acoustic communication},
author={Nicholson, David and Cohen, Yarden},
booktitle={Python in Science Conference},
pages={59--67},
year={2023}
}
A citation for the software itself is here.
Are humans unique among animals?
We speak languages, but is speech somehow like other animal behaviors, such as birdsong?
Questions like these are answered by studying how animals communicate with sound.
This research requires cutting-edge computational methods and big team science across a wide range of disciplines,
including ecology, ethology, bioacoustics, psychology, neuroscience, linguistics, and genomics [1-3].
As in many other domains, this research is being revolutionized by deep learning algorithms [1-3].
Deep neural network models enable answering questions that were previously impossible to address,
in part because these models automate analysis of very large datasets.
Within the study of animal acoustic communication, multiple models have been proposed for similar tasks,
often implemented as research code using different libraries, such as Keras and PyTorch.
This situation has created a real need for a framework that allows researchers to easily benchmark models
and apply trained models to their own data. To address this need, we developed vak.
We originally developed vak to benchmark a neural network model, TweetyNet [4],
that automates annotation of birdsong by segmenting spectrograms.
TweetyNet and vak have been used in both neuroscience [6-8] and bioacoustics [9].
For additional background and papers that have used vak,
please see: https://vak.readthedocs.io/en/latest/reference/about.html
Why this name, vak? It has only three letters, so it is quick to type, and it wasn't taken on PyPI yet. Also I guess it has something to do with speech. "vak" rhymes with "squawk" and "talk".
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind welcome!