//**********************
//* SETTINGS
//**********************
:doctype: book
:use-link-attrs:
:linkattrs:
// Github Icons
ifdef::env-github[]
:tip-caption: :bulb:
:note-caption: :information_source:
:important-caption: :heavy_exclamation_mark:
:caution-caption: :fire:
:warning-caption: :warning:
endif::[]
// Table of Contents
:toc:
:toclevels: 2
:toc-title:
:toc-placement!:
:sectanchors:
// Numbered sections
:sectnums:
:sectnumlevels: 2
// Links
:cc-by-nc-sa: http://creativecommons.org/licenses/by-nc-sa/4.0/
//**********************
//* END OF SETTINGS
//**********************
// Header
++++
Automated scientific audio data processing and bird ID.
++++

// Badges
:license-badge: https://badgen.net/badge/License/CC-BY-NC-SA%204.0/green
:os-badge: https://badgen.net/badge/OS/Linux%2C%20Windows%2C%20macOS/blue
:species-badge: https://badgen.net/badge/Species/6512/blue
:downloads-badge: https://www-user.tu-chemnitz.de/~johau/birdnet_total_downloads_badge.php
:reddit-badge: https://img.shields.io/reddit/subreddit-subscribers/BirdNET_Analyzer?style=social
// Mail icon from FontAwesome
:mail-badge: https://img.shields.io/badge/Mail us!-ccb--birdnet%40cornell.edu-yellow.svg?style=social&logo=data:image/svg%2bxml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCA1MTIgNTEyIj48IS0tISBGb250IEF3ZXNvbWUgUHJvIDYuNC4wIGJ5IEBmb250YXdlc29tZSAtIGh0dHBzOi8vZm9udGF3ZXNvbWUuY29tIExpY2Vuc2UgLSBodHRwczovL2ZvbnRhd2Vzb21lLmNvbS9saWNlbnNlIChDb21tZXJjaWFsIExpY2Vuc2UpIENvcHlyaWdodCAyMDIzIEZvbnRpY29ucywgSW5jLiAtLT48cGF0aCBkPSJNNjQgMTEyYy04LjggMC0xNiA3LjItMTYgMTZ2MjIuMUwyMjAuNSAyOTEuN2MyMC43IDE3IDUwLjQgMTcgNzEuMSAwTDQ2NCAxNTAuMVYxMjhjMC04LjgtNy4yLTE2LTE2LTE2SDY0ek00OCAyMTIuMlYzODRjMCA4LjggNy4yIDE2IDE2IDE2SDQ0OGM4LjggMCAxNi03LjIgMTYtMTZWMjEyLjJMMzIyIDMyOC44Yy0zOC40IDMxLjUtOTMuNyAzMS41LTEzMiAwTDQ4IDIxMi4yek0wIDEyOEMwIDkyLjcgMjguNyA2NCA2NCA2NEg0NDhjMzUuMyAwIDY0IDI4LjcgNjQgNjRWMzg0YzAgMzUuMy0yOC43IDY0LTY0IDY0SDY0Yy0zNS4zIDAtNjQtMjguNy02NC02NFYxMjh6Ii8+PC9zdmc+

image:{license-badge}[CC BY-NC-SA 4.0, link={cc-by-nc-sa}]
image:{os-badge}[Supported OS, link=""]
image:{species-badge}[Number of species, link=""]
image:{downloads-badge}[Downloads, link=""]

[.text-center]
image:{mail-badge}[Email, link=mailto:ccb-birdnet@cornell.edu, height=25]
image:{reddit-badge}[Subreddit subscribers, link="https://reddit.com/r/BirdNET_Analyzer", height=25]
[discrete]
== Introduction
This repo contains BirdNET models and scripts for processing large amounts of audio data or single audio files. This is the most advanced version of BirdNET for acoustic analyses, and we will keep this repository up to date with new models and improved interfaces to enable scientists with no computer-science background to run the analysis.
https://github.com/kahst/BirdNET-Analyzer/releases/download/1.4.0/BirdNET-Analyzer-1.4.0-win_amd64.exe[*Click here to download the Windows installer*] and follow the https://github.com/kahst/BirdNET-Analyzer#setup-windows[setup instructions].
https://github.com/kahst/BirdNET-Analyzer/releases/download/1.4.0/BirdNET-Analyzer-1.4.0-mac_arm64.pkg[*Click here to download the macOS package*].
https://tuc.cloud/index.php/s/2TX59Qda2X92Ppr/download/BirdNET_GLOBAL_6K_V2.4_Model_Raven.zip[*Download the newest Raven model here*] and follow the https://github.com/kahst/BirdNET-Analyzer#setup-raven-pro[setup instructions].
Feel free to use BirdNET for your acoustic analyses and research. If you do, please cite as:
This work is licensed under a {cc-by-nc-sa}[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License].
[discrete]
== About
Developed by the https://www.birds.cornell.edu/ccb/[K. Lisa Yang Center for Conservation Bioacoustics] at the https://www.birds.cornell.edu/home[Cornell Lab of Ornithology] in collaboration with https://www.tu-chemnitz.de/index.html.en[Chemnitz University of Technology].
Go to https://birdnet.cornell.edu to learn more about the project.
Want to use BirdNET to analyze a large dataset? Don't hesitate to contact us: ccb-birdnet@cornell.edu
We also have a discussion forum on https://reddit.com/r/BirdNET_Analyzer[Reddit] if you have a general question or just want to chat.
Have a question, remark, or feature request? Please start a new issue thread to let us know. Feel free to submit a pull request.
[discrete]
== Contents

toc::[]
== Usage guide
This document provides instructions for downloading and installing the GUI, and conducting some of the most common types of analyses. Within the document, a link is provided to download example sound files that can be used for practice.
Download the PDF here: https://zenodo.org/records/8357176[BirdNET-Analyzer Usage Guide]
Watch our presentation on how to use BirdNET-Analyzer to train your own models: https://youtu.be/HuEZGIPeyq0[BirdNET - BioacousTalks at YouTube]
== Showroom
BirdNET powers a number of fantastic community projects dedicated to bird song identification, all of which use models from this repository. These are some highlights; make sure to check them out!
.Community projects
[cols="~,~", options="header"]
|===
| Project | Description

| image:https://tuc.cloud/index.php/s/cDqtQxo8yMRkNYP/download/logo_box_loggerhead.png[HaikuBox,300,link=https://haikubox.com] | HaikuBox +
Once connected to your WiFi, Haikubox will listen for birds 24/7. When BirdNET finds a match between its thousands of labeled sounds and the birdsong in your yard, it identifies the bird species and shares a three-second audio clip to the Haikubox website and smartphone app.
Learn more at: https://haikubox.com[HaikuBox.com]
| image:https://tuc.cloud/index.php/s/WKCZoE9WSjimDoe/download/logo_box_birdnet-pi.png[BirdNET-PI,300,link=https://birdnetpi.com] | BirdNET-Pi +
Built on the TFLite version of BirdNET, this project uses pre-built TFLite binaries for Raspberry Pi to run on-device sound analyses. It is able to recognize bird sounds from a USB sound card in real time and share its data with the rest of the world.
Note: You can find the most up-to-date version of BirdNET-PI at https://github.com/Nachtzuster/BirdNET-Pi[github.com/Nachtzuster/BirdNET-Pi]
Learn more at: https://birdnetpi.com[BirdNETPi.com]
| image:https://tuc.cloud/index.php/s/jDtyG9W36WwKpbR/download/logo_box_birdweather.png[BirdWeather,300,link=https://app.birdweather.com] | BirdWeather +
This site was built to be a living library of bird vocalizations. Using the BirdNET artificial neural network, BirdWeather is continuously listening to over 1,000 active stations around the world in real time.
Learn more at: https://app.birdweather.com[BirdWeather.com]
| image:https://tuc.cloud/index.php/s/kqT7GXXzfDs3NyA/download/birdnetlib-logo.png[birdnetlib,300,link=https://joeweiss.github.io/birdnetlib/]
| birdnetlib +
A Python API for BirdNET-Analyzer and BirdNET-Lite, providing a common interface to both.
Learn more at: https://joeweiss.github.io/birdnetlib/[github.io/birdnetlib]
| image:https://tuc.cloud/index.php/s/zpNkXJq7je3BKNE/download/logo_box_ecopi_bird.png[ecoPI:Bird,300,link=https://oekofor.netlify.app/en/portfolio/ecopi-bird_en/] | ecoPi:Bird +
The ecoPi:Bird is a device for automated acoustic recordings of bird songs and calls, with a self-sufficient power supply. It facilitates economical long-term monitoring, implemented with minimal personnel requirements.
Learn more at: https://oekofor.netlify.app/en/portfolio/ecopi-bird_en/[oekofor.netlify.app]
| image:https://tuc.cloud/index.php/s/HQiPxG2rKbmDb64/download/dawn_chorus_logo.png[DawnChorus,300,link=https://dawn-chorus.org/en/] | Dawn Chorus +
Dawn Chorus invites global participation to record bird sounds for biodiversity research, art, and raising awareness. This project aims to sharpen our senses and creativity by connecting us more deeply with the wonders of nature.
Learn more at: https://dawn-chorus.org/en/[dawn-chorus.org]
| image:https://tuc.cloud/index.php/s/M27nZ4LmNaNEKMg/download/chirpity_logo.png[Chirpity,300,link=https://chirpity.mattkirkland.co.uk] | Chirpity +
Chirpity is a desktop application for Windows, Mac and Linux. Optimized for speed and ease of use, it can analyze anything from short clips to hundreds of hours of audio. Detections can be validated against reference calls, edited, and saved to a call library. Results can also be exported to a variety of formats including CSV, Raven and eBird.
Learn more at: https://chirpity.mattkirkland.co.uk[chirpity.mattkirkland.co.uk]
| image:https://raw.githubusercontent.com/tphakala/birdnet-go/main/doc/BirdNET-Go-logo.webp[Go-BirdNET,300,link=https://github.com/tphakala/go-birdnet] | Go-BirdNET +
Go-BirdNET is an application inspired by BirdNET-Analyzer. While the original BirdNET is based on Python, Go-BirdNET is built using Golang, aiming for simplified deployment across multiple platforms, from Windows PCs to single board computers like the Raspberry Pi.
Learn more at: https://github.com/tphakala/go-birdnet[github.com/tphakala/go-birdnet]
| image:https://github.com/woheller69/whoBIRD/blob/master/fastlane/metadata/android/en-US/images/icon.png[whoBIRD,300,link=https://github.com/woheller69/whoBIRD] | whoBIRD +
whoBIRD empowers you to identify birds anywhere, anytime, without an internet connection. Built upon the TFLite version of BirdNET, this Android application harnesses the power of machine learning to recognize birds directly on your device.
Learn more at: https://github.com/woheller69/whoBIRD[whoBIRD]
| image:https://tuc.cloud/index.php/s/gKEcaoqPEB9MHpp/download/logo_muuttolintujen_kev%C3%A4t.png[Muuttolintujen Kevät,300,link=https://www.jyu.fi/en/research/muuttolintujen-kevat] | Muuttolintujen Kevät +
Muuttolintujen Kevät ("Spring of Migratory Birds") is a mobile application developed at the University of Jyväskylä, enabling users to record bird songs and make bird observations using a re-trained version of BirdNET.
Learn more at: https://www.jyu.fi/en/research/muuttolintujen-kevat[jyu.fi]
| image:https://github.com/ssciwr/faunanet/blob/master/faunanet_logo.png[faunanet,300,link=https://github.com/ssciwr/faunanet] | faunanet +
faunanet provides a platform for bioacoustics research projects and is an extension of BirdNET-Analyzer based on birdnetlib. faunanet is written in pure Python and is developed by the Scientific Software Center at the University of Heidelberg, Germany.
Learn more at: https://github.com/ssciwr/faunanet[faunanet]
| image:https://github.com/ecomontec/ecoSound-web/blob/master/src/assets/images/ecosound-web_logo_large_white_on_black.png[ecoSound-web,300,link=https://ecosound-web.de/ecosound_web/] | ecoSound-web +
ecoSound-web is a web application for ecoacoustics to manage, re-sample, navigate, visualize, annotate, and analyze soundscape recordings. It can execute BirdNET on recording batches and is currently being developed at INRAE, France.
Learn more at: https://f1000research.com/articles/9-1224/v3[F1000Research] and https://github.com/ecomontec/ecoSound-web[GitHub]
|===
Other cool projects:
Working on a cool project that uses BirdNET? Let us know and we can feature your project here.
== Projects map
We have created an interactive map of projects that use BirdNET. If you are working on a project that uses BirdNET, please let us know https://github.com/kahst/BirdNET-Analyzer/issues/221[here] and we can add it to the map.
You can access the map here: https://kahst.github.io/BirdNET-Analyzer/projects.html[Open projects map]
== Model version update
[discrete]
==== V2.4, June 2023
You can find a list of previous versions here: https://github.com/kahst/BirdNET-Analyzer/tree/main/checkpoints[BirdNET-Analyzer Model Version History]
[discrete]
==== Species range model V2.4 - V2, Jan 2024
== Technical Details
Model V2.4 uses the following settings:
== Setup

=== Setup (Raven Pro)
If you want to analyze audio files without any additional coding or package install, you can now use https://ravensoundsoftware.com/software/raven-pro/[Raven Pro software] to run BirdNET models. After download, BirdNET is available through the new "Learning detector" feature in Raven Pro. For more information on how to use this feature, please visit the https://ravensoundsoftware.com/article-categories/learning-detector/[Raven Pro Knowledge Base].
. https://tuc.cloud/index.php/s/2TX59Qda2X92Ppr/download/BirdNET_GLOBAL_6K_V2.4_Model_Raven.zip[Download the newest model version here], extract the zip-file and move the extracted folder to the Raven models folder. On Windows, the models folder is +C:\Users\<Your user name>\Raven Pro 1.6\Models+.
. Start Raven Pro and select BirdNET_GLOBAL_6K_V2.4_Model_Raven as learning detector.
=== Setup (Python package)
The easiest way to set up BirdNET on your machine is to install https://joeweiss.github.io/birdnetlib/[birdnetlib] or https://pypi.org/project/birdnet/[birdnet] through pip with:
[source,sh]
----
pip3 install birdnetlib
----

or

[source,sh]
----
pip3 install birdnet
----
Please take a look at the https://joeweiss.github.io/birdnetlib/#using-birdnet-analyzer[birdnetlib user guide] on how to analyze audio with birdnetlib.

When using the +birdnet+ package, you can run BirdNET with:
[source,python]
----
from pathlib import Path

from birdnet.models import ModelV2M4

# create model instance for v2.4
model = ModelV2M4()

# predict only species expected at this location and time of year
species_in_area = model.predict_species_at_location_and_time(42.5, -76.45, week=4)
predictions = model.predict_species_within_audio_file(
    Path("soundscape.wav"),
    filter_species=set(species_in_area.keys())
)

# get the most probable prediction within the first 3-second interval
prediction, confidence = list(predictions[(0.0, 3.0)].items())[0]
print(f"predicted '{prediction}' with a confidence of {confidence:.6f}")
----
For more examples and documentation, make sure to visit https://pypi.org/project/birdnet/[pypi.org/project/birdnet/].
For any feature request or questions regarding +birdnet+, please add an issue or PR at https://github.com/birdnet-team/birdnet[github.com/birdnet-team/birdnet].
=== Setup (Ubuntu)
Install Python 3.10:
Install TFLite runtime (recommended) or TensorFlow (has to be 2.15):

[source,sh]
----
pip3 install tflite-runtime
----

or
Install Librosa to handle audio files:
Clone the repository
=== Setup (Windows)
Before you attempt to set up BirdNET-Analyzer on your Windows machine, please consider downloading our fully packaged version, which does not require you to install any additional packages and can be run "as-is".
You can download this version here: https://github.com/kahst/BirdNET-Analyzer/releases/download/1.4.0/BirdNET-Analyzer-1.4.0-win_amd64.exe[BirdNET-Analyzer Windows]
. Download the https://github.com/kahst/BirdNET-Analyzer/releases/download/1.4.0/BirdNET-Analyzer-1.4.0-win_amd64.exe[*BirdNET-Analyzer-setup.exe*] file
. Before installing, make sure to right-click the exe-file, select "Properties" and check the box "Unblock" under "Security" at the bottom of the "General" tab.
* If Windows does not display this option, the file can be unblocked with the PowerShell 7 command +Unblock-File -Path .\BirdNET-Analyzer.zip+
. During installation, you may see a warning "Windows protected your PC" due to the lack of a digital signature. Simply select "More info" and then "Run anyway" to proceed with the installation.
. Follow the on-screen instructions
. After installation, click the desktop icon or navigate to the installation folder at +C:\Users\<Your user name>\AppData\Local\Programs\BirdNET-Analyzer+
. You can start the analysis through the command prompt with +BirdNET-Analyzer.exe --i "path\to\folder" ...+ (see the <<usage-cli,Usage (CLI) section>> for more details), or you can launch +BirdNET-Analyzer-GUI.exe+ to start the analysis through a basic GUI.
For more advanced use cases (e.g., hosting your own API server), follow these steps to set up BirdNET-Analyzer on your Windows machine:
Install Python 3.10 or higher (must be the 64-bit version)

WARNING: Make sure to check ☑ "Add path to environment variables" during installation.
Install TensorFlow (has to be 2.5 or later), Librosa and NumPy: press Win + S, type "command" and click on "Command Prompt", then run:

[source,sh]
----
pip install --upgrade pip
pip install librosa resampy
pip install tensorflow
----
NOTE: You might need to run the command prompt as administrator: press Win + S, search for "command prompt", then right-click and select "Run as administrator".
Install Visual Studio Code (optional)
Install BirdNET using Git (for a simple download see below):

[source,sh]
----
git clone https://github.com/kahst/BirdNET-Analyzer.git
----

Run +git pull+ in the BirdNET-Analyzer folder occasionally to keep your copy up to date.

Install BirdNET from zip
Run BirdNET from the command line: press Win + S, type "command" and click on "Command Prompt".

NOTE: With Visual Studio Code installed, you can right-click the BirdNET-Analyzer folder and select "Open with Code". With the proper extensions installed (View -> Extensions -> Python) you will be able to run all scripts from within VS Code.
=== Setup (macOS)
NOTE: Installation was only tested on M1 and M2 chips. Feedback on older Intel CPUs or newer M3 chips is welcome!
==== Requirements
You need to install the Xcode command-line tools:
Clone the git repository into your preferred folder if you have not done that yet:
==== Set up the environment

We are going to create a virtual environment to install the required packages. Virtual environments allow you to manage separate package installations for different projects.
WARNING: Make sure that you are using Python 3.10; if not, install it from the https://www.python.org/downloads/release/python-3100/[Python website].
The next time you want to use BirdNET, go to the BirdNET-Analyzer folder and run +source venv-birdnet/bin/activate+ to activate the virtual environment.
==== Install dependencies
TensorFlow for macOS and Metal plug-in:
Librosa and ffmpeg:
==== Verify
Run the example. It will take a while the first time you run it. Subsequent runs will be faster.
NOTE: Now, you can install and use <<_setup_python_package,birdnet>>.
== Usage

[[usage-cli]]
=== Usage (CLI)
. Run +analyzer.py+ to analyze an audio file.
You need to set paths for the audio file and selection table output. Here is an example:
+
[source,sh]
----
python3 -m birdnet_analyzer.analyze --i example/ --o example/ --slist example/ --min_conf 0.5 --threads 4
----
+
NOTE: Your custom species list has to be named 'species_list.txt' and the folder containing the list needs to be specified with +--slist /path/to/folder+.
+
You can also specify the number of CPU threads that should be used for the analysis with +--threads <Integer>+ (e.g., +--threads 16+).
If you provide GPS coordinates with +--lat+ and +--lon+, the custom species list argument will be ignored.
. Run +embeddings.py+ to extract feature embeddings instead of class predictions.
The result file will contain timestamps and lists of float values representing the embedding for a particular 3-second segment.
Embeddings can be used for clustering or similarity analysis.
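As a minimal sketch of such a similarity analysis, two segment embeddings can be compared with cosine similarity (the short vectors below are made-up placeholders, not real BirdNET embeddings, which are much longer float vectors read from the result file):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# two hypothetical 3-second segment embeddings
seg1 = [0.12, -0.45, 0.88, 0.10]
seg2 = [0.10, -0.40, 0.90, 0.05]
print(cosine_similarity(seg1, seg2))  # close to 1.0 -> acoustically similar
```

Segments whose embeddings have high cosine similarity tend to contain similar sounds, which is the basis for clustering detections.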
Here is an example:
+
[source,sh]
----
python3 -m birdnet_analyzer.embeddings --i example/ --o example/ --threads 4
----

. Run +segments.py+ to extract short audio segments for species detections to verify results.
This way, it might be easier to review results instead of loading hundreds of result files manually.
+
Here's a complete list of all command line arguments:
. For a custom +species_list.txt+ file, make sure to copy species names from the labels file of each model.
+
You can find label files in the checkpoints folder, e.g., +checkpoints/V2.3/BirdNET_GLOBAL_3K_V2.3_Labels.txt+.
+
Species names need to consist of +scientific name_common name+ to be valid.
. You can generate a species list for a given location using +species.py+ in case you need it for reference.
Here is an example:
+
[source,sh]
----
python3 -m birdnet_analyzer.species --o example/species_list.txt --lat 42.5 --lon -76.45 --week 4
----
+
The year-round list may contain some species that are not included in any list for a specific week. See https://github.com/kahst/BirdNET-Analyzer/issues/211 for more details.

. This is a very basic version of the analysis workflow; you might need to adjust it to your own needs.
. Please open an issue to ask for new features or to document unexpected behavior.
. I will keep models up to date and upload new checkpoints whenever there is an improvement in performance. I will also provide quantized and pruned model files for distribution.
=== Usage (Docker)
Install Docker for Ubuntu:
Build Docker container:
NOTE: You need to run docker build again whenever you make changes to the script.
In order to pass a directory that contains your audio files to the docker container, you need to mount it inside the container with +-v /my/path:/mount/path+ before you can run the container.
You can run the container for the provided example soundscapes with:
You can adjust the directory that contains your recordings by providing an absolute path:
You can also mount more than one drive, e.g., if input and output folder should be different:
See the <<usage-cli,Usage (CLI) section>> above for more command line arguments; all of them will work with the Docker version.
NOTE: If you'd like to specify a species list (which will be used as a post-filter and needs to be named 'species_list.txt'), you need to put it into a folder that also has to be mounted.
=== Usage (Server)
You can host your own analysis service and API by launching the +server.py+ script.
This will allow you to send files to this server, store submitted files, analyze them and send detection results back to a client.
This could be a local service, running on a desktop PC, or a remote server.
The API can be accessed locally or remotely through a browser or Python client (or any other client implementation).
. Install the bottle package with +pip3 install bottle+.
. Start the server with +python3 -m birdnet_analyzer.server+.
You can also specify a host name or IP and port number, e.g., +python3 -m birdnet_analyzer.server --host localhost --port 8080+.
+
Here's a complete list of all command line arguments:
. Requests have to be encoded as +multipart/form-data+ with the following fields: +audio+ for raw audio data as byte code, and +meta+ for additional information on the audio file.
Take a look at our example client implementation in the +client.py+ script.
+
This script will read an audio file, generate metadata from command line arguments, and send it to the server.
The server will then analyze the audio file and send back the detection results, which will be stored as a JSON file.
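For illustration, the +audio+ and +meta+ form fields can be assembled with the Python standard library alone. This is only a sketch, not the actual +client.py+ implementation; the filename and metadata keys below are placeholders:

```python
import json
import uuid

def build_multipart(audio_bytes, meta):
    """Assemble a multipart/form-data body with an 'audio' and a 'meta' field."""
    boundary = uuid.uuid4().hex
    parts = [
        b"--" + boundary.encode(),
        b'Content-Disposition: form-data; name="meta"',
        b"",
        json.dumps(meta).encode(),  # metadata as JSON text
        b"--" + boundary.encode(),
        b'Content-Disposition: form-data; name="audio"; filename="soundscape.wav"',
        b"Content-Type: application/octet-stream",
        b"",
        audio_bytes,                # raw audio bytes
        b"--" + boundary.encode() + b"--",
        b"",
    ]
    body = b"\r\n".join(parts)
    content_type = f"multipart/form-data; boundary={boundary}"
    return body, content_type
```

The returned body could then be POSTed with, e.g., +urllib.request.Request(url, data=body, headers={"Content-Type": content_type})+.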
+
Here's a complete list of all command line arguments:
. Parse results from the server.
The server will send back a JSON response with the detection results.
The response also contains a +msg+ field, indicating +success+ or +error+.
Results consist of a sorted list of (species, score) tuples.
+
This is an example response:
NOTE: Let us know if you have any questions, suggestions, or feature requests. Also let us know when hosting an analysis service - we would love to give it a try.
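As a sketch of the parsing step, the following assumes a hypothetical response layout with the +msg+ field and sorted (species, score) results described above; the real server's JSON may be shaped differently:

```python
import json

# hypothetical response body -- field names follow the description above,
# but the actual layout of the real server's JSON may differ
response_text = '''
{
  "msg": "success",
  "results": [
    ["Poecile atricapillus_Black-capped Chickadee", 0.83],
    ["Cardinalis cardinalis_Northern Cardinal", 0.12]
  ]
}
'''

data = json.loads(response_text)
if data["msg"] == "success":
    for species, score in data["results"]:  # sorted, highest score first
        print(f"{species}: {score:.2f}")
```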
=== Usage (GUI)
We provide a very basic GUI which lets you launch the analysis through a web interface.
.Web based GUI
image::https://tuc.cloud/index.php/s/QyBczrWXCrMoaRC/download/analyzer_gui.png[GUI screenshot]
. You need to install two additional packages in order to use the GUI: +pip3 install pywebview gradio+.
. Launch the GUI with +python3 -m birdnet_analyzer.gui+.
. Set all folders and parameters; after that, click 'Analyze'.
== Training
You can train your own custom classifier on top of BirdNET. This is useful if you want to detect species that are not included in the default species list. You can also use this to train a classifier for a specific location or season. All you need is a dataset of labeled audio files, organized in folders by species (we use folder names as labels). This also works for non-bird species, as long as you have a dataset of labeled audio files. Audio files will be resampled to 48 kHz and converted into 3-second segments (we will use the center 3-second segment if the file is longer, we will pad with random noise if the file is shorter). We recommend using at least 100 audio files per species (although training also works with less data). You can download a sample training data set https://drive.google.com/file/d/16hgka5aJ4U69ane9RQn_quVmgjVY2AY5[here].
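The 3-second segment logic described above could look roughly like this. This is a sketch only; the actual preprocessing code, the amplitude of the padding noise, and whether the padding is centered are assumptions:

```python
import random

SAMPLE_RATE = 48000            # training audio is resampled to 48 kHz
SEGMENT_LEN = 3 * SAMPLE_RATE  # 3-second segments

def make_training_segment(samples, rng=None):
    """Center-crop long clips to 3 s; pad short clips with random noise."""
    rng = rng or random.Random(0)
    if len(samples) >= SEGMENT_LEN:
        start = (len(samples) - SEGMENT_LEN) // 2  # take the center segment
        return samples[start:start + SEGMENT_LEN]
    pad = SEGMENT_LEN - len(samples)
    noise = [rng.uniform(-1e-4, 1e-4) for _ in range(pad)]
    # split the noise around the clip (centered padding is a guess)
    return noise[:pad // 2] + samples + noise[pad // 2:]
```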
. Species labels should be in the format +<scientific name>_<species common name>+ (e.g., +Poecile atricapillus_Black-capped Chickadee+), but other formats work as well.
. It can be helpful to include a non-event class.
If you name a folder 'Noise', 'Background', 'Other' or 'Silence', it will be treated as a non-event class.
. Run the training script with +python3 train.py --i <path to training data folder> --o <path to trained classifier model output>+.
+
Here is a list of all command line arguments:
+
The script saves the trained classifier model based on the best validation loss achieved during training. This ensures that the saved model is optimized for performance according to the chosen metric.
After training, you can use the custom trained classifier with the +--classifier+ argument of the +analyze.py+ script. If you want to use the custom classifier in Raven, make sure to set +--model_format raven+.
NOTE: Adjusting hyperparameters (e.g., number of hidden units, learning rate, etc.) can have a big impact on the performance of the classifier.
We recommend trying different hyperparameter settings. If you want to automate this process, you can use the +--autotune+ argument (in that case, make sure to install keras_tuner with +pip3 install keras-tuner+).
Example usage (when downloading and unzipping the sample training data set):
NOTE: Setting a custom classifier will also set the new labels file. Due to these custom labels, the location filter and locale will be disabled.
You can include negative samples for classes by prefixing the folder names with a '-' (e.g., +-Poecile atricapillus_Black-capped Chickadee+). Do this with samples that definitely do not contain the species. Negative samples will only be used for training and not for validation. Also keep in mind that negative samples will only be used when a corresponding folder with positive samples exists. Negative samples cannot be used for binary classification; instead, include these samples in the non-event folder.
To train with multi-label data, separate the class labels with commas in the folder names (e.g., +Poecile atricapillus_Black-capped Chickadee, Cardinalis cardinalis_Northern Cardinal+). This can also be combined with negative samples as described above. The validation split will be performed per combination of classes, so you might want to ensure sufficient data for each combination. When using multi-label data, the upsampling mode will be limited to 'repeat'.
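Putting the folder-name conventions together, a helper like the following (a hypothetical illustration, not part of the actual training script) shows how a folder name could be interpreted:

```python
NON_EVENT_NAMES = {"noise", "background", "other", "silence"}

def parse_folder_name(folder_name):
    """Interpret a training folder name according to the conventions above.

    Returns (labels, is_negative): a leading '-' marks negative samples,
    commas separate multiple class labels, and non-event folders yield [].
    """
    is_negative = folder_name.startswith("-")
    name = folder_name[1:] if is_negative else folder_name
    if name.lower() in NON_EVENT_NAMES:
        return [], is_negative
    labels = [label.strip() for label in name.split(",")]
    return labels, is_negative
```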
== Segment review
Please read the excellent paper from Connor M. Wood and Stefan Kahl: https://scholar.google.com/citations?view_op=view_citation&hl=en&user=Uwta4wYAAAAJ&sortby=pubdate&citation_for_view=Uwta4wYAAAAJ:j3f4tGmQtD8C[Guidelines for appropriate use of BirdNET scores and other detector outputs].
The Review tab in the GUI is an implementation of the workflow described in the paper. It allows you to review the segments that were detected by BirdNET and to label them manually. This can help you choose an appropriate threshold for your specific use case.
General workflow:
. Use the Segments tab in the GUI or the +segments.py+ script to extract short audio segments for species detections.
. Open the Review tab in the GUI and select the parent directory containing the directories for all the species you want to review.
. Review the segments and manually check "positive" if the segment contains the target species or "negative" if it does not.
For each selected sample, a logistic regression curve is fitted and the threshold is calculated. The threshold is the point where the logistic regression curve crosses the 0.5 line.
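The threshold computation can be sketched with a tiny 1-D logistic regression fitted by plain gradient descent. This is an illustration of the idea, not the GUI's actual implementation, and the reviewed scores and labels below are made up:

```python
import math

def fit_threshold(scores, labels, lr=0.5, steps=20000):
    """Fit a 1-D logistic regression to reviewed (score, label) pairs and
    return the score at which the fitted curve crosses 0.5."""
    w, b = 0.0, 0.0
    n = len(scores)
    for _ in range(steps):
        grad_w = grad_b = 0.0
        for x, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            grad_w += (p - y) * x / n
            grad_b += (p - y) / n
        w -= lr * grad_w
        b -= lr * grad_b
    # sigmoid(w*x + b) == 0.5 exactly when w*x + b == 0
    return -b / w

# reviewed segments: BirdNET confidence scores with manual 0/1 labels
scores = [0.10, 0.20, 0.30, 0.40, 0.60, 0.70, 0.80, 0.90]
labels = [0, 0, 0, 0, 1, 1, 1, 1]
threshold = fit_threshold(scores, labels)
```

For this symmetric toy data the fitted threshold should land near 0.5; with real reviewed segments it shifts toward wherever the positives and negatives actually separate.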
== GUI Language
The default language of the GUI is English, but you can change it to German, French, Chinese or Portuguese in the Settings tab of the GUI. If you want to contribute a translation to another language, use the files inside the +lang+ folder as a template. You can then send us the translated files or create a pull request.

To check your translation, place your file inside the +lang+ folder and start the GUI; your language should now be available in the Settings tab. After selecting your language, restart the GUI to apply the changes.
We thank our collaborators for contributing translations:
Chinese: Sunny Tseng (https://github.com/SunnyTseng[@Sunny Tseng])
French: https://github.com/FranciumSoftware[@FranciumSoftware]
Portuguese: Larissa Sugai (https://github.com/LSMSugai[@LSMSugai])
Russian: Александр Цветков (cau@yandex.ru, radio call sign: R1BAF)
== Funding
This project is supported by Jake Holshuh (Cornell class of '69) and The Arthur Vining Davis Foundations. Our work in the K. Lisa Yang Center for Conservation Bioacoustics is made possible by the generosity of K. Lisa Yang to advance innovative conservation technologies to inspire and inform the conservation of wildlife and habitats.
The German Federal Ministry of Education and Research is funding the development of BirdNET through the project "BirdNET+" (FKZ 01|S22072). Additionally, the German Federal Ministry of Environment, Nature Conservation and Nuclear Safety is funding the development of BirdNET through the project "DeepBirdDetect" (FKZ 67KI31040E).
== Partners
BirdNET is a joint effort of partners from academia and industry. Without these partnerships, this project would not have been possible. Thank you!
.Our partners image::https://tuc.cloud/index.php/s/KSdWfX5CnSRpRgQ/download/box_logos.png[Logos of all partners]