This repository contains Jupyter notebooks developed by the SoundCoop project team for the passive acoustic community.
The SoundCoop Project was a three-year effort funded by the NOAA Integrated Ocean Observing System, the Bureau of Ocean Energy Management, U.S. Navy Living Marine Resources, and the Office of Naval Research. The goal of the project was to develop technology in collaboration with the passive acoustic monitoring (PAM) community to enable scalable processing of comparable sound level metrics and to provide open access to centralized data for science and management applications.
U.S. and international scientists contributed PAM data spanning 12 separate long-term monitoring projects. Datasets from ten of these projects were used to calculate a specific sound level metric, hybrid millidecade (HMD) spectra, across a diversity of labs and instruments.
The collaborative effort of the SoundCoop led to several advances in data management, processing, dissemination, visualization, and knowledge sharing.
Jupyter notebooks were created to demonstrate how to:

- access data from different open access cloud buckets (Step 0)
- process raw audio data collected from two different recording instruments into HMD spectra using PBP and output the result in the SoundCoop netCDF standard (Step 1)
- read, visualize, and analyze HMD netCDFs using PyPAM (Step 2)
- integrate and visualize the HMD netCDFs with environmental data, as in the visualizations available in the SoundCoop portal (Step 3)
In 0_download_data/
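The notebooks in this folder show how to pull example data from the open access buckets. As a minimal sketch of the anonymous access pattern they rely on (the bucket name, prefix, and key below are placeholders, not the actual SoundCoop locations):

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Public open-data buckets accept unsigned (anonymous) requests,
# so no AWS credentials are needed.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# List a few objects under a prefix (placeholder bucket and prefix).
resp = s3.list_objects_v2(
    Bucket="example-open-data-bucket", Prefix="audio/2021/", MaxKeys=5
)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Download one recording locally (placeholder key).
s3.download_file("example-open-data-bucket", "audio/2021/example.wav", "example.wav")
```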
In 1_process_to_HMD_pbp/
In MARS
gen_HMD_MARS_icListen.ipynb: Access data recorded at the Monterey Bay Aquarium Research Institute (MBARI) Monterey Accelerated Research System (MARS) undersea cabled observatory from the Amazon Web Services (AWS) Registry of Open Data, read in processing metadata, create calibrated one-minute HMD spectra for one day, and output the results as a netCDF.
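PBP performs the hybrid millidecade banding and calibration; purely to illustrate the kind of spectral computation it builds on, here is a rough sketch of a one-minute power spectral density using scipy rather than PBP's own API (the file name and sample rate are assumptions for illustration):

```python
import soundfile as sf
from scipy.signal import welch

# Read one minute of single-channel audio (placeholder file;
# the 256 kHz sample rate is an assumption for illustration).
fs_assumed = 256_000
audio, fs = sf.read("example.wav", frames=60 * fs_assumed)

# Welch PSD with one-second segments. PBP's HMD product aggregates
# a PSD like this into hybrid millidecade frequency bands and applies
# the instrument calibration before writing the netCDF.
freqs, psd = welch(audio, fs=fs, nperseg=fs, noverlap=0)
print(freqs.shape, psd.shape)
```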
In MB05
gen_HMD_MB05_SoundTrap.ipynb: Access data recorded in the Monterey Bay National Marine Sanctuary from the Amazon Web Services (AWS) Registry of Open Data, read in processing metadata, create calibrated one-minute HMD spectra for one day, and output the results as a netCDF.
In NRS11
gen_HMD_NRS11_Haruphone.ipynb: Access data recorded by the NOAA-National Park Service Ocean Noise Reference Station Network (NRS) from the NCEI Passive Acoustic Data Archive Google Cloud Platform (GCP) bucket available through the NOAA NODD Program, read in processing metadata, create calibrated one-minute HMD spectra for one day, and output the results as a netCDF.
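The GCP access pattern is similar. A minimal sketch using the google-cloud-storage anonymous client (bucket and object names are placeholders, not the actual NODD paths):

```python
from google.cloud import storage

# The NODD buckets are public, so an anonymous client is sufficient.
client = storage.Client.create_anonymous_client()
bucket = client.bucket("example-nodd-bucket")  # placeholder name

# List a few recordings under a prefix (placeholder path).
for blob in client.list_blobs(bucket, prefix="nrs11/audio/", max_results=5):
    print(blob.name, blob.size)

# Download one recording locally (placeholder object name).
bucket.blob("nrs11/audio/example.flac").download_to_filename("example.flac")
```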
Example summary plot of HMD from NRS11 (click on the image for a larger view):
In 2_analysis_of_HMD_pypam/
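The notebooks in this folder read, visualize, and analyze the HMD netCDFs with PyPAM. As a minimal stand-in sketch using plain xarray rather than PyPAM's helpers (the file name, variable name, and dimension names below are placeholders):

```python
import matplotlib.pyplot as plt
import xarray as xr

# Open one day of HMD spectra in the SoundCoop netCDF standard
# (placeholder file name).
ds = xr.open_dataset("NRS11_HMD_example.nc")
print(ds)  # inspect the time and frequency coordinates and data variables

# Long-term-spectrogram-style view: time on x, frequency on y.
# "psd", "time", and "frequency" are hypothetical names for illustration.
hmd = ds["psd"]
hmd.plot(x="time", y="frequency", cmap="viridis")
plt.yscale("log")
plt.show()
```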
In 3_HMD_environmental_data/
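The notebooks in this folder align the one-minute HMD spectra with environmental time series before visualizing them together. A rough sketch of the alignment step, assuming a hypothetical hourly wind-speed CSV and the same placeholder variable names as above:

```python
import pandas as pd
import xarray as xr

ds = xr.open_dataset("NRS11_HMD_example.nc")  # placeholder file name

# Hypothetical hourly wind-speed series with a "time" column.
wind = pd.read_csv("wind_speed.csv", parse_dates=["time"], index_col="time")
wind_da = wind["wind_speed_m_s"].to_xarray()  # DataArray indexed by time

# Interpolate the environmental series onto the HMD time axis so the
# two share a common index for plotting or correlation.
wind_on_hmd = wind_da.interp(time=ds["time"])

# Example: correlate wind speed with the band level nearest 500 Hz
# ("psd" and "frequency" are hypothetical names, as above).
band = ds["psd"].sel(frequency=500, method="nearest")
print(float(xr.corr(band, wind_on_hmd)))
```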
The notebooks were built to run in free environments such as MyBinder, Google Colab, and JupyterLab, so they aim for low RAM and CPU requirements. The tradeoff is a reduced amount of data to view and process. These notebooks are for demonstration purposes and can serve as the foundation for the large-scale processing that is typical of passive acoustic data analyses.
Binder is an online service for building and sharing reproducible, interactive computational environments from public GitHub repositories. It uses Kubernetes and JupyterHub for the deployment process.
MyBinder builds the computational environment from this repository's environment.yml. To run this repository's notebooks on MyBinder, go to https://mybinder.org, enter this repository's URL (https://github.com/ioos/soundcoop), and launch.
Colab is an online hosting service for running Jupyter notebooks with free computing resources.
To run any of this repository's notebooks (separately) on Google Colab, go to https://colab.research.google.com, choose File > Open notebook, select the GitHub tab, and enter this repository's URL (https://github.com/ioos/soundcoop).
JupyterLab is a free, browser-based interactive development environment for notebooks.
To clone and run the notebooks of this repository in JupyterLab, go to Git > Clone a Repository and enter this repository's URL: https://github.com/ioos/soundcoop (or clone via the terminal). The repository will be downloaded into your workspace, and you are ready to go. Make sure you first install the dependencies, either by running the install cells of each notebook or by installing all required dependencies up front. The second option can be done with
poetry install
or
pip install -r requirements.txt