
DCASE 2022: Sound Event Localization and Detection Evaluated in Real Spatial Sound Scenes

Please visit the official webpage of the DCASE 2022 Challenge for details missing in this repo.

As the baseline method for the SELD task, we use the SELDnet method studied in the following papers, with the Multiple Activity-Coupled Cartesian Direction of Arrival (Multi-ACCDOA) representation as the output format. Specifically for the microphone version of the dataset, we have added support for SALSA-lite features. If you are using this baseline method or the datasets in any format, please consider citing the following papers. If you want to read more about generic approaches to SELD, check here.

  1. Sharath Adavanne, Archontis Politis, Joonas Nikunen and Tuomas Virtanen, "Sound event localization and detection of overlapping sources using convolutional recurrent neural network" in IEEE Journal of Selected Topics in Signal Processing (JSTSP 2018)
  2. Kazuki Shimada, Yuichiro Koyama, Shusuke Takahashi, Naoya Takahashi, Emiru Tsunoo, and Yuki Mitsufuji, "Multi-ACCDOA: localizing and detecting overlapping sounds from the same class with auxiliary duplicating permutation invariant training" in the International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2022)
  3. Thi Ngoc Tho Nguyen, Douglas L. Jones, Karn N. Watcharasupat, Huy Phan, and Woon-Seng Gan, "SALSA-Lite: A fast and effective feature for polyphonic sound event localization and detection with microphone arrays" in the International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2022)

BASELINE METHOD

In comparison to the SELDnet studied in [1], we have changed the output format to Multi-ACCDOA [2] to support detection of multiple overlapping instances of the same class. Additionally, we use SALSA-lite [3] features for the microphone version of the dataset, to overcome the poor performance of GCC features in the presence of multiple overlapping sound events.

The final SELDnet architecture is shown below. The input is multichannel audio, from which different acoustic features are extracted depending on the input format of the audio. Based on the chosen dataset (FOA or MIC), the baseline method takes a sequence of consecutive feature frames and predicts all the active sound event classes for each input frame along with their respective spatial locations, producing the temporal activity and DOA trajectory for each sound event class. In particular, a convolutional recurrent neural network (CRNN) is used to map the frame sequence to a Multi-ACCDOA sequence output, which encodes both sound event detection (SED) and direction of arrival (DOA) estimates in continuous 3D space as a multi-output regression task. Each sound event class in the Multi-ACCDOA output is represented by three regressors that estimate the Cartesian coordinates x, y, and z of the DOA around the microphone. If the length of the vector represented by the x, y, and z coordinates is greater than 0.5, the sound event is considered active, and the corresponding x, y, and z values are taken as its predicted DOA.
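To make this decoding step concrete, here is a minimal sketch (not the repository's actual code; the function name `decode_accdoa_frame` and the array shapes are assumptions) of how one frame of ACCDOA-style output per class could be turned into an activity decision and a unit DOA vector, using the 0.5 threshold on the vector length:

```python
import numpy as np

def decode_accdoa_frame(accdoa, threshold=0.5):
    """Hypothetical decoder for one frame of ACCDOA-style output.

    accdoa: array of shape (num_classes, 3) with the predicted (x, y, z)
            vector for each sound event class.
    Returns a boolean activity mask and unit-length DOA vectors.
    """
    accdoa = np.asarray(accdoa, dtype=float)
    # The vector length per class acts as the detection confidence.
    lengths = np.linalg.norm(accdoa, axis=-1)
    active = lengths > threshold
    # Normalize the active vectors to unit length to obtain the DOA.
    doa = np.zeros_like(accdoa)
    doa[active] = accdoa[active] / lengths[active][:, None]
    return active, doa

# Example: class 0 is predicted active roughly towards +x, class 1 inactive.
active, doa = decode_accdoa_frame([[0.9, 0.1, 0.0], [0.1, 0.1, 0.0]])
print(active)   # [ True False]
print(doa[0])   # unit vector close to the +x axis
```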

The figure below visualizes the SELDnet input and outputs for one of the recordings in the dataset. The horizontal axis of all sub-plots for a given dataset represents the same time frames; the vertical axis of the spectrogram sub-plot represents the frequency bins, the vertical axis of the SED reference and prediction sub-plots represents the unique sound event class identifier, and that of the DOA reference and prediction sub-plots represents the distances along the Cartesian axes. The figure represents each sound event class and its associated DOA outputs with a unique color. A similar plot can be generated for your own results using the provided script.
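For orientation, below is a minimal matplotlib sketch of such a stacked plot; it is not the provided visualization script, and the placeholder arrays (`spectrogram`, `sed_ref`, `sed_pred`) merely stand in for whatever your own pipeline produces:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data standing in for one recording; shapes are illustrative only.
num_frames, num_classes, num_bins = 600, 13, 64
spectrogram = np.random.rand(num_bins, num_frames)
sed_ref = np.random.randint(0, 2, (num_frames, num_classes))
sed_pred = np.random.randint(0, 2, (num_frames, num_classes))

fig, axes = plt.subplots(3, 1, sharex=True, figsize=(10, 8))
axes[0].imshow(spectrogram, aspect='auto', origin='lower')
axes[0].set_title('Input spectrogram')
axes[0].set_ylabel('Frequency bin')

# One marker series per class, so each class gets its own color.
for cls in range(num_classes):
    ref_frames = np.where(sed_ref[:, cls])[0]
    pred_frames = np.where(sed_pred[:, cls])[0]
    axes[1].scatter(ref_frames, np.full_like(ref_frames, cls), s=2)
    axes[2].scatter(pred_frames, np.full_like(pred_frames, cls), s=2)
axes[1].set_title('SED reference')
axes[2].set_title('SED prediction')
for ax in axes[1:]:
    ax.set_ylabel('Class index')
axes[2].set_xlabel('Time frame')
plt.tight_layout()
plt.show()
```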

DATASETS

Similar to previous editions of the challenge, participants can choose either or both versions of the dataset.

These datasets contain recordings of identical scenes, with the Ambisonic version providing four-channel First-Order Ambisonic (FOA) recordings and the Microphone Array version providing four-channel directional microphone recordings from a tetrahedral array configuration. Both datasets consist of a development and an evaluation set. All participants are expected to use the fixed splits provided in the baseline method for reporting the development scores. The evaluation set will be released at a later point.

More details on the recording procedure and dataset can be read on the DCASE 2022 task webpage.

The development dataset can be downloaded from the following link: Sony-TAu Realistic Spatial Soundscapes 2022 (STARSS22)

Getting Started

This repository consists of multiple Python scripts that together form the pipeline used to train SELDnet.

We also provide supporting scripts that help analyse the results.

Prerequisites

The provided codebase has been tested on Python 3.8.11 and torch 1.10.0.

Training the SELDnet

In order to quickly train SELDnet, follow the steps below.

python3 batch_feature_extraction.py
python3 batch_feature_extraction.py 3
python3 batch_feature_extraction.py 7
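The numeric arguments passed to the script appear to be task identifiers that select a predefined parameter set (see `parameters.py`). The sketch below shows one way such a task-id lookup is commonly organized; the parameter names, default values, and the meaning of ids 3 and 7 are assumptions for illustration, not the repository's actual configuration:

```python
# Hypothetical sketch of a task-id based parameter lookup, in the spirit of
# a parameters.py-style configuration file; names and values are assumptions.
def get_params(task_id='1'):
    # Default configuration shared by all tasks.
    params = {
        'dataset': 'foa',          # 'foa' or 'mic'
        'use_salsalite': False,    # SALSA-lite features for the mic dataset
        'multi_accdoa': True,      # Multi-ACCDOA output format
    }

    if task_id == '1':
        pass                       # keep the defaults
    elif task_id == '3':
        params['dataset'] = 'foa'  # e.g. FOA features
    elif task_id == '7':
        params['dataset'] = 'mic'  # e.g. microphone array + SALSA-lite
        params['use_salsalite'] = True
    else:
        raise ValueError('Unknown task id: {}'.format(task_id))
    return params


if __name__ == '__main__':
    import sys
    task_id = sys.argv[1] if len(sys.argv) > 1 else '1'
    print(get_params(task_id))
```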

Results on development dataset

As the SELD evaluation metrics, we employ the joint localization and detection metrics proposed in [1], with extensions from [2] to support multi-instance scoring of the same class.

  1. Annamaria Mesaros, Sharath Adavanne, Archontis Politis, Toni Heittola, and Tuomas Virtanen, "Joint Measurement of Localization and Detection of Sound Events", IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA 2019)

  2. Archontis Politis, Annamaria Mesaros, Sharath Adavanne, Toni Heittola, and Tuomas Virtanen, "Overview and Evaluation of Sound Event Localization and Detection in DCASE 2019", IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP 2020)

There are in total four metrics that we employ in this challenge. The first two metrics are more focused on the detection part, also referred to as location-aware detection, and correspond to the error rate (ER20°) and F-score (F20°) in one-second non-overlapping segments. We consider a prediction to be correct if the predicted and reference class are the same, and the angular distance between them is below 20°. The next two metrics are more focused on the localization part, also referred to as class-aware localization, and correspond to the localization error (LECD) in degrees and the localization recall (LRCD) in one-second non-overlapping segments, where the subscript refers to classification-dependent. Unlike the location-aware detection, we do not use any distance threshold, but estimate the distance between the correct predictions and their references.

The key difference in metrics from previous editions of the challenge is that this year we use the macro mode of computation: we first compute the above four metrics for each sound class, and then average them to get the final system performance.
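For intuition, the angular distance behind both the 20° threshold and LECD is simply the angle between the predicted and reference DOA vectors given in Cartesian coordinates; a small sketch (not the official evaluation code) is shown below:

```python
import numpy as np

def angular_distance_deg(doa_pred, doa_ref):
    """Angle in degrees between two DOA vectors in Cartesian coordinates."""
    doa_pred = np.asarray(doa_pred, dtype=float)
    doa_ref = np.asarray(doa_ref, dtype=float)
    cos_angle = np.dot(doa_pred, doa_ref) / (
        np.linalg.norm(doa_pred) * np.linalg.norm(doa_ref))
    # Clip to avoid NaNs from tiny numerical overshoots outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# A prediction 15 degrees away from the reference counts as correct for the
# location-aware detection metrics (20-degree threshold), and its distance
# still contributes to the class-aware localization error LECD.
ref = [1.0, 0.0, 0.0]
pred = [np.cos(np.radians(15)), np.sin(np.radians(15)), 0.0]
print(angular_distance_deg(pred, ref))          # ~15.0
print(angular_distance_deg(pred, ref) <= 20.0)  # True
```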

The evaluation metric scores for the test split of the development dataset are given below.

| Dataset | ER20° | F20° | LECD | LRCD |
| ---- | ---- | ---- | ---- | ---- |
| Ambisonic (FOA + Multi-ACCDOA) | 0.71 | 21.0 % | 29.3° | 46.0 % |
| Microphone Array (MIC-GCC + Multi-ACCDOA) | 0.71 | 18.0 % | 32.2° | 47.0 % |

Note: The reported baseline system performance is not exactly reproducible due to varying setups. However, you should be able to obtain very similar results.

Submission

For more information on the submission file formats, check the challenge website.

License

This repo and its contents are released under the MIT License.