Let's try to anticipate all of the problems that may arise from using multi-channel recordings. (post your suggestions below!)
Annotations may be extracted from a subset of the channels, so it is important to keep track of which channels were used to generate each annotation. Processing algorithms might take one to several channels as input, using various selection strategies: random channel, loudest channel, etc. Humans might use different stereo mappings (it is still not obvious to me how n > 2 channels should be mapped to FR/FL, or that there exists a single indisputable way to do it). In the most general case, where the n input channels may be linearly combined and weighted into m output channels, the mapping function has the form

y_j = sum_i a_{j,i} x_i,  for j = 1..m and i = 1..n,

which may be represented by an m × n matrix A = (a_{j,i}).
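As a quick sketch of the general linear case (the stereo-to-mono weights below are purely illustrative, not a proposed convention):

```python
import numpy as np

# n = 2 input channels, 3 samples of audio, shape (n, samples)
x = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])

# m x n mixing matrix: downmix stereo to mono (m = 1, n = 2)
M = np.array([[0.5, 0.5]])

y = M @ x  # output channels, shape (m, samples)
print(y)   # [[0.5 0.5 0.5]]
```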
If we assume that channels can only be turned on/off (a_{j,i} ∈ {0, 1}), this form reduces to

y_j = sum of x_i over i in S_j,

where S_j is the set of input channels feeding output channel j, which can be stored as a list of m lists (one list of input channel indices per output channel).
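A minimal sketch of the on/off case, assuming the list of m lists holds, for each output channel, the indices of the input channels summed into it (the mapping and shapes here are made up for illustration):

```python
import numpy as np

n = 4  # number of input channels
# m = 2 output channels: output 0 takes input 0, output 1 sums inputs 1 and 2
mapping = [[0], [1, 2]]

# expand the list of lists into an m x n binary (on/off) matrix
M = np.zeros((len(mapping), n))
for j, sources in enumerate(mapping):
    M[j, sources] = 1.0

x = np.arange(8.0).reshape(n, 2)  # dummy audio, shape (n, samples)
y = M @ x                         # shape (m, samples)
print(y)  # [[0. 1.]
          #  [6. 8.]]
```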
Is metadata/annotations.csv the right place to store this information? Or should it be stored in some parameters.csv file at the root of the annotation set?
Audio conversion should be able to perform any desired channel mapping.
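The channel-selection strategies mentioned above (random channel, loudest channel) could look something like this sketch; the function names and the RMS-based loudness criterion are my assumptions, not an agreed design:

```python
import numpy as np

def random_channel(x, rng=None):
    """Pick one of the n channels at random; x has shape (n, samples)."""
    if rng is None:
        rng = np.random.default_rng()
    return x[rng.integers(x.shape[0])]

def loudest_channel(x):
    """Pick the channel with the highest RMS energy."""
    rms = np.sqrt((x ** 2).mean(axis=1))
    return x[rms.argmax()]

x = np.array([[0.1, -0.1, 0.1],
              [0.9, -0.9, 0.9]])
print(loudest_channel(x))  # [ 0.9 -0.9  0.9]
```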