
[tracking] Improvements to voice numbering #890

Open jacobtylerwalls opened 3 years ago

jacobtylerwalls commented 3 years ago

This is a great idea -- let me think about whether we should explicitly define voice.number separately from ID -- the question is whether it's universal across the score, within a PartStaff, or across parts. Having it span PartStaffs would help our friend trying to track the tenor line -- IF it were set that way in MusicXML. But what happens if the voice ends and then begins again?

Originally posted by @mscuthbert in https://github.com/cuthbertLab/music21/issues/886#issuecomment-791203503


My reply to that: the expected stream nesting is a voice inside a measure, so the numbers parsed from MusicXML or defaulted by makeVoices/makeNotation will be arbitrary rather than analytical, and they will vary from measure to measure. If a user wants analytical voice numbers that track a voice through an entire score (and that wouldn't necessarily export to MusicXML), that's a use case for a separate attribute, which none of the above methods would set -- or we could document in the user's guide how to use a Group to accomplish this.
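
As a rough sketch of the Group approach (the file name and the isTenorLine() analysis function are hypothetical; .groups and getElementsByGroup() are existing music21 APIs):

from music21 import converter, stream

score = converter.parse('chorale.musicxml')  # hypothetical file with multiple voices per staff

# Tag whichever voice the analysis decides is "the tenor" in each measure
for m in score.parts[0].getElementsByClass(stream.Measure):
    for v in m.voices:
        if isTenorLine(v):  # hypothetical analysis function
            v.groups.append('tenor')

# Later, pull the analytical "voice" back out across the whole part,
# regardless of what each Voice's .id happens to be
tenorVoices = score.parts[0].recurse().getElementsByGroup('tenor')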

It would be very easy to have makeVoices do this, once we decide whether the value should be an int or a str:

diff --git a/music21/stream/base.py b/music21/stream/base.py
index 83a0a9be0..90d383586 100644
--- a/music21/stream/base.py
+++ b/music21/stream/base.py
@@ -10457,8 +10457,11 @@ class Stream(core.StreamCoreMixin, base.Music21Object):
             # remove from source
             returnObj.remove(e)
         # remove any unused voices (possible if overlap group has sus)
+        voiceId: int = 0
         for v in voices:
             if v:  # skip empty voices
+                voiceId += 1
+                v.id = voiceId
                 if fillGaps:
                     returnObj.makeRests(fillGaps=True, inPlace=True)
                 returnObj.insert(0, v)
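
For illustration, a sketch of what the numbering could look like with a patch along those lines (contrived overlapping-note setup; the exact values depend on whether we pick int or str):

from music21 import note, stream

s = stream.Stream()
s.insert(0, note.Note('C4', quarterLength=4.0))
s.insert(0, note.Note('E4', quarterLength=2.0))  # overlaps the C, so makeVoices splits into voices

s.makeVoices(inPlace=True)
for v in s.voices:
    print(v.id)  # with the patch: sequential 1, 2 (or '1', '2'); today makeVoices leaves the default id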

However, there could be a usability issue if, say, someone wanted to walk a stream of measures to analyze and set an analytical voice number with a Group: if .id may be either an int or a str, there is some overhead -- a lot of try/except/cast just to get the numbers out. Also, we encourage users to overwrite .id with whatever they please in order to extract specific objects, in which case they would lose the numbering.
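
To make that overhead concrete, getting "the number" out of .id today looks something like this hypothetical helper:

def voiceNumber(v):
    '''Hypothetical helper: best-effort extraction of a numeric voice number from
    Voice.id, which may be an int, a numeric str, a user-supplied label, or a
    default memory-address id.'''
    if isinstance(v.id, int):
        return v.id  # could be a real number (1, 2, ...) or a default memory-address id
    try:
        return int(v.id)
    except (TypeError, ValueError):
        return None  # e.g. the user set v.id = 'tenor'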

In fact, that leads me to say that there should be a new .number attribute for the arbitrary numbers created by makeVoices/MusicXML, and users can use .id, .editorial, a Group, or whatever else to set analytical voice numbers if they wish.

jacobtylerwalls commented 3 years ago

I saw a note in the XML writer that MusicXML 3.1 defines an "id" attribute for unique elements. We probably do need .number so that we don't prevent users from setting that attribute.

jacobtylerwalls commented 3 years ago

Status quo for how .id gets set:

- makeMeasures, when it creates voices: 0-indexed
- MusicXML parser: whatever the xml document says (starts at 1, all else being equal)
- makeVoices: doesn't set id
- existing Humdrum parser: uses Groups beginning "voice1", etc.
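
A quick way to check the MusicXML parser behavior described above (the file name is hypothetical; the ids just echo the document):

from music21 import converter

s = converter.parse('twoVoices.musicxml')  # hypothetical file with two voices in its first measure
for v in s.parts[0].measure(1).voices:
    print(repr(v.id))  # echoes the document's <voice> values, e.g. starting at 1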

jacobtylerwalls commented 3 years ago

There is some discussion about visual (no-overlap) voices vs. analytical voices here: https://github.com/w3c/mnx/issues/183

Michael Good uses the term "sequence", which I like, and mentions that may be the direction MNX goes.

In m21, .id is probably better kept as the "analytical" voice number, because existing functionality in voicesToParts(separateById=True) depends on how it's configured, and users can tag their objects this way to create associations. But m21 <--> I/O formats would perform better with a "sequence" number: we wouldn't want <voice>tenor</voice> written out; we would want to write out the sequence number, and we would want the sequence to stay updated.
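
A sketch of the kind of .id-based association that voicesToParts(separateById=True) relies on (the file name and labels are hypothetical):

from music21 import converter, stream

score = converter.parse('twoVoicesPerStaff.musicxml')  # hypothetical file

# Tag .id analytically so that matching ids are associated across measures
for m in score.parts[0].getElementsByClass(stream.Measure):
    for i, v in enumerate(m.voices):
        v.id = 'upper' if i == 0 else 'lower'  # hypothetical labeling scheme

split = score.voicesToParts(separateById=True)  # groups voices by id rather than by order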

jacobtylerwalls commented 3 years ago

The lack of standardized voice numbering caused the following malformed stream representation (leading to dropped data in Finale and a crash in MuseScore):

MIDI file (available on request). Stream representation at 1f0a5cd06e85739c057ba764eec2e2d4b9778644: notice the overfilled measure 89 (4.5 quarter lengths in 4/4).

{0.0 - 0.0} <music21.instrument.ElectricOrgan 'ch5 Rock Organ: Electric Organ'>
{0.0 - 0.0} <music21.clef.TrebleClef>
{0.0 - 0.0} <music21.key.Key of C major>
{0.0 - 0.0} <music21.meter.TimeSignature 4/4>
{0.0 - 4.5} <music21.stream.Measure 89 offset=0.0>
    {0.0 - 4.5} <music21.stream.Voice 0x108ad9610>
        {0.0 - 3.25} <music21.chord.Chord G5 B-4>
        {3.25 - 3.5} <music21.note.Rest rest>
        {3.5 - 4.5} <music21.note.Note G>
    {0.0 - 4.5} <music21.stream.Voice 0x108ad9fd0>
        {0.0 - 4.5} <music21.note.Note E>
{4.0 - 8.0} <music21.stream.Measure 90 offset=4.0>
    {0.0 - 4.0} <music21.stream.Voice 0x108ad9e20>
        {0.0 - 0.6667} <music21.note.Rest rest>
        {0.6667 - 0.9167} <music21.chord.Chord B-4 E5 G5>
        {0.9167 - 2.0} <music21.note.Rest rest>
        {2.0 - 2.3333} <music21.chord.Chord E5 B-4>
        {2.3333 - 2.5} <music21.note.Rest rest>
        {2.5 - 2.75} <music21.chord.Chord B-4 E5>
        {2.75 - 4.0} <music21.note.Rest rest>
    {0.0 - 4.0} <music21.stream.Voice 0x108ad9250>
        {0.0 - 2.0} <music21.note.Rest rest>
        {2.0 - 2.75} <music21.note.Note G>
        {2.75 - 4.0} <music21.note.Rest rest>

After the improvement in f8675338b0eae0900eb0e0e7a231782191df74a5 to call makeTies(): notice the "loose" notes in measure 90 that sit outside any voice.

{0.0 - 0.0} <music21.instrument.ElectricOrgan 'ch5 Rock Organ: Electric Organ'>
{0.0 - 0.0} <music21.clef.TrebleClef>
{0.0 - 0.0} <music21.key.Key of C major>
{0.0 - 0.0} <music21.meter.TimeSignature 4/4>
{0.0 - 4.0} <music21.stream.Measure 89 offset=0.0>
    {0.0 - 4.0} <music21.stream.Voice 0x108517640>
        {0.0 - 3.25} <music21.chord.Chord G5 B-4>
        {3.25 - 3.5} <music21.note.Rest rest>
        {3.5 - 4.0} <music21.note.Note G>
    {0.0 - 4.0} <music21.stream.Voice 0x108517700>
        {0.0 - 4.0} <music21.note.Note E>
{4.0 - 8.0} <music21.stream.Measure 90 offset=4.0>
    {0.0 - 4.0} <music21.stream.Voice 0x108517970>
        {0.0 - 0.6667} <music21.note.Rest rest>
        {0.6667 - 0.9167} <music21.chord.Chord B-4 E5 G5>
        {0.9167 - 2.0} <music21.note.Rest rest>
        {2.0 - 2.3333} <music21.chord.Chord E5 B-4>
        {2.3333 - 2.5} <music21.note.Rest rest>
        {2.5 - 2.75} <music21.chord.Chord B-4 E5>
        {2.75 - 4.0} <music21.note.Rest rest>
    {0.0 - 4.0} <music21.stream.Voice 0x108517790>
        {0.0 - 2.0} <music21.note.Rest rest>
        {2.0 - 2.75} <music21.note.Note G>
        {2.75 - 4.0} <music21.note.Rest rest>
    {0.0 - 0.5} <music21.note.Note G>
    {0.0 - 0.5} <music21.note.Note E>
    {2.75 - 4.0} <music21.note.Rest rest>

As mentioned above, the "loose" notes crash MuseScore. The problem goes away with #915, because with standard voice numbering instead of random ids (such as the default memory-address id shown below), makeTies() can bridge voices that match:

>>> inn.parts[-1].measure(89).voices[0].id
4434523712

jacobtylerwalls commented 3 years ago

(The trailing loose rest was already fixed during v7; it's the leading loose notes that we would want makeTies to handle, and it does handle them once the voices are numbered to match.)

jacobtylerwalls commented 2 years ago

Myke's right: "sequence" would be a poor name for an attribute in a music library with a lot of theory functions. Maybe VoiceSpanner is the only truly new thing we need in order to track cross-staff beams, and the rest amounts to consistency cleanups across all the parsers and processors.

jacobtylerwalls commented 2 years ago

Right, okay, it's coming back to me how this is relevant to more than just cross-staff beams. A second use case for VoiceSpanner could be to define which voices should be visited, and in what order, by other library or user functions. Right now recurse() visits measures first and then the voices of each measure. This is why #1134 is still unsolved.

With some effort, a user can write a function to pre-walk the substreams, collect the voice IDs in the order they are encountered, and so on, and then keep that ordering in mind when walking the substreams again, but that's burdensome.
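
For example, a hypothetical pre-walk helper along those lines:

from music21 import stream

def voiceIdsInOrder(part):
    '''Hypothetical helper: collect voice ids in the order a measure-by-measure
    walk encounters them -- the bookkeeping users currently have to do themselves.'''
    seen = []
    for m in part.getElementsByClass(stream.Measure):
        for v in m.voices:
            if v.id not in seen:
                seen.append(v.id)
    return seen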

With VoiceSpanners (which can be ordered like any Music21Object), we could have a recurse-like function that visits all of the voices in one spanner, in order, then moves on to the next spanner; then gets all of the unspanned voices; then gets all of the loose notes.
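
A rough sketch of what that traversal could look like, assuming VoiceSpanner were implemented as a plain Spanner subclass (the class and function names here are hypothetical; only the Spanner and Stream APIs used already exist in music21):

from music21 import spanner, stream

class VoiceSpanner(spanner.Spanner):
    '''Hypothetical spanner whose ordered spanned elements are Voice objects.'''

def recurseByVoiceSpanner(s):
    '''Hypothetical traversal: yield voices spanner by spanner (in spanner order),
    then any voices not covered by a spanner, then any "loose" notes that sit
    directly in a measure outside every voice.'''
    spannedIds = set()
    for sp in s.recurse().getElementsByClass(VoiceSpanner):
        for v in sp.getSpannedElements():
            spannedIds.add(id(v))
            yield v
    for v in s.recurse().getElementsByClass(stream.Voice):
        if id(v) not in spannedIds:
            yield v
    for m in s.recurse().getElementsByClass(stream.Measure):
        # .notesAndRests is not recursive, so this yields only the loose notes
        # that are direct children of the measure, outside any voice
        yield from m.notesAndRests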