battermann opened 5 years ago
It seems like one possibility is to use all the voices to generate another datatype that's like a `Voice`, but that just contains a list of the changes to the accidentals that are in force at any given point in a staff. We could call those, for example, `BarAccidentals`, which might include a `Duration`.

At the beginning of a bar, there would be a `BarAccidentals` containing the notes in the key signature:

`C, D, E, F#, G, A, B`, `Duration Zero`

Then after the first note happens (the C# in voice 1), there would be a new `BarAccidentals` with these notes:

`C#, D, E, F#, G, A, B`, `Duration Quarter`

We'd make a new `BarAccidentals` every time an accidental is changed. Once the `BarAccidentals` data has been generated, it is trivial for the ABC converter to check whether a `Note` in a `Voice` has the matching accidental in the `BarAccidentals` when it's generating output.
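A minimal sketch of that idea in Python (all names here — `Note`, `BarAccidentals`, `bar_accidentals`, the pitch and key-signature representation — are hypothetical illustrations, not the project's actual API): merge the notes from all voices into one chronological stream, and emit a new accidental state whenever a note changes the accidental in force for its letter.

```python
from dataclasses import dataclass
from fractions import Fraction

# Hypothetical note representation; accidental is "#", "b", or "" for natural.
@dataclass(frozen=True)
class Note:
    onset: Fraction       # position within the bar, in whole-note units
    duration: Fraction    # length of the note
    letter: str           # "A".."G"
    accidental: str       # "#", "b", or ""

@dataclass
class BarAccidentals:
    onset: Fraction       # the "Duration" measured from the start of the bar
    accidentals: dict     # letter -> accidental currently in force

def bar_accidentals(voices, key_signature):
    """Build the accidental states for one bar from all voices.

    `key_signature` maps letters to accidentals, e.g. {"F": "#"} for G major.
    """
    state = {letter: key_signature.get(letter, "") for letter in "CDEFGAB"}
    states = [BarAccidentals(Fraction(0), dict(state))]
    # Walk every note from every voice in chronological order.
    for note in sorted((n for v in voices for n in v), key=lambda n: n.onset):
        if state[note.letter] != note.accidental:
            state[note.letter] = note.accidental
            # The new state takes effect after the note, hence onset + duration,
            # matching the "Duration Quarter" in the example above.
            states.append(BarAccidentals(note.onset + note.duration, dict(state)))
    return states
```

With a G-major key signature and a single C# quarter note at the start of the bar, this yields the two states described above: one at `Duration Zero` with the key signature, and one at `Duration Quarter` with C raised to C#.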
I only added a test that shows what is to be implemented.
The problem is not trivial at all, and I currently don't have a strategy for an elegant solution.
What makes it difficult is that a staff can contain multiple voices, so it is not possible (or at least not easy) to thread a state through while rendering each voice, because the current state can affect notes from previous voices that have already been rendered.
Here is an example showing the desired result:
Currently the result would be this:
Here are the ABC codes:
Expected result:
Actual result:
You can try it in this ABC editor.
There is also the question of whether this should be done during the ABC rendering at all. Or does it make more sense to have a preprocessing step before that, where notes are assigned an additional attribute that indicates whether the accidental is shown or not?
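If the preprocessing route is taken, it could look something like the following sketch (again with hypothetical names and data shapes, since the project's actual `Note`/`Voice` types aren't shown here): walk the merged notes once and mark each note with whether its accidental differs from the one currently in force for its letter.

```python
from fractions import Fraction

def mark_accidentals(voices, key_signature):
    """Return {(voice_index, note_index): bool}: should the note print its accidental?

    Hypothetical data shapes: a note is an (onset, letter, accidental) tuple,
    and key_signature maps letters to "#", "b", or "".
    """
    state = {letter: key_signature.get(letter, "") for letter in "CDEFGAB"}
    # Merge all voices into one chronological stream, remembering where each
    # note came from so the result can be mapped back onto the voices.
    merged = sorted(
        ((note[0], vi, ni, note)
         for vi, voice in enumerate(voices)
         for ni, note in enumerate(voice)),
        key=lambda item: (item[0], item[1]),
    )
    shown = {}
    for _onset, vi, ni, (_, letter, accidental) in merged:
        # An accidental is printed only if it differs from the one in force.
        shown[(vi, ni)] = accidental != state[letter]
        state[letter] = accidental  # it then stays in force for the rest of the bar
    return shown
```

Note that this sidesteps the hard case described above — simultaneous notes on the same letter in different voices — by simply breaking onset ties in voice order; that ambiguity would still need a real policy.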
The last resort would be to constrain a staff to one voice???
Or to go for a completely different `Notation` model.

Depends on #46, so that is set as the target branch ATM to have a nicer diff.