CPJKU / partitura

A python package for handling modern staff notation of music
https://partitura.readthedocs.io
Apache License 2.0

Tempo curves example from doc broken #378

Open leleogere opened 1 month ago

leleogere commented 1 month ago

Hello,

I tried the example to generate tempo curves of two performances provided at https://partitura.readthedocs.io/en/latest/Tutorial/notebook.html#Comparing-tempo-curves

However, it does not seem to work anymore. First, the get_time_maps_from_alignment function appears to have been moved from pt.utils.music to pt.musicanalysis.performance_codec. Even after updating the import path, I get an indexing error:

Traceback (most recent call last):
  File "/home/user/.config/JetBrains/PyCharm2024.2/scratches/scratch_12.py", line 40, in <module>
    _, stime_to_ptime_map = pt.musicanalysis.performance_codec.get_time_maps_from_alignment(ppart, part, alignment)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/miniconda3/envs/gramscore/lib/python3.12/site-packages/partitura/musicanalysis/performance_codec.py", line 740, in get_time_maps_from_alignment
    score_onsets = score_note_array[match_idx[:, 0]]["onset_beat"]
                                    ~~~~~~~~~^^^^^^
IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
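(For context: this IndexError is what NumPy raises when 2-D indexing like match_idx[:, 0] is applied to an empty 1-D array, which happens when no note IDs match between the score and the alignment. The variable name match_idx comes from the traceback; the snippet below is a plain NumPy illustration of the failure mode, not partitura's actual code path.)

```python
import numpy as np

# If no score/alignment note IDs correspond, the list of matched index
# pairs is empty, and np.array([]) is 1-dimensional
match_idx = np.array([])

try:
    match_idx[:, 0]  # 2-D indexing on a 1-D array
except IndexError as e:
    print(e)  # too many indices for array: array is 1-dimensional, but 2 were indexed
```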

Here is a sample to reproduce the issue:

from pathlib import Path

import numpy as np
import partitura as pt

DIRECTORY = Path("/home/user/nasap-dataset/Bach/Prelude/bwv_893")
SCORE_PATH = DIRECTORY / "xml_score.musicxml"

score = pt.load_musicxml(SCORE_PATH)
part = score.parts[0]

snote_array = part.note_array()
print(snote_array)
print(snote_array.dtype)
print(snote_array[0])

# get all match files
matchfiles = list(DIRECTORY.glob("*.match"))
matchfiles.sort()

# Score time from the first to the last onset
score_time = np.linspace(
    snote_array['onset_beat'].min(),
    snote_array['onset_beat'].max(),
    100
)
# Include the last offset
score_time_ending = np.r_[
    score_time,
    (snote_array['onset_beat'] + snote_array['duration_beat']).max() # last offset
]

tempo_curves = np.zeros((len(matchfiles), len(score_time)))
for i, matchfile in enumerate(matchfiles):
    # load alignment
    performance, alignment = pt.load_match(matchfile)
    ppart = performance[0]
    # Get score time to performance time map
    _, stime_to_ptime_map = pt.musicanalysis.performance_codec.get_time_maps_from_alignment(ppart, part, alignment)
    # Compute naïve tempo curve
    performance_time = stime_to_ptime_map(score_time_ending)
    tempo_curves[i, :] = 60 * np.diff(score_time_ending) / np.diff(performance_time)
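(Aside on the last line: it converts score-beat intervals and performance-second intervals into a local tempo in beats per minute. A tiny numeric sanity check with made-up values:)

```python
import numpy as np

# If every 2 beats of score time take 1 second of performance time,
# the local tempo should come out as 120 BPM everywhere
score_time = np.array([0.0, 2.0, 4.0])
performance_time = np.array([0.0, 1.0, 2.0])

tempo = 60 * np.diff(score_time) / np.diff(performance_time)
print(tempo)  # [120. 120.]
```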

Python: 3.12, Partitura: 1.5.0

sildater commented 1 month ago

Hello! Thank you for raising this issue! I'll fix the import in the tutorial.

As for the second error: you get it because the note IDs in the score note array and in the alignment do not correspond. If you print them, you will see that the alignment contains note IDs ending in "-1", where the suffix indicates the repeat number. For the (n)ASAP match files, all score note IDs carry such a suffix: the original IDs come from the MusicXML scores, but the performances (and the corresponding match files) may contain repeats, jumps, etc.

The fix is very short: just unfold the (n)ASAP scores to their maximal length, like so: score_part = pt.score.unfold_part_maximal(score_part). Do this before you extract the note array, and it should work. In some cases you also need to merge the parts first, though not in your example: score_part = pt.score.unfold_part_maximal(pt.score.merge_parts(score.parts)). I hope that helps!

leleogere commented 1 month ago

Works like a charm! Thank you!

leleogere commented 1 month ago

I just realized that I missed issue #348, which already reports this problem. Even better: you already fixed it in #368, it is just not merged yet. Sorry for the duplicate report!

sildater commented 1 month ago

good catch! sorry for the slow merge/release process!