For Table 1 in the paper, we manually mark the alignment between extracted verb phrases in narrations and the ground-truth key-steps. We don't have code to do that automatically; we only manually mark the key-steps in narrations for a few videos.
We tried comparing the similarity between semantic embeddings of narrations and key-steps to localize the key-steps, which can give some meaningful alignments, but the alignment is not perfect and its quality is hard to quantify.
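A minimal sketch of this kind of embedding-similarity alignment (not the repository's code; the sentence-transformers model, the helper name `align_narrations_to_steps`, and the similarity threshold are all illustrative assumptions):

```python
# Illustrative sketch only: align narration phrases to key-steps by cosine
# similarity of sentence embeddings. Not the authors' pipeline.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed dependency

def align_narrations_to_steps(narration_phrases, key_steps, threshold=0.5):
    """For each narration phrase, return the best-matching key-step index,
    or None if the best cosine similarity falls below the threshold."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works
    n_emb = model.encode(narration_phrases, normalize_embeddings=True)
    s_emb = model.encode(key_steps, normalize_embeddings=True)
    sim = n_emb @ s_emb.T  # cosine similarity, since embeddings are normalized
    best = sim.argmax(axis=1)
    return [int(b) if sim[i, b] >= threshold else None
            for i, b in enumerate(best)]

# Example usage with made-up phrases:
# align_narrations_to_steps(["now season the steak", "let it rest"],
#                           ["season steak", "put steak in pan"])
```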
Makes sense. Just to confirm my understanding: do you also manually mark the key-steps in narrations shown in Figure 1?
Yes, you are correct.
Thanks!
Thanks for the good work. I have a simple question regarding data processing. For the video clips, the CrossTask dataset provides annotations. For example, the annotations for 113766_JFnZHAOUClw.csv indicate that "season steak" happens (visually) from 40.51 to 44.21, and so on. But for the narrations, how are you mapping key-steps to narrations? In Table 1 of the paper you show the mapping in bold, but I could not find how this is achieved in the code. Can you please point me to it? I need ground-truth narrations mapped to the key-steps for my research.
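For reference, a minimal sketch of reading such an annotation file (an assumption about the format, not code from this repository: each row is taken to be step_index,start_seconds,end_seconds with no header, and the function name is hypothetical):

```python
# Hypothetical sketch: load one CrossTask-style annotation CSV,
# assuming rows of the form "step_index,start_seconds,end_seconds".
import csv

def load_crosstask_annotations(path):
    """Return a list of (step_index, start, end) tuples for one video."""
    segments = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row:
                continue
            step = int(float(row[0]))
            start, end = float(row[1]), float(row[2])
            segments.append((step, start, end))
    return segments

# e.g. load_crosstask_annotations("113766_JFnZHAOUClw.csv") might yield
# [(k, 40.51, 44.21), ...], where step k corresponds to "season steak".
```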