Closed ttgeng233 closed 2 years ago
Our code allows a single moment to be matched to multiple overlapping GT segments; lines 517-526 in the same file handle this.
Hi, I saw your issue #43 before; my answer remains the same. The code you mentioned ensures that each feature point has only one regression target, and ActionFormer performs class-agnostic boundary regression.
For EPIC-Kitchens 100, each action is composed of a verb plus a noun. Although different actions may overlap, our center sampling strategy lets us still achieve good performance on EPIC-Kitchens, since most feature points (which represent the center area of an action) carry only one class.
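To make the center sampling idea concrete, here is a minimal sketch. The function name and the `radius_frac` parameter are hypothetical, invented for illustration; in the actual repo the sampling radius is tied to the FPN stride rather than the segment length.

```python
import torch

def center_sampling_mask(points, seg, radius_frac=0.25):
    """Sketch: keep only feature points near the segment center as positives.

    points:      [T] feature point locations (in frames)
    seg:         (start, end) of one GT segment
    radius_frac: hypothetical fraction of the segment length used as radius
    """
    start, end = seg
    center = 0.5 * (start + end)
    radius = radius_frac * (end - start)
    return (points >= center - radius) & (points <= center + radius)

# toy example: segment spans [1, 9], so only points near its center (5.0)
# are treated as positives for this action
points = torch.arange(0, 10, dtype=torch.float32)
mask = center_sampling_mask(points, (1.0, 9.0))
print(points[mask].tolist())  # points within radius 2 of the center
```

Because positives are confined to the center region, two actions that overlap only at their boundaries rarely compete for the same feature point.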
If you want to tweak ActionFormer for such multi-label datasets, you may need to perform class-aware regression, i.e., in the code you mentioned, keep one regression target per class and perform per-class boundary regression.
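A minimal sketch of what class-aware regression targets could look like. The function and shapes below are assumptions for illustration, not the repo's actual code: instead of a single `[T, 2]` target, each point keeps a `(left, right)` pair per class.

```python
import torch

def class_aware_reg_targets(points, gt_segments, gt_labels, num_classes):
    """Sketch: per-class regression targets, one (left, right) pair per class.

    points:      [T] feature point locations (in frames)
    gt_segments: [N, 2] GT (start, end) pairs
    gt_labels:   [N] integer class ids
    Returns gt_reg of shape [T, num_classes, 2]; -1 marks "no target".
    (Same-class overlaps would still need a tie-break rule; omitted here.)
    """
    T = points.shape[0]
    gt_reg = torch.full((T, num_classes, 2), -1.0)
    for seg, label in zip(gt_segments, gt_labels):
        start, end = seg
        label = int(label)
        inside = (points >= start) & (points <= end)
        # left / right boundary offsets for points inside this segment
        gt_reg[inside, label, 0] = points[inside] - start
        gt_reg[inside, label, 1] = end - points[inside]
    return gt_reg

# toy example: two overlapping actions of different classes
points = torch.arange(0, 10, dtype=torch.float32)
segs = torch.tensor([[1.0, 6.0], [4.0, 9.0]])
labels = torch.tensor([0, 2])
gt_reg = class_aware_reg_targets(points, segs, labels, num_classes=3)
print(gt_reg[5])  # point 5 carries targets for both class 0 and class 2
```

A point covered by two actions of different classes now keeps both regression targets instead of being forced to pick one.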
OK. I see the confusion here. In short, I don't think there is a need for class-aware regression. An important detail is that when decoding the action instances, our code considers every category in each feature slot as a candidate.
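The decoding idea can be sketched as follows: score every (feature slot, class) pair, take the top-k, and let each pick reuse the slot's class-agnostic offsets. This is a simplified illustration with invented names, not the repo's actual decoding code (which also applies score thresholds and NMS).

```python
import torch

def decode_candidates(cls_scores, reg_offsets, points, topk=5):
    """Sketch: every (feature slot, class) pair is a candidate detection.

    cls_scores:  [T, C] per-class probabilities
    reg_offsets: [T, 2] class-agnostic (left, right) offsets
    points:      [T] feature point locations
    Returns (segments [k, 2], labels [k], scores [k]).
    """
    T, C = cls_scores.shape
    flat = cls_scores.flatten()                       # [T*C] candidates
    scores, idx = flat.topk(min(topk, T * C))
    t_idx = torch.div(idx, C, rounding_mode="floor")  # slot of each pick
    c_idx = idx % C                                   # class of each pick
    # the same class-agnostic offsets serve every class at a slot
    starts = points[t_idx] - reg_offsets[t_idx, 0]
    ends = points[t_idx] + reg_offsets[t_idx, 1]
    return torch.stack([starts, ends], dim=1), c_idx, scores

# toy example: 3 slots, 2 classes; the top-2 picks come from
# different slots and different classes
cls_scores = torch.tensor([[0.1, 0.9], [0.8, 0.2], [0.3, 0.4]])
reg_offsets = torch.tensor([[0.5, 0.5], [1.0, 1.0], [0.0, 0.0]])
points = torch.arange(3, dtype=torch.float32)
segs, labels, scores = decode_candidates(cls_scores, reg_offsets, points, topk=2)
print(labels.tolist(), segs.tolist())
```

So even with a single regression target per slot, two overlapping actions of different classes can both surface as detections, each from its own (slot, class) candidate.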
To further clarify, let us consider the following cases.
For case 3, a major failure mode lies in the sub-action pattern, i.e., one sub-action is localized at the center of another action.
For example, suppose an action is composed of three sub-actions, and the second sub-action sits at the center of the full action. In that case, ActionFormer will fail. Such cases can be found in complex datasets like FineGym.
In the example you mentioned, the second sub-action, co-centered with the full action, is likely assigned to a different FPN level. A corner case would be two GT segments co-centered on the same FPN level. But again, predicting the boundaries of either one will yield a fairly high tIoU (>0.5) with the other, due to the design of the FPN.
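A worked example of that corner case. Assuming each FPN level's regression range roughly doubles over the previous one, two co-centered segments on the same level differ in length by at most about 2x, which bounds their tIoU from below:

```python
def tiou(seg_a, seg_b):
    """Temporal IoU of two (start, end) segments."""
    inter = max(0.0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))
    union = max(seg_a[1], seg_b[1]) - min(seg_a[0], seg_b[0])
    return inter / union if union > 0 else 0.0

# two co-centered segments at the worst-case 2x length ratio that a
# single FPN level (with doubling regression ranges) can host
action = (0.0, 8.0)      # length 8, centered at 4
sub_action = (2.0, 6.0)  # length 4, same center
print(tiou(action, sub_action))  # 0.5
```

For co-centered segments, tIoU is simply shorter length over longer length, so a length ratio of at most 2 guarantees tIoU >= 0.5: predicting one segment's boundaries still counts as a reasonable match for the other at the 0.5 threshold.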
Closed due to inactivity.
Thank you for your good project! In lines 510-515 of ./libs/modeling/meta_archs.py, I can't fully understand the code, but it seems to keep only one action per moment in gt_reg. In my experiments, however, I found that the EPIC-Kitchens-100 dataset has many videos with more than one action happening at the same time, as shown in its original paper. So, if there is an error in the code, how should I modify it to allow multiple actions per timestep?