Closed sqiangcao99 closed 2 years ago
We used the "clip_rgb" ones. You can also directly use the feature from TeSTra.
@xumingze0308. Hi, thanks for your help. Currently, I have some problems reproducing the reported results (about 6% lower) on TVSeries using the ActivityNet-pretrained features. Specifically,
The RGB frames and flow frames are preprocessed according to the config files of mmaction2, which are:
# RGB
data_pipeline = [
    dict(type='RawFrameDecode'),
    dict(type='CenterCrop', crop_size=256),
    dict(type='Normalize', **args.img_norm_cfg),
    dict(type='FormatShape', input_format='NCHW'),
    dict(type='Collect', keys=['imgs'], meta_keys=[]),
    dict(type='ToTensor', keys=['imgs']),
]
# Flow
data_pipeline = [
    dict(type='RawFrameDecode'),
    dict(type='Resize', scale=(-1, 256)),
    dict(type='TenCrop', crop_size=224),
    dict(type='Normalize', **args.img_norm_cfg),
    dict(type='FormatShape', input_format='NCHW_Flow'),
    dict(type='Collect', keys=['imgs'], meta_keys=[]),
    dict(type='ToTensor', keys=['imgs']),
]
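For reference, the TenCrop step in the flow pipeline turns each frame into ten views: the four corner crops plus the center crop, each paired with its horizontal flip. A minimal pure-Python sketch of the crop geometry (illustrative only; the function name and return format are hypothetical, not mmaction2's API):

```python
def ten_crop_boxes(img_w, img_h, crop_size):
    """Return (x1, y1, x2, y2, flipped) tuples for the 10 crops."""
    w = h = crop_size
    cx, cy = (img_w - w) // 2, (img_h - h) // 2
    # Four corners plus the center crop.
    offsets = [
        (0, 0),                  # top-left
        (img_w - w, 0),          # top-right
        (0, img_h - h),          # bottom-left
        (img_w - w, img_h - h),  # bottom-right
        (cx, cy),                # center
    ]
    boxes = []
    # Each crop is taken once as-is and once horizontally flipped.
    for flip in (False, True):
        for x, y in offsets:
            boxes.append((x, y, x + w, y + h, flip))
    return boxes

# A 256-short-side frame (e.g. resized to 341x256) with crop_size=224:
boxes = ten_crop_boxes(341, 256, 224)
print(len(boxes))  # 10 views per frame
```

Since the features are averaged over these ten views at extraction time, a mismatch here (e.g. using CenterCrop instead) can noticeably shift downstream accuracy.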
Is the whole process correct?
For both RGB and flow, we don't apply CenterCrop; we directly use the frames at their default resolution.
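Following that reply, the RGB pipeline would simply drop the CenterCrop step so frames pass through at their native resolution. A hedged sketch (this exact config is inferred from the comment above, not copied from the repo):

```python
# RGB, without CenterCrop: frames keep their default resolution
data_pipeline = [
    dict(type='RawFrameDecode'),
    dict(type='Normalize', **args.img_norm_cfg),
    dict(type='FormatShape', input_format='NCHW'),
    dict(type='Collect', keys=['imgs'], meta_keys=[]),
    dict(type='ToTensor', keys=['imgs']),
]
```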
Hi,
For the ActivityNet pre-trained models, which two configuration files are used for the RGB and flow streams?