AlbusPeter / VEATIC

[WACV 2024] Code release for "VEATIC: Video-based Emotion and Affect Tracking in Context Dataset"
https://veatic.github.io/

About Experiment Details #1

Open cenjinglun opened 8 months ago

cenjinglun commented 8 months ago

I found that the experiments in the paper include three types: fully-informed, character-only, and context-only. However, from the code, it seems that only the first type is covered. Could you explain in more detail how these three types of experiments are conducted and how the division into character-only and context-only is achieved?

AlbusPeter commented 8 months ago

We adopted an instance segmentation model to extract masks of the main character. Character-only frames then keep the character region sharp with the surrounding context blurred out, and vice versa for the context-only frames.
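This masking scheme can be sketched as follows. This is not the authors' released code: the blur choice (a simple box blur), the function names, and the mask format (a boolean array marking the character) are all assumptions for illustration; it only shows the compositing idea of keeping one region sharp and blurring the other.

```python
import numpy as np

def box_blur(img: np.ndarray, radius: int = 8) -> np.ndarray:
    """Crude box blur: average over a (2*radius+1)^2 window, per channel."""
    k = 2 * radius + 1
    # pad with edge values so the output keeps the input's spatial shape
    padded = np.pad(img.astype(np.float64),
                    ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    # summed-area table over the two spatial axes
    s = padded.cumsum(axis=0).cumsum(axis=1)
    s = np.pad(s, ((1, 0), (1, 0), (0, 0)))  # zero row/col for clean window sums
    h, w = img.shape[:2]
    out = (s[k:k + h, k:k + w] - s[:h, k:k + w]
           - s[k:k + h, :w] + s[:h, :w]) / (k * k)
    return out.astype(img.dtype)

def masked_variant(frame: np.ndarray, mask: np.ndarray,
                   character_only: bool = True) -> np.ndarray:
    """Keep one region sharp and blur the rest.

    frame: H x W x C image; mask: H x W bool array, True on the main character.
    character_only=True blurs the context; False blurs the character.
    """
    blurred = box_blur(frame)
    keep = mask if character_only else ~mask
    return np.where(keep[..., None], frame, blurred)
```

Swapping `character_only` then yields both experimental conditions from the same frame and mask.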

cenjinglun commented 8 months ago

Do you have plans to update the complete implementation code for this part?

AlbusPeter commented 8 months ago

The mask generation is not the main part of the dataset. We ran the instance segmentation model and asked some research assistants to retouch the masks. If you want to use the masks, just send me an email and we are happy to share those with you!

arijit-byte commented 1 month ago

> The mask generation is not the main part of the dataset. We ran the instance segmentation model and asked some research assistants to retouch the masks. If you want to use the masks, just send me an email and we are happy to share those with you!

@AlbusPeter, I have emailed you; kindly send me the masks.