declare-lab / SANCL

[COLING 2022] This repository contains the code of the paper SANCL: Multimodal Review Helpfulness Prediction with Selective Attention and Natural Contrastive Learning.

Evaluation metrics question #5

Open · tiebreaker4869 opened this issue 6 days ago

tiebreaker4869 commented 6 days ago

Hi, great to read your work. I'm wondering how relevance is determined for evaluation metrics such as MAP in this task. The label is an integer from 0 to 4 rather than a binary value denoting relevance.
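For example, is relevance obtained by binarizing the graded label at some cutoff before computing MAP? A rough sketch of what I have in mind is below; the threshold value and the use of sklearn are just my guesses, not something I found in the paper or this repo.

```python
# Sketch of one possible convention: binarize the 0-4 helpfulness labels at a
# hypothetical threshold, then compute average precision on the binary labels.
import numpy as np
from sklearn.metrics import average_precision_score

scores = np.array([0.9, 0.2, 0.7, 0.4, 0.1])  # model-predicted helpfulness scores
labels = np.array([4, 0, 3, 2, 1])            # ground-truth integer labels in 0-4

threshold = 2                                  # hypothetical cutoff for "relevant"
binary_relevance = (labels >= threshold).astype(int)

ap = average_precision_score(binary_relevance, scores)
print(f"AP with labels binarized at >= {threshold}: {ap:.4f}")
```

Is it something like this, or is a different rule used to decide which reviews count as relevant?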