knowledgetechnologyuhh / OMGEmotionChallenge

Repository for the OMG Emotion Challenge

A question about caculateEvaluationCCC.py #5

Open wtomin opened 6 years ago

wtomin commented 6 years ago

I looked at caculateEvaluationCCC.py and found something confusing. Previously, the CCC was calculated for each video, and the mean of the per-video CCCs was used to evaluate model performance, as shown by the following code.


    cccArousal = numpy.array(cccArousal)
    cccValence = numpy.array(cccValence)
    print ("CCC Arousals:", cccArousal)
    print ("CCC Valences:", cccValence)

    print ("Mean CCC Arousal:", cccArousal.mean())
    print ("Mean CCC Valence:", cccValence.mean())

Now, the CCC seems to be calculated over all utterances of the validation set at once, without considering which video each utterance belongs to:

    dataYArousal = dataY["arousal"]
    dataYValence = dataY["valence"]
    dataYPredArousal = dataYPred["arousal"]
    dataYPredValence = dataYPred["valence"]

    arousalCCC, acor = ccc(dataYArousal, dataYPredArousal)
    arousalmse = mse(dataYArousal, dataYPredArousal)
    valenceCCC, vcor = ccc(dataYValence, dataYPredValence)
    valencemse = mse(dataYValence, dataYPredValence)
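
For reference, the `ccc` helper called above presumably computes Lin's concordance correlation coefficient, and judging from the call sites it returns both the CCC and the Pearson correlation. A minimal sketch of such a function (my own reading, not necessarily identical to the repository's implementation) could look like this:

    import numpy as np

    def ccc_sketch(y_true, y_pred):
        """Lin's concordance correlation coefficient plus Pearson r (sketch only)."""
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        mean_t, mean_p = y_true.mean(), y_pred.mean()
        var_t, var_p = y_true.var(), y_pred.var()
        # Covariance between ground truth and predictions (population form).
        cov = np.mean((y_true - mean_t) * (y_pred - mean_p))
        pearson_r = cov / (np.sqrt(var_t * var_p) + 1e-12)
        ccc = 2.0 * cov / (var_t + var_p + (mean_t - mean_p) ** 2 + 1e-12)
        return ccc, pearson_r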

This seems a little strange. Shouldn't the CCC be calculated for each video and then averaged over the validation set? And which of the two methods did you use in your baseline model evaluation?
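
To make the difference concrete, here is a small sketch of the two aggregation strategies, reusing the `ccc_sketch` function above and two hypothetical videos with made-up arousal annotations and predictions:

    import numpy as np

    # Hypothetical per-video (ground truth, prediction) pairs; arousal only, for brevity.
    videos = {
        "video_1": (np.array([0.1, 0.3, 0.5]), np.array([0.2, 0.2, 0.4])),
        "video_2": (np.array([-0.2, 0.0, 0.1]), np.array([-0.1, 0.1, 0.0])),
    }

    # Option A: compute the CCC per video, then average over the validation set.
    per_video_ccc = [ccc_sketch(y, y_hat)[0] for y, y_hat in videos.values()]
    mean_ccc = float(np.mean(per_video_ccc))

    # Option B: pool all utterances and compute a single CCC.
    y_all = np.concatenate([y for y, _ in videos.values()])
    y_hat_all = np.concatenate([y_hat for _, y_hat in videos.values()])
    pooled_ccc, _ = ccc_sketch(y_all, y_hat_all)

    print("Mean per-video CCC:", mean_ccc)
    print("Pooled CCC:", pooled_ccc)

The two numbers generally differ: the pooled CCC also credits getting the between-video differences right, while per-video averaging only measures how well the predictions track the annotations within each video.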

I'd be grateful for your reply.