thuiar / MMSA

MMSA is a unified framework for Multimodal Sentiment Analysis.
MIT License

More intuitive testing #21

Closed leijue222 closed 3 years ago

leijue222 commented 3 years ago
  1. With run.py, the scores can only be seen in the *.log file. Does it support testing emotion on a video provided by the user?

Oh, I see~. All the data has been preprocessed into *.pkl files for training. So if I want to test on my own input video, I should process it into a pkl file containing text, audio, and video features.

  1. By the way, what is the performance of this method? Can it detect emotion on video in real time?
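A minimal sketch of what such a per-sample pkl could look like. The dict keys and feature shapes below are placeholders, not MMSA's actual format; the real layout and extractors should be checked against the repository's data documentation:

```python
import io
import pickle
import numpy as np

# Hypothetical modality features for one video clip; the shapes and
# dict keys here are assumptions, not MMSA's actual pkl schema.
sample = {
    "text":   np.zeros((1, 39, 768), dtype=np.float32),   # (batch, seq_len, feat_dim)
    "audio":  np.zeros((1, 400, 33), dtype=np.float32),
    "vision": np.zeros((1, 55, 709), dtype=np.float32),
}

# Serialize and reload, as the training pipeline does with its *.pkl files.
buf = io.BytesIO()
pickle.dump(sample, buf)
buf.seek(0)
loaded = pickle.load(buf)

print(loaded["text"].shape)   # (1, 39, 768)
```

Keeping a leading batch axis of 1 makes the single sample look like a batch to the model's data loader.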
leijue222 commented 3 years ago

What configuration needs to be modified to process samples one by one? I modified here and here, adjusting the batch size to 1, but it still reports a dimension-mismatch error.

leijue222 commented 3 years ago

The problem is here: https://github.com/thuiar/MMSA/blob/b2e70bbd198ba8e8dc041f5e059c3baa2027b34a/models/multiTask/SELF_MM.py#L138 I have to do this:

# squeeze() drops the batch dimension when the batch size is 1, so restore it
h = self.dropout(final_states[0].squeeze())
if len(h.shape) == 1:
    h = h.unsqueeze(0)

Otherwise, a batch size of 1 cannot be processed.
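The failure mode is that squeeze() with no arguments removes every size-1 dimension, so a batch of one loses its batch axis. A small NumPy illustration (NumPy's squeeze has the same semantics as torch.Tensor.squeeze):

```python
import numpy as np

h = np.zeros((4, 1, 64))            # (batch=4, seq=1, hidden)
print(h.squeeze().shape)            # (4, 64): only the middle dim is dropped

h1 = np.zeros((1, 1, 64))           # batch size 1
print(h1.squeeze().shape)           # (64,): the batch dim is gone too

# The workaround above: restore the batch axis when it was squeezed away.
out = h1.squeeze()
if out.ndim == 1:
    out = out[np.newaxis, :]        # torch equivalent: out.unsqueeze(0)
print(out.shape)                    # (1, 64)
```

An alternative that avoids the branch entirely is squeezing only the intended axis (e.g. squeeze(0) in torch if the leading dimension is the one meant to be removed), so the batch dimension is never touched.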

Columbine21 commented 3 years ago

Hi, to test model performance on real-time video you do not actually need to change the config file. You need to load the model and preprocess the video data in the same way as the train/test data.
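A rough sketch of that flow with stand-in functions. The real steps — building the MMSA model class, loading the trained checkpoint with PyTorch, and the exact input signature — are assumptions to be adapted to the repository:

```python
import numpy as np

def preprocess(video_path):
    # Stand-in: extract text/audio/vision features exactly as the
    # train/test pipeline does, keeping a leading batch axis of 1.
    # The shapes and keys here are placeholders, not MMSA's schema.
    return {
        "text":   np.zeros((1, 39, 768), dtype=np.float32),
        "audio":  np.zeros((1, 400, 33), dtype=np.float32),
        "vision": np.zeros((1, 55, 709), dtype=np.float32),
    }

def predict(model, sample):
    # Stand-in for inference; with MMSA this would be a forward pass
    # of the loaded SELF_MM model in eval mode under torch.no_grad().
    return float(model(sample["text"], sample["audio"], sample["vision"]))

# Dummy "model" so the sketch runs end to end.
dummy = lambda t, a, v: t.mean() + a.mean() + v.mean()
score = predict(dummy, preprocess("my_clip.mp4"))
print(score)   # 0.0 for the all-zeros placeholder features
```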

leijue222 commented 3 years ago

> Hi, to test model performance on real-time video you do not actually need to change the config file. You need to load the model and preprocess the video data in the same way as the train/test data.

@Columbine21 Thanks! In fact, I'm trying to do exactly that, but I have a problem when dealing with the data and I don't know how to solve it. See issue #22 for details.