Closed qdinfish closed 5 years ago
Hi,
Thanks for your quick response to help resolve my issue/question.
I still have a question about how the code maps to the method in your paper.
For B1-B4 and pCTDM, how do they map to do_attention, do_one_to_all, do_early_pooling, and interaction?
```python
class pCTDM(nn.Module):
    def __init__(self, model_confs):
        super(pCTDM, self).__init__()
        self.input_size = 7096
        self.hidden_size = 1000
        self.num_players = model_confs.num_players
        self.num_classes = model_confs.num_classes
        self.do_attention = False
        self.do_one_to_all = False
        self.do_early_pooling = True
        self.interaction = True
```

With the flag order [do_attention, do_one_to_all, do_early_pooling, interaction]:

- B1: F, F, F, F; the inputs are CNN features and must not be ranked!
- B2: F, F, F, F; the inputs are CNN+LSTM features and must not be ranked!
- B3: T, T, T, T; the inputs are CNN+LSTM features and must not be ranked!
- B4: F, F, T, T; the inputs are CNN+LSTM features and must be ranked!
The baselines do not mean much, but the final model should obtain a satisfactory result as reported in the paper.
Which setting shall be the final model in your paper? I saw the default setting in your code it's F,F,T,T.
Sorry for the confusion; the setting of the final model is [T, T, T, T].
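For quick reference, the settings discussed in this thread can be collected in one place. This is only a summary sketch: the dictionary name `FLAG_SETTINGS` is mine, not from the repository, and the tuple order follows the flags as listed above.

```python
# Hypothetical summary of the flag settings from this thread (not repo code).
# Tuple order: (do_attention, do_one_to_all, do_early_pooling, interaction).
FLAG_SETTINGS = {
    "B1": (False, False, False, False),  # CNN features, inputs not ranked
    "B2": (False, False, False, False),  # CNN+LSTM features, inputs not ranked
    "B3": (True,  True,  True,  True),   # CNN+LSTM features, inputs not ranked
    "B4": (False, False, True,  True),   # CNN+LSTM features, inputs ranked
    "final": (True, True, True, True),   # final model, per the author's reply
}

# The final model enables all four components:
print(FLAG_SETTINGS["final"])  # (True, True, True, True)
```

Note that the code's default of [F, F, T, T] corresponds to baseline B4, not the final model, so the flags must be changed to reproduce the paper's reported result.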