Closed · avmodi closed this issue 1 year ago
There is a minor code bug in the `attention_on_cat_and_numerical_feats` combine method.
```python
if numerical_feats.shape[1] != 0:
    if self.numerical_feat_dim > self.text_out_dim:
        numerical_feats = self.num_mlp(numerical_feats)
    w_num = torch.mm(numerical_feats, self.weight_num)
    g_num = (torch.cat([w_text, w_cat], dim=-1)  # <-- bug: `w_cat` here
             * self.weight_a).sum(dim=1).unsqueeze(0).T
```
Closing as this is a duplicate of #9.
`w_cat` should be `w_num`.
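For reference, a minimal self-contained sketch of the corrected line, swapping `w_cat` for `w_num` as described above. The tensor shapes are toy values chosen purely for illustration, not the library's real dimensions:

```python
import torch

# Toy shapes, purely illustrative.
batch, text_out_dim, feat_dim = 4, 8, 8
w_text = torch.randn(batch, text_out_dim)
w_num = torch.randn(batch, feat_dim)
weight_a = torch.randn(text_out_dim + feat_dim)

# Corrected line: gate the numerical branch with w_num, not w_cat.
g_num = (torch.cat([w_text, w_num], dim=-1) * weight_a).sum(dim=1).unsqueeze(0).T
print(g_num.shape)  # torch.Size([4, 1])
```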