HestiaSky / IMF-Pytorch

MIT License

code #14

Open sleepyzh opened 9 months ago

sleepyzh commented 9 months ago
```python
        with torch.no_grad():
            val_metrics = corpus.get_validation_pred(model, 'test')
        if val_metrics['Mean Reciprocal Rank'] > best_test_metrics['Mean Reciprocal Rank']:
            best_test_metrics['Mean Reciprocal Rank'] = val_metrics['Mean Reciprocal Rank']
        if val_metrics['Mean Rank'] < best_test_metrics['Mean Rank']:
            best_test_metrics['Mean Rank'] = val_metrics['Mean Rank']
        if val_metrics['Hits@1'] > best_test_metrics['Hits@1']:
            best_test_metrics['Hits@1'] = val_metrics['Hits@1']
        if val_metrics['Hits@3'] > best_test_metrics['Hits@3']:
            best_test_metrics['Hits@3'] = val_metrics['Hits@3']
        if val_metrics['Hits@10'] > best_test_metrics['Hits@10']:
            best_test_metrics['Hits@10'] = val_metrics['Hits@10']
        if val_metrics['Hits@100'] > best_test_metrics['Hits@100']:
            best_test_metrics['Hits@100'] = val_metrics['Hits@100']

print(' '.join(['Test set results:',
                model.format_metrics(best_test_metrics, 'test')]))
```

In this evaluation code, the reported result is not the result of a single best epoch: each metric is tracked to its own best value independently, so the reported values do not come from the same epoch.
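One way to report a consistent set of numbers is to select a single best epoch (e.g. by Mean Reciprocal Rank) and report all metrics from that epoch. A minimal sketch with plain dicts, assuming per-epoch metric dicts shaped like `val_metrics` above (`select_best_epoch` is a hypothetical helper, not part of the repo):

```python
# Select the single epoch with the highest MRR and report *all* of its
# metrics, instead of mixing per-metric bests from different epochs.

def select_best_epoch(per_epoch_metrics):
    """per_epoch_metrics: list of metric dicts, one per epoch."""
    return max(per_epoch_metrics, key=lambda m: m['Mean Reciprocal Rank'])

epochs = [
    {'Mean Reciprocal Rank': 0.30, 'Hits@10': 0.55},
    {'Mean Reciprocal Rank': 0.35, 'Hits@10': 0.52},  # best MRR epoch
    {'Mean Reciprocal Rank': 0.33, 'Hits@10': 0.60},  # best Hits@10, not selected
]
best = select_best_epoch(epochs)
print(best)  # {'Mean Reciprocal Rank': 0.35, 'Hits@10': 0.52}
```

Note that with per-metric bests, the printed Hits@10 would be 0.60 even though no single epoch achieved both 0.35 MRR and 0.60 Hits@10.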

sleepyzh commented 9 months ago

```
GAT(
  (spgat): SpGAT(
    (dropout_layer): Dropout(p=0.3, inplace=False)
    (attention_0): SpGraphAttentionLayer (256 -> 256)
    (attention_1): SpGraphAttentionLayer (256 -> 256)
    (out_att): SpGraphAttentionLayer (512 -> 512)
  )
)
```

In the GAT encoder, with 2 heads and a dim of 256, the final output dimension of the model is 512, which does not match the embedding dim.
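The mismatch arises because multi-head attention outputs are concatenated: 2 heads × 256 dims = 512. A common remedy (this is a generic sketch with plain tensors, not the repo's `SpGraphAttentionLayer`; whether the authors intended averaging or a projection is an open question here) is to average the heads at the output layer, or project the concatenation back down:

```python
import torch
import torch.nn as nn

# With 2 heads of dim 256, concatenation gives 512-dim output,
# which no longer matches a 256-dim embedding table.
n_nodes, heads, head_dim, embed_dim = 10, 2, 256, 256
head_outputs = [torch.randn(n_nodes, head_dim) for _ in range(heads)]

concat = torch.cat(head_outputs, dim=1)                 # shape (10, 512): mismatch
avg = torch.stack(head_outputs, dim=0).mean(dim=0)      # shape (10, 256): matches
proj = nn.Linear(heads * head_dim, embed_dim)(concat)   # shape (10, 256): matches

print(concat.shape, avg.shape, proj.shape)
```

Averaging is what the original GAT paper uses for its final layer; a linear projection keeps more information at the cost of extra parameters.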

sleepyzh commented 9 months ago

The context model you proposed in the paper is not in the code, and the contrastive learning function (`def contrastive_loss(self, s_embed, v_embed, t_embed):`) is not used in the default code. How is it meant to be used? Looking forward to your reply.

sun8982 commented 2 months ago

> The context model you proposed in the paper is not in the code, and the contrastive learning function (`def contrastive_loss(self, s_embed, v_embed, t_embed):`) is not used in the default code. How is it meant to be used? Looking forward to your reply.

May I ask what versions of the environment you ran the code with? I used torch 1.7 with CUDA 11.0 on an RTX 3090 graphics card, but it fails with this error:

```
Traceback (most recent call last):
  File "main.py", line 234, in <module>
    train_encoder(args)
  File "main.py", line 119, in train_encoder
    entity_embed, relation_embed = model.forward(corpus.train_adj_matrix, train_indices)
  File "/media/xxx/4962f605-dcf4-4c7f-95a6-36b784b7ace0/gpuserver/sh/paper_code/IMF-Pytorch/models/model.py", line 78, in forward
    self.entity_embeddings.data = F.normalize(self.entity_embeddings.data, dim=1)
  File "/home/xxx/anaconda3/envs/imf/lib/python3.7/site-packages/torch/nn/functional.py", line 3788, in normalize
    denom = input.norm(p, dim, keepdim=True).clamp_min(eps).expand_as(input)
  File "/home/xxx/anaconda3/envs/imf/lib/python3.7/site-packages/torch/tensor.py", line 389, in norm
    return torch.norm(self, p, dim, keepdim, dtype=dtype)
  File "/home/xxx/anaconda3/envs/imf/lib/python3.7/site-packages/torch/functional.py", line 1337, in norm
    return _VF.norm(input, p, _dim, keepdim=keepdim)  # type: ignore
RuntimeError: CUDA error: no kernel image is available for execution on the device
```
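This error usually means the installed PyTorch binary was not built with kernels for the GPU's compute capability (the RTX 3090 is sm_86, which early torch 1.7 + cu110 wheels may not cover). In recent PyTorch versions you can compare `torch.cuda.get_device_capability(0)` against `torch.cuda.get_arch_list()`. A pure-Python sketch of that check, assuming arch strings of the form `"sm_XY"` (compiled binary) and `"compute_XY"` (PTX, forward-compatible with newer devices):

```python
def is_supported(device_cap, arch_list):
    """True if a PyTorch build's arch list covers a device capability.

    device_cap: (major, minor) tuple, e.g. (8, 6) for an RTX 3090.
    arch_list:  strings like "sm_80" / "compute_80", as returned by
                torch.cuda.get_arch_list() in recent PyTorch versions.
    """
    major, minor = device_cap
    for arch in arch_list:
        kind, num = arch.split("_")
        a_major, a_minor = int(num[0]), int(num[1:])
        if kind == "sm" and (a_major, a_minor) == (major, minor):
            return True  # exact compiled kernels for this device
        if kind == "compute" and (a_major, a_minor) <= (major, minor):
            return True  # PTX can be JIT-compiled for a newer device
    return False

# An sm_86 device with only sm_80 binaries and no PTX fails -> the error above.
print(is_supported((8, 6), ["sm_80"]))        # False
print(is_supported((8, 6), ["compute_80"]))   # True
```

If the check fails, installing a newer PyTorch wheel built for CUDA 11.1+ (which includes sm_86) is the usual fix.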