-
Hello! I ran the WN18 dataset with the current version of fast-transe and with the previous one or two versions.
The results are roughly as follows (parameters: size = 50, epoch = 1000, alpha = 0.001):
```
left          426.596802  0.798800
left(filter)  411.633392  0.943800
right         446.216400  0.808000
right(filter) 432.7344…
```
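If these columns follow the usual Fast-TransX output format, the two numbers per row are mean rank and Hits@10, under the raw and filtered protocols respectively. A minimal sketch of how a single raw vs. filtered rank would be computed; the helper name and toy scores below are illustrative, not from fast-transe:

```python
import numpy as np

def filtered_rank(scores, true_idx, known_idx=()):
    """Rank of the gold entity among all candidates, with lower-is-better
    scores as in TransE. Entities in known_idx (other known-true answers)
    are masked out, giving the 'filter' setting; pass none for the raw rank."""
    scores = np.asarray(scores, dtype=float).copy()
    for i in known_idx:
        if i != true_idx:
            scores[i] = np.inf  # drop competing true triples from the ranking
    # rank = 1 + number of candidates scoring strictly better than the gold one
    return 1 + int(np.sum(scores < scores[true_idx]))

# toy usage: 5 candidate entities, entity 2 is the gold answer
scores = [0.9, 0.3, 0.5, 0.1, 0.8]
print(filtered_rank(scores, true_idx=2))                 # raw rank -> 3
print(filtered_rank(scores, true_idx=2, known_idx=[3]))  # filtered rank -> 2
```

Mean rank is the average of these ranks over all test triples, and Hits@10 is the fraction of them with rank <= 10, which is why the filtered numbers are always at least as good as the raw ones.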
-
Hello,
When I fit a model, specifying a list of entities for the corruption (the "corruption_entities" parameter in the "early_stopping_params" dictionary), I always get the following message which is…
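For context, a minimal sketch of the kind of call that produces the message, assuming the AmpliGraph 1.x `fit` signature; the triples and the entity subset are placeholders:

```python
import numpy as np
from ampligraph.latent_features import TransE

X_train = np.array([['a', 'r', 'b'], ['b', 'r', 'c']])  # placeholder triples
X_valid = np.array([['a', 'r', 'c']])

model = TransE(batches_count=1, epochs=10, k=50, verbose=True)
model.fit(
    X_train,
    early_stopping=True,
    early_stopping_params={
        'x_valid': X_valid,
        'criteria': 'mrr',
        # restrict negatives to an explicit entity list -- the setting
        # that triggers the message quoted above
        'corruption_entities': ['a', 'b', 'c'],
        'check_interval': 5,
    },
)
```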
-
### Description
I am trying to replicate the performance results from the documentation, but I get a runtime error when using early stopping with FB15k-237.
I have used early stopping with other…
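For reference, a minimal end-to-end sketch of the setup being replicated, assuming this is the AmpliGraph 1.x API; the hyperparameters are illustrative placeholders, not the documented best configuration:

```python
import numpy as np
from ampligraph.datasets import load_fb15k_237
from ampligraph.latent_features import ComplEx
from ampligraph.evaluation import evaluate_performance, mrr_score

X = load_fb15k_237()  # dict with 'train', 'valid', 'test' splits

model = ComplEx(batches_count=50, epochs=300, k=200, verbose=True)
model.fit(
    X['train'],
    early_stopping=True,
    # validate on a subset to keep the early-stopping checks cheap
    early_stopping_params={'x_valid': X['valid'][:1000], 'criteria': 'mrr'},
)

# filtered evaluation: mask all known-true triples when ranking
filter_triples = np.concatenate([X['train'], X['valid'], X['test']])
ranks = evaluate_performance(X['test'], model=model,
                             filter_triples=filter_triples)
print(mrr_score(ranks))
```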
-
| Dataset | FB15k | FB15k-237 | wn18 | wn18rr |
|---------|-------|-----------|------|--------|
| MRR | .797 ± .001 | **.949 ± .000** | **.337 ± .001** | .477 ± .001 |
The MRR values in the FB15k-237 and wn18 columns should be swapped with each other.
-
The training process (excluding the early-stopping and test phases) takes me 7 hours on the wn18 dataset using the default parameters on a Linux machine (one GTX 1080 Ti). Is this normal?
And I find t…
-
I have a question about the fairness of this experiment.
The article describes how the test and training sets are handled. But Neural-LP reasons based on known facts and rules. The facts are split from the i…
-
**Description**
We need to publish a _single_ script that reproduces our best results shown in #17.
e.g.:
`$ ./predictive_performance.py -i fb15k_237 -m complex`
The script _may_ take up to two ar…
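A possible skeleton for that script's argument handling, matching the example invocation above; the dataset and model registries are placeholders, and the training/evaluation body is deliberately stubbed out:

```python
#!/usr/bin/env python
import argparse

# placeholder registries -- the real script would map each pair to its
# best published hyperparameter configuration
DATASETS = ['fb15k', 'fb15k_237', 'wn18', 'wn18rr']
MODELS = ['transe', 'distmult', 'complex']

def main():
    parser = argparse.ArgumentParser(
        description='Reproduce the best published results for one '
                    'dataset/model pair.')
    parser.add_argument('-i', '--dataset', choices=DATASETS, required=True)
    parser.add_argument('-m', '--model', choices=MODELS, required=True)
    args = parser.parse_args()
    # look up the stored hyperparameters for (dataset, model), train,
    # evaluate, and print MRR / Hits@N -- training code is out of scope here
    print(f'would reproduce {args.model} on {args.dataset}')

if __name__ == '__main__':
    main()
```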
-
Can you provide the parameters for reproducing the results from the paper on `FB15k` and `FB15K-237`? I ran the command from the README:
```
CUDA_VISIBLE_DEVICES=0 python main.py --dataset FB15k-2…
```
-
When training with the original train_wn18.sh, memory usage grows by about 800 MB at every eval. Is this a memory leak?
If you reduce eval_freq, this should be clearly observable.
The environment is Ubuntu 18.04 + TensorFlow 1.13.0-rc0.
I used tracemalloc to monitor memory but did not find anything unusual.
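One caveat worth noting: tracemalloc only tracks allocations made through Python's allocator, so memory held by TensorFlow's native (C++) runtime will not show up in its snapshots, which could explain seeing nothing unusual there. A minimal sketch of the snapshot-diff pattern; the eval function is a stand-in that just allocates a big list:

```python
import tracemalloc

def run_eval():
    # placeholder for one evaluation pass of the train_wn18.sh model
    return [0] * 1_000_000

tracemalloc.start()
before = tracemalloc.take_snapshot()

leaked = run_eval()

after = tracemalloc.take_snapshot()
# top Python-level allocation growth between the two snapshots;
# TensorFlow's native allocations will NOT appear here
for stat in after.compare_to(before, 'lineno')[:10]:
    print(stat)
```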
-
### Description
I was trying to run the following example code for the wn18 dataset (I only changed the regularizer from None to L2 because None was also giving me an error):
**Train and evaluate an…
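For reference, a minimal sketch of setting an L2-style regularizer, under the assumption that this is AmpliGraph 1.x, where the `'LP'` regularizer with `p=2` plays that role; the other hyperparameters and the lambda value are placeholders, not tuned settings:

```python
from ampligraph.datasets import load_wn18
from ampligraph.latent_features import TransE

X = load_wn18()  # dict with 'train', 'valid', 'test' splits

# 'LP' with p=2 is the L2-style penalty in AmpliGraph 1.x;
# the lambda value here is illustrative, not tuned
model = TransE(batches_count=100, epochs=100, k=100,
               regularizer='LP',
               regularizer_params={'p': 2, 'lambda': 1e-5},
               verbose=True)
model.fit(X['train'])
```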