awslabs / dgl-ke

High performance, easy-to-use, and scalable package for learning large-scale knowledge graph embeddings.
https://dglke.dgl.ai/doc/
Apache License 2.0
1.25k stars · 194 forks

I run the source code and get the unexpected result, can somebody help ? #233

Open yulong-CSAI opened 2 years ago

yulong-CSAI commented 2 years ago

I have started reading the dglke source code and plan to develop on top of it. However, when I run the source code, I get an unexpected result. I run it with this command:

python train.py --model_name TransE_l2 --dataset FB15k --batch_size 1000 --neg_sample_size 200 --hidden_dim 400 --gamma 19.9 --lr 0.25 --max_step 500 --log_interval 100 --batch_size_eval 16 --test -adv --regularization_coef 1.00E-09 --num_thread 1 --num_proc 48

The result is: (screenshot attached in the original issue, not rendered here)

yulong-CSAI commented 2 years ago

Can somebody help?

yulong-CSAI commented 2 years ago

Did I miss something?

classicsong commented 2 years ago

How many CPUs do you have?

classicsong commented 2 years ago

If you have 24 cores, you should use

dglke_train --model_name TransE_l2 --dataset FB15k --batch_size 1000 --neg_sample_size 200 --hidden_dim 400 --gamma 19.9 --lr 0.25 --max_step 500 --log_interval 100 --batch_size_eval 16 --test -adv --regularization_coef 1.00E-09 --num_thread 1 --num_proc 24

You need to change num_proc to match your CPU count, and adjust max_step accordingly.
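
For reference, here is a minimal sketch of how the two flags relate. This rests on an assumption drawn from the advice above: dgl-ke launches `num_proc` training processes and each one runs `max_step` steps, so the per-process `max_step` is usually scaled down as `num_proc` goes up. `TOTAL_STEPS` and the concrete numbers below are hypothetical placeholders, not values from the project docs.

```shell
# Hypothetical total step budget across all processes; pick your own target.
TOTAL_STEPS=12000

# Match num_proc to the machine's core count (e.g. NUM_PROC=$(nproc)).
NUM_PROC=24

# Scale max_step down so total work stays roughly constant.
MAX_STEP=$(( TOTAL_STEPS / NUM_PROC ))

echo "--num_proc ${NUM_PROC} --max_step ${MAX_STEP}"
```

With 24 cores this yields `--num_proc 24 --max_step 500`; on a 48-core machine the same budget would give `--max_step 250`.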

yulong-CSAI commented 2 years ago

> If you have 24 cores, you should use
>
> dglke_train --model_name TransE_l2 --dataset FB15k --batch_size 1000 --neg_sample_size 200 --hidden_dim 400 --gamma 19.9 --lr 0.25 --max_step 500 --log_interval 100 --batch_size_eval 16 --test -adv --regularization_coef 1.00E-09 --num_thread 1 --num_proc 24
>
> You need to change num_proc and the max_step.

Thanks for the reply, it works!!