xptree closed this issue 3 years ago.
Importing embeddings from a file is supported in v0.2.1; see the load section in the configuration file.
To run evaluation only, you can skip training by setting num_epoch to 0 in the train section.
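Combining those two pieces, an evaluation-only configuration might look like the sketch below. The file names are placeholders, and the resume: true flag reflects what is reported later in this thread (without it the embeddings may be re-initialized):

```yaml
# Sketch of an evaluation-only run; file names are placeholders.
load:
  file_name: saved_model.pkl   # previously trained embeddings
train:
  num_epoch: 0                 # skip training entirely
  resume: true                 # keep the loaded embeddings instead of re-initializing
evaluate:
  task: link prediction
  file_name: test_set.txt
```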
I am trying to evaluate a trained model as specified in the load section, but it seems to re-initialize the embeddings instead of loading the trained ones. Do you have a hint on what I am doing wrong in my configuration?

application: knowledge graph
resource:
  gpus: []
  cpu_per_gpu: auto
  dim: 512
graph:
  file_name:
build:
  optimizer:
    type: Adam
    lr: 5.0e-6
    weight_decay: 0
  num_partition: auto
  num_negative: 64
  batch_size: 100000
  episode_size: 1
load:
  file_name: transe_wn18.pkl
train:
  model: TransE
  num_epoch: 0
  margin: 12
  sample_batch_size: 2000
  adversarial_temperature: 2
  log_frequency: 100
  resume: true
evaluate:
  task: link prediction
  file_name:
  filter_files:
I am running into the same situation. Have you solved it?
After reading the docs, I found my problem. I can successfully load the saved embedding model and test it without training. Here is my configuration:
application: knowledge graph
resource:
  gpus: []
  cpu_per_gpu: auto
  dim: 1024
graph:
  file_name: <wn18.train>
build:
  optimizer:
    type: Adam
    lr: 5.0e-6
    weight_decay: 0
  num_partition: auto
  num_negative: 64
  batch_size: 100000
  episode_size: 1
train:
  model: RotatE
  num_epoch: 0
  margin: 9
  sample_batch_size: 2000
  adversarial_temperature: 2
  log_frequency: 100
  resume: True
load:
  file_name: rotate_wn18.pkl
evaluate:
  task: link prediction
  file_name: <wn18.test>
  filter_files:
    - <wn18.train>
    - <wn18.valid>
    - <wn18.test>
  # fast_mode: 3000
I set num_epoch to 0 and resume to True. I found that if resume is set to False or not set at all, the evaluation result is totally wrong (the embeddings seem to be re-initialized). So it is very important to set resume to True if you just want to reload a saved embedding model and evaluate it on the test set without training again.
PS: My graphvite version is v0.2.2.
Yes, this is resolved in v0.2.1+. Sorry, we forgot to close the issue.
I wonder what the best way is to do evaluation only with GraphVite, i.e., import embeddings from a file and evaluate them on some tasks.