Open · zheng-da opened this issue 4 years ago
This sounds more promising than what I was considering: https://optuna.readthedocs.io/en/latest/tutorial/index.html. The idea would be to run dgl-ke training in a subprocess and parse its output (or embed Optuna directly into the training loop), and to use Optuna's pruning to kill unpromising trials early.
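To make the subprocess idea concrete, here is a minimal sketch of an Optuna study that drives `dglke_train` as a subprocess, parses validation MRR from its log lines, and reports it to a pruner. The log-line regex and the exact set of `dglke_train` flags are assumptions on my part; check `dglke_train --help` and the real log format before relying on this.

```python
# Sketch: tune dgl-ke with Optuna by running dglke_train in a subprocess.
# The VALID_MRR pattern is a guess at dgl-ke's validation log line; adjust
# it to match the actual output.
import re
import subprocess

import optuna

VALID_MRR = re.compile(r"Valid average MRR.*?([0-9.]+)")

def objective(trial):
    lr = trial.suggest_float("lr", 1e-3, 1e-1, log=True)
    hidden_dim = trial.suggest_categorical("hidden_dim", [200, 400, 800])
    neg_sample_size = trial.suggest_categorical("neg_sample_size", [64, 128, 256])

    cmd = [
        "dglke_train",
        "--model_name", "TransE_l2",
        "--dataset", "FB15k",
        "--batch_size", "1000",
        "--max_step", "20000",
        "--lr", str(lr),
        "--hidden_dim", str(hidden_dim),
        "--neg_sample_size", str(neg_sample_size),
        "--valid",  # periodic validation so there are MRR lines to parse
    ]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    step, mrr = 0, 0.0
    for line in proc.stdout:
        m = VALID_MRR.search(line)
        if m:
            mrr = float(m.group(1))
            trial.report(mrr, step)   # feed intermediate MRR to the pruner
            step += 1
            if trial.should_prune():  # cut off hopeless runs early
                proc.kill()
                raise optuna.TrialPruned()
    proc.wait()
    return mrr

study = optuna.create_study(direction="maximize",
                            pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)
print(study.best_params)
```

Embedding Optuna directly in the training loop would look the same, except `trial.report` is called on the in-process validation metric instead of a parsed log line, which avoids the fragile regex.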
The technique described in the paper "AutoNE: Hyperparameter Optimization for Massive Network Embedding" is interesting: it runs cheap hyperparameter trials on small sampled subgraphs and transfers what it learns to the full graph. Similar techniques should be incorporated into DGL-KE so that hyperparameters can be tuned effectively on large knowledge graphs.
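As a rough, heavily simplified illustration of that loop (not the paper's actual code, which also feeds subgraph signatures into the meta-learner), the shape is: many cheap trials on subgraphs, a surrogate model fit on the results, then a few expensive full-graph runs at the surrogate's top pick. The `run_trial` stub below stands in for a real dgl-ke training run such as the subprocess call above.

```python
# Toy, runnable sketch of an AutoNE-style transfer loop. run_trial is a
# synthetic stand-in for training dgl-ke on a (sub)graph and returning
# validation MRR.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def sample_hyperparams():
    # log10(learning rate) in [-3, -1], log2(embedding dim) in [7, 10].
    return np.array([rng.uniform(-3, -1), rng.uniform(7, 10)])

def run_trial(hp, graph_frac):
    # Stand-in objective: peak near lr=1e-2, dim=512; smaller subgraphs
    # (lower graph_frac) give noisier estimates, as in the real setting.
    log_lr, log_dim = hp
    score = -(log_lr + 2.0) ** 2 - 0.1 * (log_dim - 9.0) ** 2
    return score + rng.normal(scale=0.01 / graph_frac)

# Phase 1: many cheap trials on small sampled subgraphs (here, 1%).
X = np.array([sample_hyperparams() for _ in range(30)])
y = np.array([run_trial(hp, graph_frac=0.01) for hp in X])

# Phase 2: fit a surrogate on the cheap results and rank fresh candidates.
surrogate = GaussianProcessRegressor().fit(X, y)
candidates = np.array([sample_hyperparams() for _ in range(1000)])
best = candidates[np.argmax(surrogate.predict(candidates))]

# Phase 3: one expensive full-graph run at the surrogate's top pick.
print("chosen (log10 lr, log2 dim):", best)
print("full-graph score:", run_trial(best, graph_frac=1.0))
```

The appeal for DGL-KE is that phase 1 can run on subgraphs small enough to fit on one GPU, so only the final confirmation runs pay the full cost of a massive knowledge graph.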