awslabs / dgl-ke

High performance, easy-to-use, and scalable package for learning large-scale knowledge graph embeddings.
https://dglke.dgl.ai/doc/
Apache License 2.0

Add advanced hyperparameter tuning #87

Open zheng-da opened 4 years ago

zheng-da commented 4 years ago

The technique described in the paper "AutoNE: Hyperparameter Optimization for Massive Network Embedding" is interesting. Similar techniques should be incorporated into DGL-KE to tune hyperparameters on large knowledge graphs effectively.

AlexMRuch commented 4 years ago

This sounds more promising than what I was considering: https://optuna.readthedocs.io/en/latest/tutorial/index.html <-- using subprocess to capture the dgl-ke training output (or embedding Optuna directly into the training loop) and using pruning
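A minimal sketch of the subprocess idea with Optuna. The log pattern, CLI flags, and metric name here are assumptions for illustration -- check them against the `dglke_train` version you have installed before relying on this:

```python
import re
import subprocess


def parse_mrr(log_text):
    """Pull the last reported MRR out of dgl-ke's stdout.

    Assumes the log contains a line like 'Test average MRR: 0.512';
    adjust the regex to whatever your dglke_train actually prints.
    """
    matches = re.findall(r"MRR[:\s]+([0-9.]+)", log_text)
    return float(matches[-1]) if matches else 0.0


def objective(trial):
    """Optuna objective: one dglke_train run per trial, maximizing MRR."""
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    hidden_dim = trial.suggest_categorical("hidden_dim", [200, 400, 800])
    # Hypothetical invocation; flag names mirror the dglke_train CLI
    # but should be verified against your installed version.
    result = subprocess.run(
        ["dglke_train", "--model_name", "TransE_l2", "--dataset", "FB15k",
         "--lr", str(lr), "--hidden_dim", str(hidden_dim),
         "--max_step", "10000"],
        capture_output=True, text=True, check=True,
    )
    return parse_mrr(result.stdout)


# To run (requires optuna and dgl-ke installed):
#   import optuna
#   study = optuna.create_study(direction="maximize")
#   study.optimize(objective, n_trials=20)
#   print(study.best_params)
```

Pruning would need the metric reported mid-training (e.g. via `trial.report(...)` inside a modified training loop), which the subprocess approach can't do -- that is the trade-off between the two options above.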