tensorflow / tensor2tensor

Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
Apache License 2.0

xla_compile flag is ignored / disabled #1439

Open etragas-fathom opened 5 years ago

etragas-fathom commented 5 years ago

Background

I saw no speed-up after running some experiments with the xla_compile flag set to True.

Digging deeper, it seems the flag does nothing.

The flag is defined here: https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/bin/t2t_trainer.py#L57

And is used uniquely here: https://github.com/tensorflow/tensor2tensor/blob/28adf2690c551ef0f570d41bef2019d9c502ec7e/tensor2tensor/bin/t2t_trainer.py#L196

That takes us to create_experiment_fn, which just calls create_experiment under the hood.

In create_experiment, use_xla is passed along to create_estimator, but create_estimator never uses it; it simply deletes the argument.
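The call chain above can be reproduced in plain Python. The function names below mirror the ones in t2t_trainer / trainer_lib, but the bodies are simplified stand-ins written for illustration, not the actual tensor2tensor implementations:

```python
# Minimal sketch of the reported bug pattern: a flag is forwarded
# through the call chain and then silently dropped before it can
# affect anything. Bodies are hypothetical stand-ins.

def create_estimator(model_name, hparams, run_config, **kwargs):
    # The reported behavior: use_xla arrives in kwargs but is
    # deleted instead of being wired into the estimator config.
    kwargs.pop("use_xla", None)  # flag silently discarded here
    return {"model": model_name, "xla_enabled": False}

def create_experiment(model_name, hparams, run_config, use_xla=False):
    # use_xla is forwarded faithfully...
    return create_estimator(model_name, hparams, run_config,
                            use_xla=use_xla)

# ...so passing --xla_compile=True has no observable effect:
estimator = create_experiment("transformer", {}, {}, use_xla=True)
print(estimator["xla_enabled"])  # False: the flag never reaches the graph
```

This is why the experiments show identical timings with and without the flag: the value is consumed and discarded before any XLA-related configuration is built.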

multibla commented 5 years ago

Hi, I ran into this problem as well. I also see a second issue: when I set xla_jit_level = 1, training becomes much slower, roughly 5 times slower than the baseline. According to the official TensorFlow documentation, setting xla_jit_level = 1 should speed up training.
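For reference, in the TF 1.x API the global JIT level is normally enabled through the session config rather than a t2t flag; a minimal configuration sketch (assuming TensorFlow 1.x) looks like this:

```python
import tensorflow as tf  # TF 1.x API

# Enable XLA auto-clustering for the whole graph. ON_1 asks XLA to
# compile eligible op clusters; ON_2 is a more aggressive level.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = (
    tf.OptimizerOptions.ON_1)

# sess = tf.Session(config=config)  # pass the config when creating the session
```

Note that XLA compiles clusters the first time they run, so the early training steps pay a compilation cost; a short benchmark can easily look slower than the baseline even when steady-state throughput improves, and some models never recoup the overhead.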