I'm tuning the performance of a TF model with ngraph-bridge v0.22, and I have some initial questions about this project:
There are two ways to register the nGraph pass: (a) optimization passes, (b) a grappler optimizer. It seems the first method is more or less deprecated(?). What is the difference between these two?
How much performance gain could we expect from setting `enable_variables_and_optimizers`?
What is the PlaidML op coverage in this version?
Is there a general guide to tuning ngraph-tf performance (build flags, runtime configs, caveats for different workloads)?