XLA is an abstraction layer over computation graphs that promises better efficiency, consistency, and portability. Its most significant feature, however, is that it enables access to Google's TPUs and to other custom accelerator hardware that targets the same abstraction.
Sample code using an XLA device could look like:
from mxnet import nd
from mxnet_xla import xla
x = nd.ones((4,5), ctx=xla.xla_device())
print(x)
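To make the proposed API concrete, here is a minimal pure-Python sketch of how an XLA device context could plug into an ndarray-style `ctx=` argument, mirroring mxnet's `Context` mechanism. All names below (`Device`, `XLADevice`, `ones`) are illustrative stand-ins, not the real mxnet or mxnet_xla implementation.

```python
# Hypothetical sketch of device-context dispatch, mirroring mxnet's
# ctx= convention. None of these names are the actual mxnet_xla API.

class Device:
    """Base device context, analogous to mxnet.Context."""
    def __init__(self, device_type, device_id=0):
        self.device_type = device_type
        self.device_id = device_id

    def __repr__(self):
        return f"{self.device_type}({self.device_id})"

class XLADevice(Device):
    """Marks arrays for placement on an XLA-managed accelerator (e.g. TPU)."""
    def __init__(self, device_id=0):
        super().__init__("xla", device_id)

def xla_device(device_id=0):
    """Factory mirroring the proposed xla.xla_device() call."""
    return XLADevice(device_id)

def ones(shape, ctx=None):
    """Toy stand-in for nd.ones: records which device owns the data."""
    ctx = ctx or Device("cpu")
    n = 1
    for dim in shape:
        n *= dim
    return {"data": [1.0] * n, "shape": shape, "ctx": ctx}

x = ones((4, 5), ctx=xla_device())
print(x["ctx"])   # -> xla(0)
```

The point of the sketch is that, from the user's side, nothing changes except the context object passed as `ctx`; placement on the accelerator would be handled behind that abstraction.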