Fast
2.5x+ speedup over a compiled PyTorch* model on Apple Silicon in early benchmarks. Similar performance gains are expected across more architectures and platforms** as MKL/CUDA support matures and Zigrad's ML graph compiler becomes operational.
*TensorFlow excluded for chart-scaling purposes. **A hermetic, reproducible benchmarking pipeline built on Bazel will enable testing across more platforms (in progress).
Built for specialized optimization
Zigrad's design enables deep control and customization.
But wait, there's more...
Small binaries: compiles in `ReleaseFast` mode and comes in under 200 KB in `ReleaseSmall`.*
*Not yet merged
An example of tracing the computation graph generated by a fully connected neural network for MNIST.
28x28 -> 784
784 -> 128
128 -> 64
64 -> 10
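To make the layer shapes above concrete, here is a minimal NumPy sketch of the forward pass of that fully connected network. This is an illustration only, not Zigrad code: the weight initialization and the ReLU activations are assumptions; only the layer sizes (784 -> 128 -> 64 -> 10) come from the list above.

```python
import numpy as np

# Sketch of the MNIST MLP's forward pass. Initialization scheme and
# ReLU activations are assumptions for illustration; layer sizes match
# the architecture listed above.
rng = np.random.default_rng(0)

def linear(x, in_dim, out_dim):
    """A dense layer with (assumed) He-style random weights."""
    w = rng.standard_normal((in_dim, out_dim)) * np.sqrt(2.0 / in_dim)
    b = np.zeros(out_dim)
    return x @ w + b

x = rng.standard_normal((1, 28, 28)).reshape(1, 784)  # 28x28 -> 784
h1 = np.maximum(linear(x, 784, 128), 0.0)             # 784 -> 128
h2 = np.maximum(linear(h1, 128, 64), 0.0)             # 128 -> 64
logits = linear(h2, 64, 10)                           # 64 -> 10
print(logits.shape)  # (1, 10)
```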
We did not have to use Zigrad's modules to write this network at all, since Zigrad is backed by a capable autograd engine. Even when the same network is constructed dynamically through the autograd backend, Zigrad can still trace the graph and render it.
Note: since the graph is generated from autograd information, node labels come from tensor names, which we set for the sake of the diagram.
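The note above describes a general pattern in tape-based autograd systems. Here is a toy Python sketch of the idea, which is not Zigrad's API: operations record their inputs as they run, and when the graph is rendered (here as Graphviz DOT), each node is labeled with a user-assigned tensor name when one exists, falling back to the producing op.

```python
# Toy trace-based autograd graph (not Zigrad's API): ops record their
# parent tensors, and rendering labels nodes by tensor name when set.
class Tensor:
    def __init__(self, value, name=None, parents=(), op=None):
        self.value = value
        self.name = name          # user-set label, used in the diagram
        self.parents = parents    # tensors this one was computed from
        self.op = op              # op that produced this tensor

    def __add__(self, other):
        return Tensor(self.value + other.value, parents=(self, other), op="add")

    def __mul__(self, other):
        return Tensor(self.value * other.value, parents=(self, other), op="mul")

def to_dot(t, lines=None, seen=None):
    """Walk the recorded graph, emitting Graphviz DOT node/edge lines.

    Nodes are labeled by tensor name if one was set, else by op name.
    """
    if lines is None:
        lines, seen = [], set()
    if id(t) in seen:
        return lines
    seen.add(id(t))
    label = t.name or t.op or "leaf"
    lines.append(f'  n{id(t)} [label="{label}"];')
    for p in t.parents:
        to_dot(p, lines, seen)
        lines.append(f"  n{id(p)} -> n{id(t)};")
    return lines

x = Tensor(2.0, name="x")
w = Tensor(3.0, name="w")
y = x * w + Tensor(1.0, name="b")
y.name = "y"  # naming the output tensor labels its node in the diagram
print("digraph G {\n" + "\n".join(to_dot(y)) + "\n}")
```

Piping the printed DOT through `dot -Tpng` would produce a diagram like the one shown here, with `x`, `w`, `b`, and `y` as node labels.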
A lot is planned, and we are hoping for support from the Zig community so we can accomplish some of the more ambitious goals.