This repo contains an incremental sequence of notebooks designed to teach deep learning, MXNet, and the gluon interface. Our goal is to leverage the strengths of Jupyter notebooks to present prose, graphics, equations, and code together in one place. If we're successful, the result will be a resource that can serve simultaneously as a book, as course material, as a prop for live tutorials, and as a source of useful code to plagiarise (with our blessing). To our knowledge, no existing resource either (1) teaches the full breadth of concepts in modern deep learning or (2) interleaves an engaging textbook with runnable code. We'll find out by the end of this venture whether or not that void exists for a good reason.
Another unique aspect of this book is its authorship process. We are developing this resource fully in public view and are making it available for free in its entirety. While the book has a few primary authors to set the tone and shape the content, we welcome contributions from the community and hope to coauthor chapters and entire sections with experts and community members. Already we've received contributions ranging from typo corrections to full working examples.
Throughout this book, we rely upon MXNet to teach core concepts, advanced topics, and a full complement of applications. MXNet is widely used in production environments owing to its strong reputation for speed. Now, with gluon, MXNet's new imperative interface (currently in alpha), doing research in MXNet is easy as well.
To run these notebooks, you'll want to build MXNet from source. Fortunately, this is easy (especially on Linux) if you follow these instructions. You'll also want to install Jupyter and use Python 3 (because it's 2017).
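Once MXNet is built, the remaining setup is small. A minimal sketch (assuming `pip3` and a Python 3 environment are available):

```shell
# Install the Jupyter notebook server for Python 3.
pip3 install --user jupyter
# Sanity-check that the default python3 really is Python 3.
python3 -c "import sys; assert sys.version_info.major == 3"
# Then, from the repo root, launch the notebooks with: jupyter notebook
```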
The authors (and others) are increasingly giving talks based on the content in this book. Some of the slide decks (like the 6-hour KDD 2017 tutorial) are gigantic, so we're collecting them separately in this repo. Contribute there if you'd like to share tutorials or course material based on this book.
As we write the book, large stable sections are simultaneously being translated into Chinese, available in a web version and via GitHub source.
Many of the tutorials appear in two versions: one implemented from scratch and one built on the high-level gluon interface (e.g. `gluon.Block` and `gluon.nn.Sequential()`).

* Chapter 1: Crash course
* Chapter 2: Introduction to supervised learning
* Chapter 3: Deep neural networks (DNNs)
* Chapter 4: Convolutional neural networks (CNNs)
* Chapter 5: Recurrent neural networks (RNNs)
* Chapter 6: Optimization
* Chapter 7: Distributed & high-performance learning
* Chapter 8: Computer vision (CV)
* Chapter 9: Natural language processing (NLP)
* Chapter 10: Audio processing
* Chapter 11: Recommender systems
* Chapter 12: Time series
* Chapter 13: Unsupervised learning
* Chapter 14: Generative adversarial networks (GANs)
* Chapter 15: Adversarial learning
* Chapter 16: Tensor methods
* Chapter 17: Deep reinforcement learning (DRL)
* Chapter 18: Variational methods and uncertainty
* Chapter 19: Graph neural networks
We've designed these tutorials so that you can traverse the curriculum in more than one way. If you just want to learn gluon, you can skip the from-scratch tutorials and go straight to the production-like code using the high-level gluon front end.

This evolving creature is a collaborative effort (see the contributors tab). The lead writers, assimilators, and coders include:
In creating these tutorials, we have drawn inspiration from some of the resources that allowed us to learn deep/machine learning with other libraries in the past. These include: