ezyang / pytorch-unattached

Tensors and Dynamic neural networks in Python with strong GPU acceleration
http://pytorch.org

Unrolled RNNs expose compiler performance problems #245

Open ezyang opened 7 years ago

ezyang commented 7 years ago

We should expect users to try slapping torch.jit.compile on the top level of an RNN, which means that however far they unroll the RNN, the compiler will run over the entire unrolled trace. On real-world RNNs, compiler performance may therefore actually be a problem.
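A minimal sketch of the usage pattern in question, assuming the experimental torch.jit.compile decorator API of this era (the `nderivs` argument and the `PixelRNN` module are illustrative assumptions, not bnlstm itself):

```python
import torch
import torch.nn as nn

# Sketch: a user wraps the *top level* of an RNN in torch.jit.compile, so the
# recorded trace contains one copy of the cell per time step. The decorator
# form and nderivs argument follow the experimental JIT API of this period.
@torch.jit.compile(nderivs=1)
class PixelRNN(nn.Module):
    def __init__(self, hidden_size=128):
        super().__init__()
        self.hidden_size = hidden_size
        self.cell = nn.LSTMCell(1, hidden_size)

    def forward(self, pixels):
        # pixels: (batch, 784) -- one MNIST image fed pixel by pixel, so the
        # unrolled trace contains 784 cell invocations.
        batch = pixels.size(0)
        h = torch.zeros(batch, self.hidden_size)
        c = torch.zeros(batch, self.hidden_size)
        for t in range(pixels.size(1)):
            h, c = self.cell(pixels[:, t].unsqueeze(1), (h, c))
        return h
```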

For example, I took bnlstm (https://github.com/pytorch/benchmark/blob/master/benchmarks/bnlstm.py) and ran compile on the top-level model. Because bnlstm is fed the MNIST data set pixel by pixel, the unrolled trace contains 784 cell applications. Compiling the trace took 17s, and that is a conservative estimate, because compilation failed midway through for unrelated reasons.
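A rough way to observe the cost, reusing the hypothetical `PixelRNN` from the sketch above (this assumes the first call triggers tracing and compilation and that the compiled trace is cached afterwards; the exact trigger point depends on the JIT's internals):

```python
import time

model = PixelRNN(hidden_size=128)
images = torch.randn(32, 784)  # a batch of MNIST images flattened to 784 pixels

start = time.time()
model(images)                  # first call: records and compiles the 784-step trace
print("first call (trace + compile): %.1fs" % (time.time() - start))

start = time.time()
model(images)                  # later calls should hit the cached compiled trace
print("subsequent call: %.3fs" % (time.time() - start))
```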

We don't necessarily have to fix this, but it is something we will need to communicate to users.