nicola-decao / BNAF

Pytorch implementation of Block Neural Autoregressive Flow
http://arxiv.org/abs/1904.04676
MIT License

scalability #3

Closed Char-Aznable closed 4 years ago

Char-Aznable commented 4 years ago

Hi @nicola-decao, very nice work! I am thinking of using BNAF to do variational inference where the posterior is over a space of a few thousand to tens of thousands of dimensions. I wonder if the current implementation can scale up to that many dimensions. My concern is that the model might not fit into GPU memory. Can you provide an estimate of the space complexity of a given architecture consisting of, say, n stacked flows of m hidden layers each? I know you gave an estimate of the number of parameters in Table 2 of the paper, but how does that translate into a memory requirement? I appreciate your insight on this because I am more of a TensorFlow person, so trying this out in PyTorch will likely take me a while. Thanks in advance!

nicola-decao commented 4 years ago

Hi @Char-Aznable, a single flow block has space complexity O(m * k^2), where k is your data dimensionality and m is the number of hidden layers. So for n stacked flows it would be O(n * m * k^2). I hope it helps.
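To get a concrete feel for the O(n * m * k^2) estimate above, here is a rough back-of-the-envelope parameter and memory calculator. It is only a sketch: it assumes hidden layers of width `a * k` (a hypothetical width multiplier, not specified in this thread) and counts only weight matrices in float32, so exact numbers will differ from the actual BNAF implementation.

```python
def bnaf_param_estimate(k, m, n, a=2, bytes_per_param=4):
    """Rough parameter/memory estimate for a BNAF-style stack.

    k: data dimensionality
    m: number of hidden layers per flow
    n: number of stacked flows
    a: hidden-width multiplier (assumption; hidden size = a * k)
    bytes_per_param: 4 for float32
    """
    h = a * k
    # input layer (k -> h), (m - 1) hidden layers (h -> h), output layer (h -> k)
    params_per_flow = k * h + (m - 1) * h * h + h * k
    total_params = n * params_per_flow
    return total_params, total_params * bytes_per_param


# Example: k = 1000 dims, m = 2 hidden layers, n = 3 flows, width 2k
params, bytes_needed = bnaf_param_estimate(1000, 2, 3)
print(f"{params:,} params, ~{bytes_needed / 1e6:.0f} MB")  # 24,000,000 params, ~96 MB
```

Note the k^2 dependence: going from k = 1,000 to k = 10,000 multiplies the weight memory by 100, so at tens of thousands of dimensions the parameters alone reach tens of gigabytes, before counting activations and optimizer state.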