Dear All,
I wanted to compute gradients of the log-probability density with respect to the inputs, but apparently the gradients are not propagated to the inputs and the result is `None`. Would you suggest a fix or workaround? Thanks!
```python
import torch
import torch.nn.functional as F
from nflows import transforms, distributions, flows


def main():
    flow = flows.MaskedAutoregressiveFlow(
        features=1,
        hidden_features=16,
        num_layers=1,
        num_blocks_per_layer=2,
        use_residual_blocks=False,
        use_random_masks=False,
        use_random_permutations=False,
        activation=F.elu,
        dropout_probability=0.0,
        batch_norm_within_layers=False,
        batch_norm_between_layers=False,
    )


if __name__ == '__main__':
    main()
```
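For context, this is roughly how I try to obtain the gradients (a minimal sketch of my usage, continuing inside `main()` after building `flow`; the `inputs` tensor, the batch size, and the `torch.autograd.grad` call are illustrative, not part of the snippet above):

```python
    # Sketch of the gradient computation inside main(), after constructing
    # `flow` above (assumed usage; shapes and names are illustrative):
    inputs = torch.randn(8, 1, requires_grad=True)  # leaf tensor with grads enabled
    log_prob = flow.log_prob(inputs)                # log density under the flow
    # Differentiate the summed log density w.r.t. the inputs.
    # allow_unused=True returns None instead of raising when the autograd
    # graph does not reach `inputs` -- this is where I see `None`.
    grads = torch.autograd.grad(log_prob.sum(), inputs, allow_unused=True)[0]
    print(grads)
```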