tensorflow / probability

Probabilistic reasoning and statistical analysis in TensorFlow
https://www.tensorflow.org/probability/
Apache License 2.0

Conditional input with multiple flows #1801

Closed riccardo-seppi closed 8 months ago

riccardo-seppi commented 8 months ago

I am struggling to implement a network with multiple flows, making sure that each flow uses a conditional input. I am following the example with conditionals here: https://www.tensorflow.org/probability/api_docs/python/tfp/bijectors/AutoregressiveNetwork

This is how I modified it, but it seems that the 'conditional_input' is not passed to each MaskedAutoregressiveFlow correctly:

import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

tfk = tf.keras
tfkl = tf.keras.layers

tfd = tfp.distributions
tfb = tfp.bijectors


def create_maf(hidden_units):
    return tfb.MaskedAutoregressiveFlow(
        shift_and_log_scale_fn=tfb.AutoregressiveNetwork(
            params=2,
            hidden_units=hidden_units,
            activation='sigmoid',
            event_shape=(1,),
            conditional=True,
            conditional_event_shape=(1,)
        )
    )


def train_maf(x, c, num_flows=2, hidden_units=[2, 2], epochs=3, batch_size=25):
    maf_bijectors = [create_maf(hidden_units) for _ in range(num_flows)]
    bijector_chain = tfb.Chain(maf_bijectors)
    base_distribution = tfd.Sample(tfd.Normal(loc=0., scale=1.), sample_shape=[1])
    distribution = tfd.TransformedDistribution(
        distribution=base_distribution,
        bijector=bijector_chain
    )

    x_ = tfkl.Input(shape=(1,), dtype=tf.float32)
    c_ = tfkl.Input(shape=(1,), dtype=tf.float32)
    log_prob_ = distribution.log_prob(
        x_, bijector_kwargs={'conditional_input': c_}
    )
    model = tfk.Model([x_, c_], log_prob_)

    model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.1),
                  loss=lambda _, log_prob: -log_prob)

    model.fit(x=[x, c],
              y=np.zeros((len(x), 0), dtype=np.float32),
              batch_size=batch_size,
              epochs=epochs,
              steps_per_epoch=len(x) // batch_size,
              shuffle=True,
              verbose=True)

    return distribution

Generate data as the mixture of two distributions.

n = 2000
c = np.r_[np.zeros(n//2), np.ones(n//2)]
mean_0, mean_1 = 0, 5
x = np.r_[
    np.random.randn(n//2).astype(dtype=np.float32) + mean_0,
    np.random.randn(n//2).astype(dtype=np.float32) + mean_1
]

Density estimation with MADE.

distribution = train_maf(x, c, num_flows=4, hidden_units=[2, 2], epochs=3)

Use the fitted distribution to sample, conditioned on c = 1.

n_samples = 1000
cond = 1
samples = distribution.sample(
    (n_samples,),
    bijector_kwargs={'conditional_input': cond * np.ones((n_samples, 1))}
)

The output I get:

ValueError: Exception encountered when calling layer "autoregressive_network" (type AutoregressiveNetwork).

conditional_input must be passed as a named argument

Call arguments received by layer "autoregressive_network" (type AutoregressiveNetwork):
  • x=tf.Tensor(shape=(None, 1), dtype=float32)
  • conditional_input=None
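
As far as I can tell, tfb.Chain only forwards bijector_kwargs to an inner bijector when the kwargs are keyed by that bijector's name, so a flat {'conditional_input': c_} is never routed to the inner MaskedAutoregressiveFlow bijectors and each AutoregressiveNetwork ends up being called with conditional_input=None. A minimal sketch of the nested structure the chain appears to expect, assuming the flows were built with explicit, hypothetical names 'maf_0' and 'maf_1':

log_prob_ = distribution.log_prob(
    x_,
    bijector_kwargs={
        'maf_0': {'conditional_input': c_},  # routed to the bijector named 'maf_0'
        'maf_1': {'conditional_input': c_},  # routed to the bijector named 'maf_1'
    }
)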

riccardo-seppi commented 8 months ago

I solved it by giving each MaskedAutoregressiveFlow a name with a common root, e.g. maf0, maf1, maf2, ... and building the bijector_kwargs with the make_bijector_kwargs function provided here: https://github.com/tensorflow/probability/issues/1006#issuecomment-663141106. I also found similar problems here: https://github.com/tensorflow/probability/issues/1159 and https://github.com/tensorflow/probability/issues/1410.
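
For reference, a minimal sketch of that workaround, reusing the np/tf/tfd/tfb/tfkl aliases from the snippet above; the 'maf_{i}' names and hyperparameters are illustrative, and make_bijector_kwargs is adapted from the comment linked above:

import re

def create_named_maf(hidden_units, name):
    # Same flow as before, but with an explicit name so kwargs can be routed to it.
    return tfb.MaskedAutoregressiveFlow(
        name=name,
        shift_and_log_scale_fn=tfb.AutoregressiveNetwork(
            params=2, hidden_units=hidden_units, activation='sigmoid',
            event_shape=(1,), conditional=True, conditional_event_shape=(1,)))

maf_bijectors = [create_named_maf([2, 2], name=f'maf_{i}') for i in range(4)]
distribution = tfd.TransformedDistribution(
    distribution=tfd.Sample(tfd.Normal(loc=0., scale=1.), sample_shape=[1]),
    bijector=tfb.Chain(maf_bijectors))

# Adapted from the comment linked above: recursively build a nested kwargs dict
# keyed by each inner bijector's name, matching the names against regex patterns.
def make_bijector_kwargs(bijector, name_to_kwargs):
    if hasattr(bijector, 'bijectors'):
        return {b.name: make_bijector_kwargs(b, name_to_kwargs)
                for b in bijector.bijectors}
    for name_regex, kwargs in name_to_kwargs.items():
        if re.match(name_regex, bijector.name):
            return kwargs
    return {}

# Training graph: route the conditional input to every bijector named 'maf...'.
x_ = tfkl.Input(shape=(1,), dtype=tf.float32)
c_ = tfkl.Input(shape=(1,), dtype=tf.float32)
log_prob_ = distribution.log_prob(
    x_,
    bijector_kwargs=make_bijector_kwargs(
        distribution.bijector, {'maf.': {'conditional_input': c_}}))

# Sampling routes the conditional input the same way.
samples = distribution.sample(
    (1000,),
    bijector_kwargs=make_bijector_kwargs(
        distribution.bijector,
        {'maf.': {'conditional_input': np.ones((1000, 1), dtype=np.float32)}}))

Keying the kwargs by bijector name is what appears to let tfb.Chain decide which inner bijector receives which arguments, which is why the unkeyed dict in the original snippet never reached the AutoregressiveNetwork.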