Open 0xtristan opened 3 years ago
I do not know how to fix it, but the reason for the error can be seen in this simple example:
import numpy as np
import tensorflow as tf

dtype = tf.bfloat16.as_numpy_dtype
vector = np.array([8193], dtype=dtype)
np.finfo(vector.dtype)
results in
ValueError: data type <class 'bfloat16'> not inexact
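For context, np.finfo only accepts dtypes that NumPy classifies as inexact (floating-point) types, and the bfloat16 scalar type TensorFlow registers with NumPy is not a subclass of np.inexact, hence the error above. A minimal sketch of a fallback helper (the name finfo_value and the hard-coded constants are mine, derived from the bfloat16 layout of 1 sign bit, 8 exponent bits and 7 explicit mantissa bits):

import numpy as np
import tensorflow as tf

# Hypothetical fallback table; values follow from the bfloat16 bit layout.
_BFLOAT16_FINFO = {
    "eps": 2.0 ** -7,                       # gap between 1.0 and the next representable value
    "tiny": 2.0 ** -126,                    # smallest positive normal (same exponent range as float32)
    "max": (2.0 - 2.0 ** -7) * 2.0 ** 127,  # largest finite value
}

def finfo_value(dtype, field):
    """np.finfo(dtype).<field>, with a hard-coded fallback for bfloat16."""
    np_dtype = np.dtype(dtype)
    if np.issubdtype(np_dtype, np.inexact):
        return getattr(np.finfo(np_dtype), field)
    if np_dtype == np.dtype(tf.bfloat16.as_numpy_dtype):
        return _BFLOAT16_FINFO[field]
    raise TypeError("finfo not defined for {}".format(np_dtype))

print(finfo_value(np.float32, "eps"))                  # ~1.19e-07
print(finfo_value(tf.bfloat16.as_numpy_dtype, "eps"))  # 0.0078125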
Hi. Are there any updates on how to fix this issue? Thanks!
Hi,
I also faced this issue when using the bfloat16 data type with the DenseFlipout layer. Below is the error:
ValueError: data type <class 'bfloat16'> not inexact
As @ak2LT mentioned (thank you for finding that out), this occurs due to passing bfloat16 as the data type to the following line of code:
np.finfo(dtype.as_numpy_dtype).eps
which is in default_loc_scale_fn in tfp.python.layers.util.
The issue can be temporarily fixed by hard-coding the epsilon value for bfloat16 like this:
eps = tf.constant(2 ** -7, dtype=tf.bfloat16)
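For anyone who wants to double-check that constant: 2**-7 matches the spacing you can observe directly in bfloat16 arithmetic (a quick sketch, nothing TFP-specific):

import tensorflow as tf

one = tf.constant(1.0, dtype=tf.bfloat16)
step = tf.constant(2.0 ** -7, dtype=tf.bfloat16)       # candidate machine epsilon
half_step = tf.constant(2.0 ** -8, dtype=tf.bfloat16)

# With 7 explicit mantissa bits, 1.0 + 2**-7 is the next representable
# bfloat16 value (1.0078125), while 1.0 + 2**-8 rounds back down to 1.0.
print(one + step)       # next value above 1.0
print(one + half_step)  # rounds back to 1.0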
Hi, I was trying to train a model using a RelaxedOneHotCategorical wrapped in a DistributionLambda layer on TPU. Unfortunately, when using the bfloat16 TPU mixed-precision option, I run into the following error:
ValueError: data type <class 'bfloat16'> not inexact
This seems to occur from within the sampling function of the distribution:

    275     uniform = samplers.uniform(
    276         shape=uniform_shape,
--> 277         minval=np.finfo(dtype_util.as_numpy_dtype(self.dtype)).tiny,
    278         maxval=1.,
    279         dtype=self.dtype,
Minimalist example to reproduce:
tfpl.DistributionLambda(lambda x: tfd.RelaxedOneHotCategorical(0.0, x))(tf.ones(1))
Any help would be appreciated, thanks.
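In case it is useful while waiting for a proper fix: one workaround for this class of error is to keep the distribution itself in float32 and let only the surrounding layers run in bfloat16, by casting the parameters inside the DistributionLambda. A rough sketch, assuming the Keras mixed_bfloat16 policy is what makes the incoming logits bfloat16 (the Dense layer, shapes, and temperature value are placeholders of my own, not from the original model):

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
tfpl = tfp.layers

tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")

inputs = tf.keras.Input(shape=(8,))
logits = tf.keras.layers.Dense(8)(inputs)  # bfloat16 output under the mixed policy

# Cast back to float32 before building the distribution, so np.finfo inside
# the sampler only ever sees a NumPy-supported floating-point type.
dist = tfpl.DistributionLambda(
    lambda t: tfd.RelaxedOneHotCategorical(
        temperature=0.5, logits=tf.cast(t, tf.float32)))(logits)

model = tf.keras.Model(inputs, dist)
sample = model(tf.ones([2, 8])).sample()  # sampling now happens in float32

The sample comes back as float32, so it may need an explicit cast to bfloat16 if downstream layers expect the policy's compute dtype.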