But these introductory examples are buggy, and as a deep-learning beginner it is not obvious to me how to correct even simple bugs.
The notebooks are quite old and no longer work.
**System information**
**OS Platform and Distribution**: macOS Big Sur (but the same happens on Binder)
**astroNN (Build or Version)**: master
**Did you try the latest astroNN commit?**: Yes, I did a git clone from master
**TensorFlow installed from (source or binary, official build?)**: pip install
**TensorFlow version**: 2.12.0
**Python version**: 3.9.16
**Exact command/script to reproduce (if applicable)**:
**Describe the problem**
Among the 4 examples:
Uncertainty_Demo_MNIST.ipynb --> OK
Uncertainty_Demo_quad.ipynb --> Does not work
Uncertainty_Demo_x_sinx.ipynb --> Does not work
Uncertainty_Demo_x_sinx_tfp.ipynb --> Does not work
After a minor NumPy format correction in Uncertainty_Demo_quad.ipynb, I found that the generator generate_train_batch(x, y, y_err) is not accepted by model.fit(); moreover, the proposed model.fit_generator() is no longer accepted by TensorFlow.
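For what it's worth, in TensorFlow 2.x the removed `Model.fit_generator()` is replaced by passing the data (or a `tf.data.Dataset`) straight to `Model.fit()`. A minimal sketch of that pattern, assuming nothing about astroNN's actual API — the toy quadratic data and tiny model here are purely illustrative:

```python
import numpy as np
import tensorflow as tf

# Toy data standing in for the notebook's (x, y); shapes are illustrative.
x = np.random.uniform(-1.0, 1.0, (256, 1)).astype("float32")
y = (x ** 2).astype("float32")

# tf.data replaces the old Python-generator + fit_generator() pattern:
# Model.fit() consumes the Dataset directly in TensorFlow 2.x.
dataset = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(256).batch(32)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# fit() accepts the Dataset where fit_generator() used to take a generator.
history = model.fit(dataset, epochs=2, verbose=0)
```

A custom Python generator can also be wrapped with `tf.data.Dataset.from_generator` if per-batch logic (e.g. the `y_err` handling) must stay in Python.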
In the section *Third, use a single model to get both epistemic and aleatoric uncertainty with variational inference*:
I tried to bypass the generator by providing the data directly, without any generator, but the data format was not accepted either. I do not know TensorFlow deeply enough to understand the resulting format error:
TypeError: You are passing KerasTensor(type_spec=TensorSpec(shape=(), dtype=tf.float32, name=None), name='Placeholder:0', description="created by layer 'tf.cast_2'"), an intermediate Keras symbolic input/output, to a TF API that does not allow registering custom dispatchers, such as `tf.cond`, `tf.function`, gradient tapes, or `tf.map_fn`. Keras Functional model construction only supports TF API calls that *do* support dispatching, such as `tf.math.add` or `tf.reshape`. Other APIs cannot be called directly on symbolic Keras inputs/outputs. You can work around this limitation by putting the operation in a custom Keras layer `call` and calling that layer on this symbolic input/output.
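The error message itself suggests the workaround: move the non-dispatchable TF op inside a custom layer's `call()` instead of applying it to a symbolic Keras input during functional model construction. A minimal sketch of that pattern — the `MapSquare` layer is a made-up example, not code from the notebook:

```python
import tensorflow as tf

# tf.map_fn is one of the ops the error names as non-dispatchable on
# symbolic KerasTensors. Wrapping it in a Layer's call() defers it until
# the model runs on concrete tensors, which is exactly the workaround
# the TypeError recommends.
class MapSquare(tf.keras.layers.Layer):
    def call(self, inputs):
        return tf.map_fn(tf.square, inputs)

inp = tf.keras.Input(shape=(3,))
out = MapSquare()(inp)          # fine: the op now lives inside a layer
model = tf.keras.Model(inp, out)

result = model(tf.constant([[1.0, 2.0, 3.0]]))
```

Calling `tf.map_fn(tf.square, inp)` directly on the `Input` tensor would raise the same TypeError as above.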
I hope you can quickly fix these simple examples so that I can start from a simple working one. Many thanks.
Thanks for the bug report! I have just pushed a fix for those notebooks; please check whether they work for you now, and close this issue if they do.