google-research / data-driven-advection


Errors running advection_1d example #10

Closed ghost closed 5 years ago

ghost commented 5 years ago

I am getting errors while running the advection_1d example given in this Python notebook. The tensorflow version I have is the same as the version shown in that example:

tf.__version__, tf.keras.__version__
('1.13.1', '2.2.4-tf')

The error I am getting is at the following line, just before the neural network is trained with the data:

model_nn.compile(optimizer=tf.keras.optimizers.Adam(0.001 * 1), loss='mae', metrics=[tf.keras.metrics.RootMeanSquaredError()])

AttributeError: module 'tensorflow._api.v1.keras.metrics' has no attribute 'RootMeanSquaredError'

I checked the tf.keras.metrics module and it does not have any class called RootMeanSquaredError, which is the reason for the error. After some searching, I changed the metrics argument in the line above to metrics=["RootMeanSquaredError"], which gets rid of that particular error, but then I get another error on the next line:

history = model_nn.fit(train_input, train_output, epochs=80, batch_size=32, verbose=1, shuffle=True)

ValueError: When running a model in eager execution, the optimizer must be an instance of tf.train.Optimizer. Received: <tensorflow.python.keras.optimizers.Adam object at 0x7f0c885da400>

The above error does not go away whether I enable eager execution or not. I was curious how the example in that notebook ran with the exact same version of tensorflow that I am using (with Python 3.6). I would appreciate your help with this.
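
For reference, this is roughly how I toggled eager execution at the top of the notebook (a sketch of what I tried; in TF 1.x, tf.enable_eager_execution must be called before any other TensorFlow operations):

import tensorflow as tf

# Enable eager execution (TF 1.x API); must run before any graph ops are created.
tf.enable_eager_execution()

# Confirm the mode actually changed.
print(tf.executing_eagerly())  # True when eager mode is on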

JiaweiZhuang commented 5 years ago

Thanks for the report. I can reproduce the same error with tensorflow 1.13.1. Updating to tensorflow 1.14.0 solved the problem. Here are the complete installation commands:

conda create -n pde python=3.6
conda activate pde
pip install tensorflow==1.14.0
pip install git+https://github.com/google-research/data-driven-pdes

Alternatively, you can just drop the metrics argument completely (it won't affect training), and simply use optimizer='adam' (the original code just uses the default learning rate).
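
For example, the compile and fit calls could be simplified to something like this (a sketch, assuming model_nn and the training arrays from the notebook; optimizer='adam' uses Keras's default learning rate of 0.001, which matches 0.001 * 1 in the original code):

# Drop the metrics argument and pass the optimizer by name.
model_nn.compile(optimizer='adam', loss='mae')

# Training proceeds as in the notebook.
history = model_nn.fit(train_input, train_output, epochs=80, batch_size=32, verbose=1, shuffle=True)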

ghost commented 5 years ago

Thank you, @JiaweiZhuang. Changing the tensorflow version as suggested gets past both errors. However, before changing the version, I also tried the alternative you suggested; although it solves the metrics argument issue, the second error I mentioned earlier still remains.

JiaweiZhuang commented 5 years ago

the second error I mentioned earlier still remains.

Ah, you are right. I also got AttributeError: 'Adam' object has no attribute 'apply_gradients' when using optimizer='adam' with tensorflow 1.13.1.

Maybe this is related to the migration to TF 2.0 (e.g. tensorflow/tensorflow#27386). We should probably require tensorflow==1.14.0 for now.
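
In the meantime, a quick guard at the top of the notebook can fail fast on older installs (a minimal sketch, assuming the tensorflow>=1.14.0 requirement above):

import tensorflow as tf
from distutils.version import LooseVersion

# Fail early if the installed TensorFlow predates the version the notebook was tested with.
assert LooseVersion(tf.__version__) >= LooseVersion('1.14.0'), (
    'tensorflow>=1.14.0 is required, found ' + tf.__version__)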

ghost commented 5 years ago

Thank you.