tianheyu927 / mopo

Code for MOPO: Model-based Offline Policy Optimization
MIT License

Any plan to support tensorflow_gpu 2 or 2.7? #13

Open StaminaTang opened 2 years ago

StaminaTang commented 2 years ago

Hello! Do you have any plan to support tensorflow-gpu 2?

Because of my Python and CUDA versions, I have to use mopo with tensorflow-gpu 2.7, so I have made some routine modifications to port it from 1.14.0 to 2.7. Before I added a tape, it said a tape is required:

raise ValueError("tape is required when a Tensor loss is passed. " ValueError: tape is required when a Tensor loss is passed. Received: loss=Tensor("BNN_1/add_9:0", shape=(), dtype=float32), tape=None.

But once I added var_list=self.optvars, tape=tf.GradientTape() to the minimize call in bnn.py, it raised the error below:

File "/code/com//mopo/mopo/models/bnn.py", line 256, in finalize self.train_op = self.optimizer.minimize(train_loss, var_list=self.optvars, tape=tf.GradientTape()) File "/root/miniconda3/envs/mopo/lib/python3.9/site-packages/keras/optimizer_v2/optimizer_v2.py", line 532, in minimize return self.apply_gradients(grads_and_vars, name=name) File "/root/miniconda3/envs/mopo/lib/python3.9/site-packages/keras/optimizer_v2/optimizer_v2.py", line 633, in apply_gradients grads_and_vars = optimizer_utils.filter_empty_gradients(grads_and_vars) File "/root/miniconda3/envs/mopo/lib/python3.9/site-packages/keras/optimizer_v2/utils.py", line 73, in filter_empty_gradients raise ValueError(f"No gradients provided for any variable: {variable}. "

ValueError: No gradients provided for any variable: (['BNN/Layer0_mean/FC_weights:0', 'BNN/Layer0_mean/FC_biases:0', 'BNN/Layer1_mean/FC_weights:0', 'BNN/Layer1_mean/FC_biases:0', 'BNN/Layer2_mean/FC_weights:0', 'BNN/Layer2_mean/FC_biases:0', 'BNN/Layer3_mean/FC_weights:0', 'BNN/Layer3_mean/FC_biases:0', 'BNN/Layer4_mean/FC_weights:0', 'BNN/Layer4_mean/FC_biases:0', 'BNN/Layer0_var/FC_weights:0', 'BNN/Layer0_var/FC_biases:0', 'BNN/max_log_var:0', 'BNN/min_log_var:0'],). Provided grads_and_vars is ((None, <tf.Variable 'BNN/Layer0_mean/FC_weights:0' shape=(7, 14, 200) dtype=float32>), (None, <tf.Variable 'BNN/Layer0_mean/FC_biases:0' shape=(7, 1, 200) dtype=float32>), (None, <tf.Variable 'BNN/Layer1_mean/FC_weights:0' shape=(7, 200, 200) dtype=float32>), (None, <tf.Variable 'BNN/Layer1_mean/FC_biases:0' shape=(7, 1, 200) dtype=float32>), (None, <tf.Variable 'BNN/Layer2_mean/FC_weights:0' shape=(7, 200, 200) dtype=float32>), (None, <tf.Variable 'BNN/Layer2_mean/FC_biases:0' shape=(7, 1, 200) dtype=float32>), (None, <tf.Variable 'BNN/Layer3_mean/FC_weights:0' shape=(7, 200, 200) dtype=float32>), (None, <tf.Variable 'BNN/Layer3_mean/FC_biases:0' shape=(7, 1, 200) dtype=float32>), (None, <tf.Variable 'BNN/Layer4_mean/FC_weights:0' shape=(7, 200, 267) dtype=float32>), (None, <tf.Variable 'BNN/Layer4_mean/FC_biases:0' shape=(7, 1, 267) dtype=float32>), (None, <tf.Variable 'BNN/Layer0_var/FC_weights:0' shape=(7, 200, 267) dtype=float32>), (None, <tf.Variable 'BNN/Layer0_var/FC_biases:0' shape=(7, 1, 267) dtype=float32>), (None, <tf.Variable 'BNN/max_log_var:0' shape=(1, 267) dtype=float32>), (None, <tf.Variable 'BNN/min_log_var:0' shape=(1, 267) dtype=float32>)).
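
I suspect the gradients are all None because tf.GradientTape() passed this way is a brand-new tape that never recorded the forward computation that produced train_loss, so it has nothing to differentiate. In eager TF2 the tape has to be open while the loss is computed; a minimal sketch with the same stand-in model as above (again not MOPO's graph-mode bnn.py, which would need a larger rewrite to run eagerly):

```python
import tensorflow as tf

# Stand-in model and data for illustration only, not MOPO's BNN.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.build((None, 4))
optimizer = tf.keras.optimizers.Adam(1e-3)

x = tf.random.normal([32, 4])
y = tf.random.normal([32, 1])

# The tape must be open *while* the loss is computed; otherwise it records
# nothing and every gradient comes back as None.
with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(model(x) - y))
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
```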

Do you have any idea how to solve this problem? And do you have any plan to support tensorflow-gpu 2?

Thank you very much!