tensorflow / agents

TF-Agents: A reliable, scalable and easy to use TensorFlow library for Contextual Bandits and Reinforcement Learning.
Apache License 2.0

ValueError: Inputs to TanhNormalProjectionNetwork must match the sample_spec.dtype. #508

Open lamhoson opened 4 years ago

lamhoson commented 4 years ago

Does anyone know what is wrong with my use of tf-agents here that triggers this ValueError? ValueError: Inputs to TanhNormalProjectionNetwork must match the sample_spec.dtype. In call to configurable 'SacAgent' (<class 'tf_agents.agents.sac.sac_agent.SacAgent'>)

    self._action_spec = array_spec.BoundedArraySpec(  # https://www.tensorflow.org/agents/api_docs/python/tf_agents/specs/BoundedArraySpec
        shape=(), dtype=np.float64,  # () is a scalar shape
        minimum=ACT_MIN, maximum=ACT_MAX, name='trade action')  # buy at a rate, or sell

    self._observation_spec = array_spec.BoundedArraySpec(
        shape=OBS_SHAPE, dtype=np.float64,
        minimum=[-INVEST_BUDGET, 0], maximum=[sys.float_info.max, weekday['SAT']],
        name='[profit, weekday]')

All variables and agents need to be created under strategy.scope():

with strategy.scope():
    critic_net = critic_network.CriticNetwork(
          (observation_spec, action_spec),
          observation_fc_layer_params=None,
          action_fc_layer_params=None,
          joint_fc_layer_params=critic_joint_fc_layer_params,
          kernel_initializer='glorot_uniform',
          last_kernel_initializer='glorot_uniform')

with strategy.scope():
    actor_net = actor_distribution_network.ActorDistributionNetwork(
      observation_spec,
      action_spec,
      fc_layer_params=actor_fc_layer_params,
      continuous_projection_net=(
          tanh_normal_projection_network.TanhNormalProjectionNetwork))

with strategy.scope():
    train_step = train_utils.create_train_step()
    tf_agent = sac_agent.SacAgent(
        time_step_spec,
        action_spec,
        actor_network=actor_net,
        critic_network=critic_net,
        actor_optimizer=tf.compat.v1.train.AdamOptimizer(
            learning_rate=actor_learning_rate),
        critic_optimizer=tf.compat.v1.train.AdamOptimizer(
            learning_rate=critic_learning_rate),
        alpha_optimizer=tf.compat.v1.train.AdamOptimizer(
            learning_rate=alpha_learning_rate),
        target_update_tau=target_update_tau,
        target_update_period=target_update_period,
        td_errors_loss_fn=tf.math.squared_difference,
        gamma=gamma,
        reward_scale_factor=reward_scale_factor,
        train_step_counter=train_step)    
    tf_agent.initialize()
oars commented 4 years ago

It's unclear from the small piece of the error you pasted, but I suspect you are using ArraySpecs where TensorSpecs are expected. Note that Python components expect ArraySpecs, but all TF components expect TensorSpecs.

You can convert them to tensor specs with tensor_spec.from_spec.

If that's not the actual issue, can you add your whole traceback?