This is similar to #656. But I am making another issue since that issue is still not resolved. Also, @sguada mentioned in #702 that PPO agent can take 1-D action spaces.
I have the following action spec:
BoundedArraySpec(shape=(5,), dtype=dtype('int32'), name='action', minimum=0, maximum=1)
I am trying to use it with a PPO agent, but I keep getting the following error:
ValueError: actor_network output spec does not match action spec:
TensorSpec(shape=(), dtype=tf.int32, name=None)
vs.
BoundedTensorSpec(shape=(5,), dtype=tf.int32, name='action', minimum=array(0, dtype=int32), maximum=array(1, dtype=int32))
Note that actor_distribution_rnn_network.ActorDistributionRnnNetwork, when given the same action_spec, is able to create an output of shape (5, 2).
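To make the mismatch concrete: the action spec calls for samples of shape (5,), i.e. one binary choice per action dimension, while the actor network's output distribution is producing scalar samples. Logits of shape (5, 2) can parameterize five independent 2-way categoricals whose joint sample does have shape (5,). A minimal numpy sketch of that per-dimension sampling (illustrative only, not the TF-Agents API):

```python
import numpy as np

rng = np.random.default_rng(0)

# One 2-way categorical per action dimension: logits of shape (5, 2).
logits = rng.normal(size=(5, 2))

# Softmax over the last axis -> per-dimension class probabilities.
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# Sample one class (0 or 1) per action dimension.
action = np.array([rng.choice(2, p=p) for p in probs], dtype=np.int32)

# The joint sample matches the spec: shape (5,), int32, values in [0, 1].
assert action.shape == (5,)
```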
Any suggestion to resolve this would be highly appreciated.