Closed — HadiSDev closed this issue 4 years ago
Hi, I have an environment which can return an image x and a feature vector f
How can I construct my CNN/FN such that my model can take them both: Model(x, f)
My current Q-CNN is just with the image:
```python
import chainer
import chainer.functions as F
import chainer.links as L
import chainerrl


class QFunction(chainer.Chain):
    def __init__(self, n_history=1, n_action=3):
        super().__init__(
            l1=L.Convolution2D(n_history, 32, ksize=3, stride=2, nobias=False),
            l2=L.Convolution2D(32, 64, ksize=3, stride=2, nobias=False),
            l3=L.Convolution2D(64, 64, ksize=3, stride=2, nobias=False),
            l4=L.Linear(256, 128),
            out=L.Linear(n_action),
        )

    def __call__(self, x, test=False):
        h1 = F.relu(self.l1(x))
        h2 = F.relu(self.l2(h1))
        h3 = F.relu(self.l3(h2))
        h4 = F.relu(self.l4(h3))
        h5 = self.out(h4)
        return chainerrl.action_value.DiscreteActionValue(h5)
```
You can define your Q-function so that its `__call__` takes a tuple `(x, f)` as input.
See the grasping example: https://github.com/chainer/chainerrl/blob/master/examples/grasping/train_dqn_batch_grasping.py#L94