Closed: yixuanhuang98 closed this issue 4 years ago
I'm sorry it took me so long to reply to you. The Box-World env takes a 14x14 image as input; since 14x14 is much smaller than 84x84, I can test my experiments on OSX and run them on an RTX 2080.
Here is the CNN structure I used, which is in utils.py:
import numpy as np
import tensorflow as tf
from stable_baselines.a2c.utils import conv  # assumption: conv is the stable-baselines a2c.utils helper

def deepconcise_cnn(scaled_images, **kwargs):
    """
    input = [H, W, D], H = W
    output = [H-3, W-3, 64]
    :param scaled_images: (TensorFlow Tensor) Image input placeholder
    :param kwargs: (dict) Extra keyword parameters for the convolutional layers of the CNN
    :return: (TensorFlow Tensor) The CNN output layer
    """
    activ = tf.nn.relu
    layer_1 = activ(conv(scaled_images, 'c1', n_filters=32, filter_size=3, stride=1, init_scale=np.sqrt(2), **kwargs))
    layer_2 = activ(conv(layer_1, 'c2', n_filters=64, filter_size=2, stride=1, init_scale=np.sqrt(2), **kwargs))
    return layer_2
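For example, a quick shape check (just a sketch, assuming TF1 graph mode and a 3-channel input; the variable scopes are only there to avoid name clashes between the two calls):

obs_14 = tf.placeholder(tf.float32, [None, 14, 14, 3])
obs_84 = tf.placeholder(tf.float32, [None, 84, 84, 3])
with tf.variable_scope("check_14"):
    print(deepconcise_cnn(obs_14).shape)  # (?, 11, 11, 64) -> 11*11 = 121 entities
with tf.variable_scope("check_84"):
    print(deepconcise_cnn(obs_84).shape)  # (?, 81, 81, 64) -> 81*81 = 6561 entities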
The output is [H-3, W-3, 64], so with an 84x84 input you get 81*81 = 6561 entities; you can design your own CNN to get an output of a suitable size. Multi-head dot-product attention is computed over the entities (the feature-map pixels), where we assume the number of entities equals the number of pixels in the feature map, so the attention matrix is entities x entities. That explains why 6561 appears twice in that shape. Hope this helps :)
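For intuition, the attention logits are computed between every pair of entities, so their shape is [batch, heads, N, N] with N = H*W. A minimal sketch (illustrative only; attention_logits, n_heads and d_k are my names, not code from this repo):

import numpy as np
import tensorflow as tf

def attention_logits(entities, n_heads=2, d_k=64):
    """entities: [B, N, D] flattened feature map, N = H*W.
    Returns scaled dot-product logits of shape [B, n_heads, N, N]."""
    q = tf.layers.dense(entities, n_heads * d_k)
    k = tf.layers.dense(entities, n_heads * d_k)
    b, n = tf.shape(entities)[0], tf.shape(entities)[1]
    q = tf.transpose(tf.reshape(q, [b, n, n_heads, d_k]), [0, 2, 1, 3])  # [B, heads, N, d_k]
    k = tf.transpose(tf.reshape(k, [b, n, n_heads, d_k]), [0, 2, 1, 3])  # [B, heads, N, d_k]
    return tf.matmul(q, k, transpose_b=True) / np.sqrt(d_k)             # [B, heads, N, N]

With an 84x84 input, N = 6561, so a single float32 tensor of shape [512, 2, 6561, 6561] already needs about 512*2*6561*6561*4 bytes, roughly 164 GiB, which is where the OOM comes from.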
Hi, thanks for your reply. I get the point now: your input image is smaller than mine, which is why I run into this memory problem. My research interest mainly lies in the intersection of reinforcement learning and robotics, currently focusing on relational reinforcement learning, which is similar to yours. Would you mind adding me on WeChat so we could discuss further? My WeChat number is 18974259193.
Thanks again for your help! Best, Yixuan
OK, I'll add you on WeChat tomorrow.
Hi, what machine do you use to run these experiments? I used this code on 84x84 images with batch_size = 512 and immediately hit a memory error: "ResourceExhaustedError: OOM when allocating tensor with shape[512,2,6561,6561] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu". Also, I think 6561 comes from multiplying the feature map's height and width, but it seems weird that 6561 appears twice in that shape. Another option would be to use a smaller image or a deeper CNN with a larger stride (sketched below). Do you have some suggestions based on that?
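By a larger-stride CNN I mean something like the following (just a sketch; strided_cnn is a made-up name, and it reuses the same np/tf/conv imports as deepconcise_cnn above):

def strided_cnn(scaled_images, **kwargs):
    """For an 84x84 input this yields a 9x9 feature map, i.e. 81 entities
    instead of 6561, so the attention tensor shrinks to [512, 2, 81, 81]."""
    activ = tf.nn.relu
    layer_1 = activ(conv(scaled_images, 'c1', n_filters=32, filter_size=8, stride=4,
                         init_scale=np.sqrt(2), **kwargs))  # 84 -> 20
    layer_2 = activ(conv(layer_1, 'c2', n_filters=64, filter_size=4, stride=2,
                         init_scale=np.sqrt(2), **kwargs))  # 20 -> 9
    return layer_2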
Thank you!
Best, Yixuan