LuisFMCuriel opened this issue 2 years ago
Hi, I am working with the example code for training a multi-agent env. However, when I create each agent, the network expects a one-dimensional input (assert len(in_features) == 1 — see here for FcNet), while my observation space has 3 dimensions: DataSpace(dtype='int8', shape=(6, 7, 2), low=0, high=1). How can I make this work? Is this network not available for this environment? Should I flatten the input? Thanks.

Unfortunately, at the moment the networks don't support multi-dimensional input. There were a few examples on pixels (multi-dim input), e.g. https://github.com/laszukdawid/ai-traineree/blob/master/examples/space_invaders_pixel_rainbow.py, but I haven't kept it up to date.
You need to convert that input into a single-dimensional tensor. One way is to flatten everything, although in some cases that might not be the best solution. Another approach, as shown in the linked example, is to pass the input through convolution layers and then flatten; a sketch of the flattening route is below.
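For instance, a minimal sketch of the flattening route, assuming the (6, 7, 2) observation from the question above (obs is a placeholder array, not the environment's actual output):

```python
import numpy as np

# Placeholder observation matching DataSpace(shape=(6, 7, 2)) above.
obs = np.zeros((6, 7, 2), dtype=np.int8)

# Flatten to a single 84-element vector that a fully connected
# network (e.g. the FcNet mentioned above) can accept.
flat_obs = obs.reshape(-1)
assert flat_obs.shape == (6 * 7 * 2,)
```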
At the moment I can't check anything. I'll try to do it within the next 24 hours, but I thought you might benefit from an earlier reply :)
Sounds good to me. Is there any reason why you use this specific kernel size?

conv_net = ConvNet(state_dim, hidden_layers=(30, 30), kernel_sze=(16, 8), max_pool_size=(4, 2), stride=(4, 2), device=device)
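For completeness, a rough PyTorch sketch of the conv-then-flatten idea in plain torch.nn (this is not ai-traineree's ConvNet API; the channel count, kernel sizes, and output width are illustrative guesses). One thing worth noting about kernel size: without padding, a kernel cannot exceed the input's spatial dimensions, and the board here is only 6x7, so the (16, 8) kernels from the pixel example would not fit:

```python
import torch
import torch.nn as nn

class ConvThenFlatten(nn.Module):
    """Conv layers over the (6, 7) board, then flatten for a dense head."""

    def __init__(self, in_channels=2, out_features=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # 3x3 kernels with padding=1 preserve the 6x7 spatial size.
        self.head = nn.Linear(32 * 6 * 7, out_features)

    def forward(self, x):
        # x arrives as (batch, 6, 7, 2); conv layers want channels first.
        x = x.permute(0, 3, 1, 2).float()
        x = self.conv(x)
        return self.head(x.flatten(start_dim=1))

net = ConvThenFlatten()
out = net(torch.zeros(1, 6, 7, 2, dtype=torch.int8))  # shape: (1, 64)
```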